New GOP bill would protect AI companies from lawsuits if they offer transparency
Source: NBC
Sen. Cynthia Lummis, R-Wyo., is introducing legislation Thursday that would shield artificial intelligence developers from an array of civil liability lawsuits provided they meet certain disclosure requirements.
Lummis' bill, the Responsible Innovation and Safe Expertise Act, seeks to clarify that doctors, lawyers, financial advisers, engineers and other professionals who use AI programs in their decision-making retain legal liability for any errors they make, so long as AI developers publicly disclose how their systems work.
"This legislation doesn't create blanket immunity for AI; in fact, it requires AI developers to publicly disclose model specifications so professionals can make informed decisions about the AI tools they choose to utilize," Lummis, a member of the Commerce Committee, said in a statement first shared with NBC News. "It also means that licensed professionals are ultimately responsible for the advice and decisions they make. This is smart policy for the digital age that protects innovation, demands transparency, and puts professionals and their clients first."
Lummis' office touted the bill as the first piece of federal legislation that offers clear guidelines for AI liability in a professional context. The measure would not govern liability for other AI applications, such as self-driving vehicles, and it would not provide immunity when AI developers act recklessly or willfully engage in misconduct.
-snip-
Read more: https://www.nbcnews.com/tech/tech-news/cynthia-lummis-ai-civil-liability-protections-developers-transparency-rcna212131
This bill is a giveaway to the AI companies, protecting them from being sued for the mistakes their hallucinating tools will inevitably make.
The transparency demanded of AI companies in earlier court cases was transparency about training data, so that artists and others whose intellectual property was stolen could demand recompense. The AI companies have refused to be transparent about that.
The type of transparency this new GOP bill would require is merely that "developers publicly disclose how their systems work." That could be as simple as explaining that LLMs inevitably hallucinate, so every result has to be checked.
Of course the AI companies' main sales pitch is that their AI tools save time, and people trying to save time are less likely to check carefully and catch all the mistakes.
But the AI companies don't want to be sued for their tools' failures, and this bill, if passed, would accomplish exactly that.