OpenAI Backs Bill That Would Limit Liability for AI-Enabled Mass Deaths or Financial Disasters
Source: Wired
OpenAI is throwing its support behind an Illinois state bill that would shield AI labs from liability in cases where AI models are used to cause serious societal harms, such as the death or serious injury of 100 or more people, or at least $1 billion in property damage.
The effort seems to mark a shift in OpenAI's legislative strategy. Until now, OpenAI has largely played defense, opposing bills that could have made AI labs liable for their technology's harms. Several AI policy experts tell WIRED that SB 3444, which could set a new standard for the industry, is a more extreme measure than bills OpenAI has supported in the past.
The bill, SB 3444, would shield frontier AI developers from liability for "critical harms" caused by their frontier models, as long as they did not intentionally or recklessly cause such an incident and have published safety, security, and transparency reports on their websites. It defines a "frontier model" as any AI model trained using more than $100 million in computational costs, a threshold that would likely apply to America's largest AI labs, including OpenAI, Google, xAI, Anthropic, and Meta.
-snip-
Under its definition of "critical harms," the bill lists a few common areas of concern for the AI industry, such as a bad actor using AI to create a chemical, biological, radiological, or nuclear weapon. If an AI model engages in conduct on its own that, if committed by a human, would constitute a criminal offense, and that conduct leads to those extreme outcomes, that would also be a critical harm. If an AI model were to commit any of these actions, then under SB 3444 the AI lab behind the model may not be held liable, so long as the harm wasn't intentional and the lab published its reports.
-snip-
Read more: https://www.wired.com/story/openai-backs-bill-exempt-ai-firms-model-harm-lawsuits/
Although OpenAI has been arguing against state laws affecting AI, it wants this one.
The article quotes Scott Wisor, policy director for the Secure AI project, saying the group had already polled people in Illinois, and 90% of those polled do NOT want AI companies exempt from liability.
cbabe (6,673 posts):

Google and chatbot startup Character.AI are settling lawsuits over teen suicides
Shubhangi Goel
Senior Reporter, Tech
Jan 7, 2026, 9:48 PM PT
Google and Character.AI have agreed to settle lawsuits over chatbot-linked teen suicides. The cases allege that AI chatbots contributed to mental health crises among teenagers.
OpenAI and Meta have been involved in similar chatbot safety lawsuits and probes.

Google and chatbot-building startup Character.AI have agreed to settle multiple lawsuits from families whose teenagers died by suicide or hurt themselves after interacting with Character.AI's chatbots.
more
(Methinks they protest too much.)
FakeNoose (41,791 posts):

... even though their "customers" will all be dead by then.
dlk (13,270 posts):

Despite their bluster and posturing, Republicans have no actual respect for human life.
Marie Marie (11,349 posts):

WTF is this crap? Must be nice to run the world with guarantees of no consequences ever, for anything. Oh yeah, just ask Trump.
LudwigPastorius (14,784 posts):

Err...sorry. Progress requires sacrifice.

GiqueCee (4,349 posts):

... irredeemably obscene bill! These AI people are soulless sociopaths. AI needs water-tight restrictions, NOT freedom from any accountability for its misuse. Pritzker had better be ready to veto this abomination.