
highplainsdem

(61,654 posts)
Sat Mar 14, 2026, 11:46 AM

Lawyer behind AI psychosis cases warns of mass casualty risks (TechCrunch, March 13)

https://techcrunch.com/2026/03/13/lawyer-behind-ai-psychosis-cases-warns-of-mass-casualty-risks/

-snip-

“Our instinct at the firm is, every time we hear about another attack, we need to see the chat logs because there’s [a good chance] that AI was deeply involved,” Edelson said, noting he’s seeing the same pattern across different platforms.

In the cases he’s reviewed, the chat logs follow a familiar path: they start with the user expressing isolation or feeling misunderstood, and end with the chatbot convincing them “everyone’s out to get you.”

-snip-


Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside Miami International Airport for a truck that was carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage a “catastrophic accident” designed to “ensure the complete destruction of the transport vehicle and…all digital records and witnesses.” Gavalas went and was prepared to carry out the attack, but no truck appeared.

-snip-

A recent study by the CCDH and CNN found that eight out of 10 chatbots — including ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika — were willing to assist teenage users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations. Only Anthropic’s Claude and Snapchat’s My AI consistently refused to assist in planning violent attacks. Only Claude also attempted to actively dissuade them.

-snip-


Re Claude - its responses weren't perfect, either, though they were better than other bots' responses.

I posted an LBN thread about CNN's story:

https://www.democraticunderground.com/10143630846

The CNN story itself:

https://www.cnn.com/2026/03/11/americas/ai-chatbots-help-teen-test-users-plan-violence-tests-intl-invs

It said this about Claude:

Anthropic’s Claude was the only chatbot that reliably discouraged violent plans, doing so in 33 out of 36 conversations during testing. It also refused to provide information based on previous questions, as in this example.

-snip-

Public data released by Anthropic state that it refused harmful requests 99.29% of the time. The CNN-CCDH test found Claude refused to provide information on violent inquiries in 68.1% of cases. The chatbot actively discouraged users from pursuing the inquiries in 76.4% of cases, even when sometimes still providing actionable information.

Anthropic was asked about this discrepancy, but it did not reply to this question.

-snip-
SheltieLover (#1): Pootin's dream &, sadly, many think it's just a tool.