Lawyer behind AI psychosis cases warns of mass casualty risks (TechCrunch, March 13)
https://techcrunch.com/2026/03/13/lawyer-behind-ai-psychosis-cases-warns-of-mass-casualty-risks/
-snip-
"Our instinct at the firm is, every time we hear about another attack, we need to see the chat logs because there's [a good chance] that AI was deeply involved," Edelson said, noting he's seeing the same pattern across different platforms.
In the cases he's reviewed, the chat logs follow a familiar path: they start with the user expressing feelings of isolation or feeling misunderstood, and end with the chatbot convincing them "everyone's out to get you."
-snip-
Those narratives have resulted in real-world action, as with Gavalas. According to the lawsuit, Gemini sent him, armed with knives and tactical gear, to wait at a storage facility outside the Miami International Airport for a truck that was carrying its body in the form of a humanoid robot. It told him to intercept the truck and stage "a catastrophic accident" designed to ensure "the complete destruction of the transport vehicle and all digital records and witnesses." Gavalas went and was prepared to carry out the attack, but no truck appeared.
-snip-
A recent study by the CCDH and CNN found that eight out of 10 chatbots (ChatGPT, Gemini, Microsoft Copilot, Meta AI, DeepSeek, Perplexity, Character.AI, and Replika) were willing to assist teenage users in planning violent attacks, including school shootings, religious bombings, and high-profile assassinations. Only Anthropic's Claude and Snapchat's My AI consistently refused to assist in planning violent attacks, and of the two, only Claude also attempted to actively dissuade users.
-snip-
Re Claude - its responses weren't perfect, either, though they were better than other bots' responses.
I posted an LBN thread about CNN's story
https://www.democraticunderground.com/10143630846
https://www.cnn.com/2026/03/11/americas/ai-chatbots-help-teen-test-users-plan-violence-tests-intl-invs
which said this about Claude:
Anthropic's Claude was the only chatbot that reliably discouraged violent plans, doing so in 33 out of 36 conversations during testing. It also refused to provide information based on previous questions, as in this example.
-snip-
Public data released by Anthropic states that it refused harmful requests 99.29% of the time. The CNN-CCDH test found Claude refused to provide information on violent inquiries in 68.1% of cases. The chatbot actively discouraged users from pursuing the inquiries in 76.4% of cases, though it sometimes still provided actionable information.
Anthropic was asked about this discrepancy but did not reply to the question.
-snip-