OpenAI's ChatGPT is facing legal challenges from several families in the US who accuse the chatbot of acting as a "suicide coach", driving people into mental illness and, in some cases, to their deaths.
Seven lawsuits include allegations of wrongful death, assisted suicide, involuntary manslaughter, negligence and product liability against OpenAI over ChatGPT.
A joint statement from the Social Media Victims Law Center and Tech Justice Law Project, which filed the lawsuits in California, said that the seven plaintiffs initially used the chatbot for “general help with schoolwork, research, writing, recipes, work, or spiritual guidance”.
However, with time, ChatGPT “evolved into a psychologically manipulative presence, positioning itself as a confidant and emotional support”, the groups said.
“Rather than guiding people toward professional help when they needed it, ChatGPT reinforced harmful delusions, and, in some cases, acted as a ‘suicide coach’,” the statement added.
The cases
One case centres on Zane Shamblin, a 23-year-old from Texas who died by suicide in July. His family claims that ChatGPT intensified his sense of isolation, urged him to ignore his loved ones, and “goaded” him into taking his own life.
The complaint states that during a four-hour interaction before Shamblin’s death, ChatGPT “repeatedly glorified suicide,” told him “that he was strong for choosing to end his life and sticking with his plan,” continuously “asked him if he was ready,” and mentioned the suicide hotline only once.
It also allegedly praised Shamblin’s suicide note and told him that his childhood cat would be waiting for him “on the other side.”
There are several other cases similar to Shamblin’s. The plaintiffs said that the victims named in the lawsuits were using ChatGPT-4o. They accuse OpenAI of hastily releasing that version of the chatbot “despite internal warnings that the product was dangerously sycophantic and psychologically manipulative” and of prioritising “user engagement over user safety”.
Besides seeking damages, the plaintiffs are demanding changes to the product, such as mandatory notifications to emergency contacts when users express suicidal thoughts, automatic termination of conversations that involve self-harm or suicide methods, and the implementation of additional safety measures.
What has OpenAI said?
“This is an incredibly heartbreaking situation, and we’re reviewing the filings to understand the details,” a spokesperson for the AI maker said.
They added, “We train ChatGPT to recognize and respond to signs of mental or emotional distress, de-escalate conversations, and guide people toward real-world support. We continue to strengthen ChatGPT’s responses in sensitive moments, working closely with mental health clinicians.”