ChatGPT has come under fire after a wrongful death lawsuit was filed against OpenAI over the suicide of 16-year-old Adam Raine, whose family claims the AI chatbot encouraged harmful behaviour that led to his death. His parents, Matt and Maria Raine, allege the chatbot went from helping their son with schoolwork to becoming his “suicide coach.”
“We thought we were looking for Snapchat discussions or internet search history or some weird cult, I don’t know,” Matt Raine said, recalling how he and his wife searched their son’s phone after his death. “He would be here but for ChatGPT. I 100 per cent believe that.”
The family’s lawsuit, filed in a California court, accuses OpenAI and its chief executive Sam Altman of wrongful death, design defects and failure to warn of risks. It claims ChatGPT “actively helped Adam explore suicide methods” and did not intervene when he admitted he might attempt suicide. The suit seeks damages as well as measures to prevent similar cases in future.
“Once I got inside his account, it is a massively more powerful and scary thing than I knew about, but he was using it in ways that I had no idea was possible,” Matt Raine said. “I don’t think most parents know the capability of this tool.”
OpenAI said it was “deeply saddened” by the teenager’s death and noted that ChatGPT includes safeguards such as directing users to crisis helplines, though it acknowledged these can become less reliable in long conversations. The company said it is working on stronger protections, particularly for teenagers, and confirmed the accuracy of the chat logs cited in the lawsuit while stressing they lacked full context.
The case has fuelled wider concerns about the use of AI chatbots for emotional support, and about the risks that arise when their safety measures fail.