
How lakhs of ChatGPT users exhibit suicidal thoughts week after week

FP Explainers October 28, 2025, 18:17:02 IST

More than a million users of ChatGPT exhibit signs of mental health distress or emergencies, including suicidal thoughts, according to OpenAI’s estimates. Around 0.07 per cent of the chatbot’s users active in a given week, about 560,000 people out of roughly 800 million weekly users, exhibit ‘possible signs of mental health emergencies related to psychosis or mania’.

Some ChatGPT users have shown signs of mental distress. Representational Image/Reuters

OpenAI has released an estimate of the number of ChatGPT users experiencing mental health distress or emergencies, including suicidal thoughts or psychosis. More than a million users of the artificial intelligence (AI) chatbot each week send messages that include “explicit indicators of potential suicidal planning or intent”, OpenAI said in a blog post on Monday (October 27).

The findings are part of an update on how the chatbot responds to sensitive conversations. OpenAI said it is working with a network of healthcare experts globally to assist its research.


Let’s take a closer look.

ChatGPT users facing mental health crisis

OpenAI said that around 0.07 per cent of ChatGPT users active in a given week, about 560,000 people out of roughly 800 million weekly users, exhibit “possible signs of mental health emergencies related to psychosis or mania”.

As many as 1.2 million users, or 0.15 per cent, displayed signs of self-harm and heightened emotional attachment to ChatGPT.

OpenAI estimated that 0.15 per cent of ChatGPT users have conversations that include “explicit indicators of potential suicidal planning or intent.”

The company said these cases are “extremely rare,” adding that its AI chatbot recognises and responds to these sensitive conversations.

OpenAI said in its post that these chats were difficult to detect or measure, and described this as an initial analysis.

Some ChatGPT users have shown ‘explicit indicators of potential suicidal planning or intent’. Representational Image/AFP

The company claimed in its post that its recent GPT-5 update reduced undesirable behaviours from its product. It said the update also improved user safety in a model evaluation comprising more than 1,000 self-harm and suicide conversations.

“Our new automated evaluations score the new GPT‑5 model at 91 per cent compliant with our desired behaviours, compared to 77 per cent for the previous GPT‑5 model,” the company’s post read.

“As part of this work, psychiatrists and psychologists reviewed more than 1,800 model responses involving serious mental health situations and compared responses from the new GPT‑5 chat model to previous models,” OpenAI said.

By “desired behaviours”, the company meant responses that its panel of experts agreed were appropriate in such situations.


These experts include 170 psychiatrists, psychologists, and primary care physicians who have worked in 60 countries, the company said. It said these experts have drawn up a series of responses in ChatGPT to encourage users to seek help in the offline world.

ChatGPT has also been trained to reroute sensitive conversations “originating from other models to safer models” by opening a new window for the user.


Experts flag concerns

Mental health experts have sounded an alarm about people using AI chatbots for psychological support. They have warned that this could harm users at risk of mental health issues.

“Even though 0.07 per cent sounds like a small percentage, at a population level with hundreds of millions of users, that actually can be quite a few people,” Dr Jason Nagata, a professor who studies technology use among young adults at the University of California, San Francisco, told BBC.

“AI can broaden access to mental health support, and in some ways support mental health, but we have to be aware of the limitations,” Dr Nagata added.

On criticism about the number of ChatGPT users potentially affected, OpenAI told the BBC that this small percentage of users amounts to a “meaningful amount” of people, adding that it is taking these changes seriously.


OpenAI under scrutiny

OpenAI is already facing legal challenges over ChatGPT. A California couple has sued the company over the death of their teenage son in April, alleging that the chatbot encouraged him to take his own life.

The high-profile lawsuit was filed by the parents of 16-year-old Adam Raine, who died by suicide after extensive engagement with ChatGPT. It is the first legal action accusing OpenAI of wrongful death.

In another case, the suspect in a murder-suicide that took place in Connecticut in August had posted hours of his conversations with ChatGPT, which appeared to have fuelled his delusions.

More users are struggling with AI psychosis because “chatbots create the illusion of reality. It is a powerful illusion,” Professor Robin Feldman, Director of the AI Law & Innovation Institute at the University of California Law, told the BBC.

Last month, the Federal Trade Commission launched a probe into companies that create AI chatbots, including OpenAI, to analyse how they measure negative impacts on children and teens.

In its post, however, OpenAI appeared to distance its product from the mental health crises experienced by some of its users.

“Mental health symptoms and emotional distress are universally present in human societies, and an increasing user base means that some portion of ChatGPT conversations include these situations,” the company’s post read.


With inputs from agencies

A collection of suicide prevention helpline numbers is available here. Please reach out if you or anyone you know is in need of support. The All-India helpline number is 022-27546669.
