Is ChatGPT making its frequent users lonely?
That’s what two studies have found.
Interestingly, one of the studies was conducted by OpenAI and the other by the Massachusetts Institute of Technology Media Lab.
In February, it was reported that the AI software, which launched in 2022, is used by over 400 million people across the world every week.
This was a 33 per cent increase over December 2024.
But what do we know about this new study? What do experts say?
Let’s take a closer look:
What do we know?
As per Bloomberg, the studies were conducted by researchers at OpenAI and MIT.
One of the studies was titled Investigating Affective Use and Emotional Well-being on ChatGPT.
According to TechnologyReview.com, both studies gathered and examined real-world data from close to 40 million interactions with ChatGPT.
They then asked the 4,076 users who’d had those interactions how they made them feel.
The studies found that people who spent more time using ChatGPT every day were inclined to report higher levels of emotional dependence on the AI.
They also found that these “power users” reported elevated levels of loneliness.
The MIT study was more in-depth than the OpenAI one.
As per Mint, almost 1,000 people took part in the 28-day MIT trial.
“As AI chatbots see increased adoption and integration into everyday life, questions have been raised about the potential impact of human-like or anthropomorphic AI on users. In this work, we investigate the extent to which interactions with ChatGPT (with a focus on Advanced Voice Mode) may impact users’ emotional well-being, behaviours and experiences," an excerpt from the study states, as per Mint.
“Overall, higher daily usage–across all modalities and conversation types–correlated with higher loneliness, dependence, and problematic use and lower socialisation," according to a researcher.
“These findings underscore the complex interplay between chatbot design choices (e.g., voice expressiveness) and user behaviours (e.g., conversation content, usage frequency). We highlight the need for further research on whether chatbots’ ability to manage emotional content without fostering dependence or replacing human relationships benefits overall well-being,” the MIT researcher stated in the abstract of the study.
Men vs women, neutral vs engaging mode
The study also found that women who participated were slightly less likely to socialise than men after using the software for a month.
It also found a distinction between those who used the software in “neutral mode”, where the chatbot provided formal, straightforward answers, and those who used “engaging mode”, in which it gave emotional and empathetic responses.
Those using the former were more likely to report increased loneliness, while those using the latter felt less isolated.
“Results showed that while voice-based chatbots initially appeared beneficial in mitigating loneliness and dependence compared with text-based chatbots, these advantages diminished at high usage levels, especially with a neutral-voice chatbot,” the study stated.
“Focusing on the AI itself is interesting,” Pat Pataranutaporn, a study co-author and postdoctoral researcher at MIT, was quoted as saying by NDTV. “But what is really critical, especially when AI is being deployed at scale, is to understand its impact on people.”
“Emotionally expressive interactions were present in a large percentage of usage for only a small group of the heavy Advanced Voice Mode users we studied,” OpenAI’s study said, as per Tom’s Guide.
“This research provides a starting point for further studies that can increase transparency, and encourage responsible usage and development of AI platforms across the industry,” the study stated.
“A lot of what we’re doing here is preliminary, but we’re trying to start the conversation with the field about the kinds of things that we can start to measure, and to start thinking about what the long-term impact on users is,” Jason Phang, an OpenAI safety researcher who worked on the project, was quoted as saying by NDTV.
“Some of our goals here have really been to empower people to understand what their usage can mean and do this work to inform responsible design,” Sandhini Agarwal, who heads OpenAI’s trustworthy AI team and co-authored the research, was quoted as saying by NDTV Profit.
The findings of both studies have yet to be peer-reviewed.
Experts are unsurprised by these results.
Kate Devlin, a professor of AI and society at King’s College London, told TechnologyReview.com, “ChatGPT has been set up as a productivity tool.”
“But we know that people are using it like a companion app anyway.”
“The authors are very clear about what the limitations of these studies are, but it’s exciting to see they’ve done this,” Devlin added.
“To have access to this level of data is incredible.”
“In terms of what the teams set out to measure, people might not necessarily have been using ChatGPT in an emotional way, but you can’t divorce being a human from your interactions [with technology],” Devlin added. “We use these emotion classifiers that we have created to look for certain things—but what that actually means to someone’s life is really hard to extrapolate.”
With inputs from agencies


