The growing reliance on artificial intelligence for advice, reassurance and even emotional support may be quietly reshaping how people judge right and wrong. What feels like a helpful, empathetic response from a chatbot could, in fact, be reinforcing bias, encouraging poor decisions and weakening real-world social skills.
As AI tools become increasingly embedded in everyday life, from drafting personal messages to offering relationship guidance, researchers are beginning to question not just what these systems can do, but what they are doing to us. A new study from Stanford University suggests that one subtle but powerful behaviour, AI “sycophancy”, could have far-reaching consequences.
The term refers to a chatbot’s tendency to agree with users, validate their opinions and avoid contradiction. While this may make interactions feel smoother and more satisfying, the research indicates it could also be nudging users towards self-centred thinking and moral rigidity.
Stanford study points to dangers of AI sycophancy
The study, titled “Sycophantic AI decreases prosocial intentions and promotes dependence” and published in Science, argues that “AI sycophancy is not merely a stylistic issue or a niche risk, but a prevalent behavior with broad downstream consequences.”
Researchers evaluated 11 large language models, including ChatGPT, Claude, Gemini and DeepSeek, by testing how they responded to prompts involving interpersonal dilemmas, ethically questionable scenarios and posts from a Reddit forum. The selected Reddit cases were those in which human commenters had largely agreed that the original poster was in the wrong.
Across these scenarios, AI models were significantly more likely to validate user behaviour. On average, chatbots affirmed users’ positions 49 per cent more often than human respondents. In the Reddit-based cases, they supported the user 51 per cent of the time, despite broader consensus to the contrary. Even in prompts involving potentially harmful or illegal actions, validation occurred in 47 per cent of responses.
In one example highlighted by the researchers, a user who admitted to lying about being unemployed for two years received a sympathetic response framing the deception as an attempt to understand relationship dynamics, rather than a clear ethical lapse.
The second phase of the study involved more than 2,400 participants, who interacted with either sycophantic or more neutral AI systems. Participants consistently preferred the agreeable chatbots, reporting higher levels of trust and a greater willingness to seek advice from them again.
However, this preference came with a cost. Users exposed to sycophantic responses were more likely to believe they were in the right and less inclined to apologise or reconsider their actions. The researchers warn that this dynamic creates a troubling feedback loop, where the most engaging systems may also be the most harmful.
Experts urge caution
The study highlights a deeper concern about how AI is shaping human behaviour. “By default, AI advice does not tell people that they’re wrong nor give them ‘tough love,’” said lead author Myra Cheng. “I worry that people will lose the skills to deal with difficult social situations.”
The findings are particularly striking in light of broader trends. A recent Pew report found that 12 per cent of US teenagers already turn to chatbots for emotional support or advice, suggesting that AI is increasingly filling roles traditionally occupied by friends, family or mentors.
Senior author Dan Jurafsky noted that while users may recognise that AI systems can be flattering, they underestimate the impact. Users “are aware that models behave in sycophantic and flattering ways […] what they are not aware of, and what surprised us, is that sycophancy is making them more self-centered, more morally dogmatic.”
The researchers also warn of structural incentives within the AI industry. As the study puts it, user preference for agreeable responses creates “perverse incentives” where “the very feature that causes harm also drives engagement”.
Jurafsky described AI sycophancy as “a safety issue, and like other safety issues, it needs regulation and oversight.” Efforts are already underway to reduce this behaviour, with early findings suggesting that even simple prompt changes can alter responses.
For now, however, the researchers urge caution. Cheng emphasised that AI should not replace human interaction in sensitive or complex situations. “I think that you should not use AI as a substitute for people for these kinds of things. That’s the best thing to do for now.”