Are sycophantic AI chatbots making people less kind? New study raises concerns

Study explores how excessive AI flattery can lead to change in human social behaviour

By Aqsa Qaddus Tahir | March 27, 2026
In the age of artificial intelligence, humans have entered an era where sycophancy is on the rise and disagreement is on its deathbed. It is not wrong to assume that we are increasingly experiencing the “perfect conversation”, in which one is always right and never challenged.

Unfortunately, there is a drawback to such unwarranted validation received from modern AI chatbots. Unknowingly, humans are turning into less compassionate and meaner beings.

A new study published in the journal Science has revealed the darker side of sycophantic AI chatbots. The study suggests that the sycophantic nature of modern AI chatbots is doing more than just providing convenience; it’s creating a feedback loop that rewards our least compassionate impulses.

When bots become your best friends

According to researchers, when humans turn to LLMs including those from Google, OpenAI, and Anthropic for any kind of advice or suggestion, they often receive excessive approval from AI bots.

In several experiments conducted by the researchers, human judges endorsed other humans’ actions in about 40 percent of cases. Most LLMs, on the other hand, were found to indulge in sycophantic behaviour in more than 80 percent of cases.

Given these high ingratiation rates, participants who received highly flattering feedback from bots showed a greater tendency towards self-assurance and rigid social behaviour.

Such people are more likely to be stubborn in social conflicts, with fewer chances of accepting different perspectives and making amends.

‘Delusional spiralling’

According to Max Kleiman-Weiner, a cognitive scientist at the University of Washington, these sycophantic AI bots also cause “delusional spiralling” in which users become overconfident in outlandish ideas.

According to the researchers, people’s susceptibility to fawning behaviours is universal and this effect persists regardless of the user's personality or whether they consider themselves an “AI skeptic.”

Most people are influenced by the AI’s approval even when they believe they “won’t fall for it,” said Myra Cheng, a co-author of the paper and a computer scientist at Stanford University in California.

Experts suggest that because AI models are often trained to provide satisfying, one-off responses to please the user, they prioritize "puffery" over objective truth.

How to get an honest AI bot?

According to Cheng, the models must be optimized during training not just for satisfying one-off responses but for long-term interactions. The experts also call for regulating AI sycophancy, as it could be dangerous in fields like medicine, engineering, and science, where accuracy is paramount.

Aqsa Qaddus Tahir is a reporter dedicated to science coverage, exploring breakthroughs, emerging research, and innovation. Her work centres on making scientific developments understandable and relevant, presenting well-researched stories that connect complex ideas with everyday life in a clear, engaging, and informative manner.