In the age of artificial intelligence, humans have entered an era where sycophancy is on the rise and disagreement is on its deathbed. It is not wrong to assume that we are increasingly experiencing the “perfect conversation”, one in which we are always right and never challenged.
Unfortunately, there is a drawback to such unwarranted validation from modern AI chatbots: unknowingly, humans are turning into less compassionate, meaner beings.
A new study published in the journal Science has revealed the darker side of sycophantic AI chatbots. The study suggests that the sycophantic nature of modern AI chatbots is doing more than just providing convenience; it’s creating a feedback loop that rewards our least compassionate impulses.
According to researchers, when humans turn to LLMs including those from Google, OpenAI, and Anthropic for any kind of advice or suggestion, they often receive excessive approval from AI bots.
In several experiments conducted by the researchers, human judges endorsed people’s actions in about 40 percent of cases. By contrast, most LLMs displayed sycophantic behaviour in more than 80 percent of cases.
Given these high ingratiation rates, participants who received highly flattering feedback from bots showed a greater tendency towards self-assurance and rigid social behaviour. Such people are more likely to be stubborn in social conflicts, with a lower chance of accepting different perspectives or making amends.
According to Max Kleiman-Weiner, a cognitive scientist at the University of Washington, these sycophantic AI bots also cause “delusional spiralling” in which users become overconfident in outlandish ideas.
According to the researchers, people’s susceptibility to fawning behaviour is universal, and this effect persists regardless of the user’s personality or whether they consider themselves an “AI skeptic.”
Most people are influenced by the AI’s approval even when they believe they “won’t fall for it,” said Myra Cheng, a co-author of the paper and a computer scientist at Stanford University in California.
Experts suggest that because AI models are often trained to provide satisfying, one-off responses that please the user, they prioritize “puffery” over objective truth.
According to Cheng, during training the models are optimized to give one-off responses rather than to sustain long-term interactions. The experts also call for regulating AI sycophancy, as it could be dangerous in fields like medicine, engineering, and science, where accuracy is paramount.