Is your AI chatbot secretly judging you? The hidden truth behind your queries

Unlike humans', AI judgement is more rigid

Published April 14, 2026

AI chatbots like Gemini and ChatGPT are full of surprises, and new research has surfaced an unsettling truth about how these bots treat their users.

In the study, researchers analyzed roughly 43,000 simulated decisions made by AI systems and compared them with about 1,000 decisions made by humans.


They found that, like humans, AI models judge people based on their queries. These widely used AI systems do not just process information; surprisingly, they systematically judge people in ways that mirror human trust, but with critical caveats.

According to findings published in the journal Proceedings of the Royal Society A, both humans and AI systems value the same core pillars of trust: integrity, competence, and benevolence. What sets them apart is how they evaluate those pillars.

Humans rely on a holistic, intuitive gut feeling, integrating multiple traits into a single judgement. AI models, on the other hand, take a rigid, fragmented approach, “breaking down people into scores on competence, integrity and kindness, almost like columns in a spreadsheet.”

“People in our study are messy and holistic in how they judge others. AI is cleaner, more systematic and that can lead to very different outcomes,” explained Valeria Lerman, an author of the study.

AI bias is becoming harder to notice

According to the researchers, AI judgement is more rigid and less nuanced than human judgement, which makes it harder to audit for hidden biases.

This “by-the-book” approach leads to a disturbing pattern of amplified bias. In financial scenarios, for instance, the AI judged people on demographic traits, with older people frequently receiving more favourable outcomes.

According to Yaniv Dover, another author of the study, “Humans have biases, of course, but what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”

“Two systems can look similar on the surface but behave very differently when making decisions about people,” Dr Lerman added.

“These divergences warrant careful attention when interpreting large language model trust-related outputs,” the study warned. 

Aqsa Qaddus Tahir
Aqsa Qaddus Tahir is a reporter dedicated to science coverage, exploring breakthroughs, emerging research, and innovation. Her work centres on making scientific developments understandable and relevant, presenting well-researched stories that connect complex ideas with everyday life in a clear, engaging, and informative manner.