Is your AI chatbot secretly judging you? The hidden truth behind your queries
Unlike humans, AI judgement is more rigid
AI chatbots like Gemini and ChatGPT are full of surprises, and new research has brought an unsettling truth about these bots to light.
In the study, researchers analysed roughly 43,000 simulated decisions made by AI systems, compared them with 1,000 decisions made by humans, and found that, like humans, AI models judge people based on their queries. These widely used AI systems do not just process information; they systematically judge people in ways that mirror human trust, but with critical caveats.
According to findings published in the journal Proceedings of the Royal Society A, both humans and AI systems value the same core pillars of trust: integrity, competence, and benevolence. What sets them apart is their distinct method of evaluation.
Humans rely on a holistic, intuitive gut feeling, integrating multiple traits into a single judgement. AI models, by contrast, take a rigid, fragmented approach, “breaking down people into scores on competence, integrity and kindness, almost like columns in a spreadsheet.”
“People in our study are messy and holistic in how they judge others. AI is cleaner, more systematic and that can lead to very different outcomes,” explained Valeria Lerman, an author of the study.
AI bias is becoming harder to notice
According to the researchers, AI judgement is more rigid and less nuanced than human judgement, which makes it harder to audit for hidden biases.
This “by-the-book” approach can lead to a disturbing pattern of amplified bias. In financial scenarios, for instance, the AI models judged people based on demographic traits, with older people frequently receiving more favourable outcomes.
According to Yaniv Dover, another author of the study, “Humans have biases, of course, but what surprised us is that AI’s biases can be more systematic, more predictable, and sometimes stronger.”
“Two systems can look similar on the surface but behave very differently when making decisions about people,” Dr Lerman added.
“These divergences warrant careful attention when interpreting large language model trust-related outputs,” the study warned.