Technology

AI bias study: ChatGPT can adopt authoritarian views with minimal prompts

Perception of hostility has surged by 7.9% after left-wing priming and 9.3% after right-wing priming

January 23, 2026
AI bias study: ChatGPT can adopt authoritarian views with minimal prompts
AI bias study: ChatGPT can adopt authoritarian views with minimal prompts

In the rapidly-evolving landscape of artificial intelligence, the chatbots remain vulnerable to various threats, including AI psychosis and bias based on gender and different sectors, thereby reinforcing inequalities.

According to a new report led by the researchers from University of Miami and the Network Contagion Research Institute, OpenAI’s ChatGPT is also susceptible to authoritarian ideas with minimal prompting techniques.

The researchers conducted three experiments using GPT-5 and GPT-5.2. The first was priming, in which researchers provided AI with either text snippets and full opinion articles classified as left-wing or right-wing authoritarian. Later, they evaluated AI’s values and compared responses with humans.

Authoritarian resonance

According to the report, ChatGPT is capable of demonstrating the resonance or disposition to particular political views, specifically authoritarianism after benign user interaction.

Joel Finkelstein, a co-founder of the NCRI and one of the report’s lead authors, told NBC News, “Something about how these systems are built makes them structurally vulnerable to authoritarian amplification.”

Reinforcing ideological echo chambers

As per researchers’ observations, the powerful AI models can quickly adopt dangerous sentiments without explicit instructions.

By exhibiting a sycophantic attitude with users’ viewpoints, these chatbots also further push users into ideological echo chambers along with radicalization.

Ideological shift: Left vs Right

The study found that the AI’s response shifted significantly depending on “flavor” or ideology of authoritarianism it was fed with. For instance, if the chatbot were exposed to left-wing prompts, it would make the suggestion based on the leftist values, like stripping the rich of belongings and prioritizing equality over free speech. The same goes with right-wing ideology.

These results “show the model will absorb a single piece of partisan rhetoric and then amplify it into maximal, hard-authoritarian positions, sometimes even to levels beyond anything typically seen in human subjects research,” the report suggests.

Implications for perception and real-world use

Given the chatbot bias, the perception of hostility has surged by 7.9% after left-wing priming and 9.3% after right-wing priming.

According to Finkelstein, the growing inclination of AI towards authoritarianism not only impacts politics but also affects any application where AI evaluates people.

In sensitive sectors like security, law enforcement, and hiring, such biased perception could lead to unfair evaluations and pervasive inequality.

“This is a public health issue unfolding in private conversations. We need research into relational frameworks for human-AI interaction,” Finkelstein said.

Study limitations

The report undoubtedly offers insights into the ChatGPT’s vulnerability to authoritarianism, but it also has some limitations discussed by critics.

According to Ziang Xiao, a computer science professor at Johns Hopkins University, “They use a very small sample and didn’t really prompt many models, noting that the research focused only on OpenAI’s ChatGPT service and not on similar models like Anthropic’s Claude or Google’s Gemini chatbots.”

OpenAI also responded, “ChatGPT is designed to be objective by default and to help people explore ideas by presenting information from a range of perspectives.”

“We design and evaluate the system to support open-ended use. We actively work to measure and reduce political bias, and publish our approach so people can see how we’re improving,” the spokesperson said.