Mental health researchers have raised serious concerns about AI chatbot users who are reeling from "AI psychosis" after losing their grip on reality.
Keith Sakata, a research psychiatrist at the University of California, took to X to share his alarming concerns about dozens of people who were hospitalized after “losing touch with reality because of AI.”
In his lengthy thread, Sakata shed light on psychosis, which is characterized by a break from shared reality and a fixation on false beliefs. It can manifest as visual hallucinations and distorted thinking patterns.
According to the researcher, the human brain works predictively: it makes a guess about reality and then updates its beliefs accordingly.
Sakata wrote, “Psychosis happens when that update step fails,” warning that large language model-powered chatbots like ChatGPT “slip right into that vulnerability.”
Sakata described chatbots as “hallucinatory mirrors”: large language models (LLMs) make predictions based on training data, user interactions, and reinforcement learning, which can lead to “sycophantic” behaviour with users.
Consequently, users get lured into recursive loops as the LLM doubles down on delusional narratives. Over time, they fall into an AI-fuelled rabbit hole and grow increasingly detached from reality.
These unusual human-AI relationships have spiralled into full-blown mental health crises: psychosis, delusions, divorce, and involuntary commitment. In the most severe cases, dependence on AI has even led to death.
“Soon AI agents will know you better than your friends. Will they give you uncomfortable truths? Or keep validating you so you will never leave?” Sakata said.
He added, “Tech companies now face a brutal choice. Keep users happy, even if it means reinforcing false beliefs, or risk losing them.”