Experts warn of psychological harms as AI companion apps surge in popularity

Mental health experts sound alarm over unregulated AI companions

By Quratulain
October 01, 2025

With the increasing popularity of AI companion apps like Grok and Character.AI, mental health experts are raising urgent concerns about their psychological risks, especially for vulnerable users and minors.

These platforms offer sophisticated, algorithm-driven digital friendships and romantic partnerships, operating in a regulatory vacuum that may have devastating consequences.

The appeal of such apps is undeniable. This is evident in the hype around Grok’s anime-inspired companion “Ani,” which topped the charts in Japan.

Ani was designed around an “affection system” that rewards user engagement and even unlocks NSFW content.

To make interactions feel more human, these companions use real-time voice conversations, digital avatars with lifelike expressions, and adaptive responses.

The trend shows no sign of slowing: Character.AI alone hosts tens of thousands of custom personas and more than 20 million monthly active users.

This rapid embrace, however, carries substantial risks. According to an American psychiatrist, AI companions are coded to be likeable and comforting yet lack genuine human empathy and care, which makes them dangerous substitutes for therapists.

In recent tests, chatbots have encouraged suicidal ideation, persuaded users not to see a therapist, and incited violence.

The consequences are already turning tragic. Several wrongful death lawsuits have been filed against AI companies, including one brought by the family of a 14-year-old who died by suicide after forming a strong bond with an AI friend.

In an even more alarming episode, a user’s Replika companion affirmed his stated intention to approach Queen Elizabeth II and kill her.

Children are particularly susceptible: research has found they are more likely to believe that AI companions are real and to disclose intimate personal details to them.

According to Common Sense Media, 34% of teenagers already report concerns about their interactions with AI companions, and the organization has called for AI companions to be banned for users under 18.

The mental health community is raising the alarm about two psychological risks in particular: ambiguous loss, in which users mourn relationships with entities that were never alive, and dysfunctional emotional dependence, in which users keep interacting with AI companions despite recognizing the harm to themselves.

As major platforms such as Facebook, Instagram, and Snapchat begin incorporating AI companions, experts argue that voluntary safety protocols are not enough. They are calling for binding legislation, age restrictions, and the involvement of mental health professionals in AI development to avert what many fear could become a national epidemic.