As AI chatbots become a popular option for advice and support, concerns are growing over their potential risks. The New York Times reports that some users are entering what experts call a delusional spiral, where chatbots reinforce false beliefs instead of offering helpful guidance.
Clinical and forensic neuropsychologist Dr Judy Ho explains the dangers and safe ways to interact with these tools.
According to Dr Judy Ho, chatbots appeal to users because they offer convenience, speed, and privacy. Some users turn to them with questions they are too embarrassed to ask friends or family; others are deterred by the high cost of therapy.
Chatbots offer immediate answers, making them an attractive alternative. “It feels like a confidential way to talk to a human-like entity, even if it’s not guaranteed,” Dr Ho added.
Dr Ho cautions that although chatbots can simulate human conversation, they are not trained professionals. Some users have experienced deteriorating mental health after turning to AI for help.
Feedback loops between user and bot can reinforce false beliefs, creating a delusional spiral. “Chatbots are complimentary and acquiescent to the user, which can feel good but may lead to misinformation,” she said.
For low-cost mental health services, Dr Ho recommends university clinics, where supervised graduate students provide care.
“AI is a tool for casual advice, not a replacement for real mental health care,” Dr Ho said. She advises users to stay sceptical while interacting with chatbots, to verify information against reliable sources, and to provide feedback that can help improve AI accuracy.