Google AI Overviews and mental health: Why experts say it’s ‘very dangerous’
Weatherley and her team conducted a 20-minute test in which Mind experts came across several 'harmful inaccuracies' within minutes
The mental health charity Mind has launched a year-long commission to investigate whether AI can be a reliable source of mental health information.
Its early findings are troubling. The work follows a Guardian investigation into Google’s AI Overviews, which were found to give users “very dangerous” and factually incorrect advice.
According to Rosie Weatherley, information content manager at Mind, the largest mental health charity in England and Wales, Google has developed a reliable system of credible search results. Unfortunately, those results now arrive topped by an irresponsible AI summary.
By omitting vital context, these AI-generated summaries present sensitive health topics as definitive fact.
“AI Overviews replaced that richness with a clinical-sounding summary that gives an illusion of definitiveness. It’s a very seductive swap, but not a responsible one,” she said.
In a 20-minute test conducted by Weatherley and her team, Mind’s experts came across several “harmful inaccuracies” within minutes: AI Overviews offered dangerous advice, such as suggesting that “starvation is healthy,” and validated users’ delusions.
“In each of these examples we are seeing how AI Overviews are flattening information about highly sensitive and nuanced areas into neat answers. And when you take out important context and nuance and present it in the way AI Overviews do, almost anything can seem plausible,” Weatherley said.
According to Weatherley, this is likely to harm people searching for such information, because they are often in some level of distress and may be less equipped to fact-check or question a confident AI response.
AI has enormous potential to improve lives, but its growing risks cannot be ignored.
In an age of disinformation, what people really need is “access to constructive, empathetic, careful and nuanced information at all times.”
