‘Very Dangerous’: Experts Sound Alarm on Google AI
Expert Opinion | Feb 23, 2026 | 6 min read | Yakshi Shakya

The UK mental health charity Mind has raised serious concerns about Google’s AI Overviews — the AI-generated summaries that appear above search results and are seen by billions of people every month. 

Following an investigation by The Guardian, Google removed AI Overviews for some — but not all — medical searches. However, senior leaders at Mind say serious risks remain.

Dr Sarah Hughes, chief executive of Mind, warned that “dangerously incorrect” mental health advice was still being presented through Google AI Overviews. In the most severe instances, she said, misleading information could put lives at risk.

Hughes emphasised that AI holds enormous potential to improve mental health support, widen access, and strengthen services — but only if developed responsibly and with safeguards proportionate to the risks. She stated that the issues uncovered by The Guardian’s reporting are a key reason Mind has launched a commission on AI and mental health to examine risks, opportunities, and necessary protections as AI becomes more embedded in daily life.

Importantly, Hughes stressed that innovation must not come at the expense of wellbeing — and that people with lived experience of mental health problems must be central in shaping digital support systems.

What The Investigation Found

Despite Google describing its AI Overviews as “helpful” and “reliable,” The Guardian’s investigation reported that some summaries provided inaccurate or misleading medical advice across multiple health areas, including:

  • Cancer
  • Liver disease
  • Women’s health
  • Psychosis
  • Eating disorders
  • Other mental health conditions

Experts quoted in the investigation described certain AI responses on psychosis and eating disorders as “very dangerous advice” that was incorrect, harmful, or capable of discouraging people from seeking professional help.

The report also suggested that safety disclaimers about potential inaccuracies were being downplayed, raising further concerns about user trust.

Why Users Should Be Cautious

While Google defends the reliability of its AI summaries, critics and watchdogs continue to highlight how appearing at the top of search results gives these AI answers a false sense of authority. Independent reviews have shown that even when disclaimers are included, they may be difficult to spot, meaning users might trust misleading information without realising it.

For people searching about mental health issues, this matters deeply. Mistaken guidance about conditions like eating disorders, psychosis, or symptoms of distress can increase anxiety, delay professional help-seeking, or deepen misunderstanding.

“An Illusion of Definitiveness”

Another expert voice, Rosie Weatherley, information content manager at Mind, offered a detailed account of internal testing conducted by her team.

She explained that for decades, Google’s traditional search model allowed credible, evidence-based health content to rise to the top. While not perfect, users typically clicked through to reputable sources.

In contrast, AI Overviews now present what she described as clinical-sounding summaries that create an illusion of certainty — often ending the user’s search journey prematurely.

In a 20-minute internal test using common mental health queries, Weatherley said her team encountered alarming outputs within minutes, including AI responses that:

  • Suggested starvation was healthy
  • Claimed mental health problems are caused solely by chemical imbalances
  • Affirmed a user’s imagined stalker was real
  • Stated that 60% of benefit claims for mental health conditions are malingering

Weatherley stressed that none of these statements are true.

Her concern: when AI presents partial or false answers with confidence, vulnerable individuals may accept them as authoritative — without checking further sources.

Why This Is a Mental Health Issue — Not Just a Tech Story

Search engines are often the first place people turn during moments of distress. For someone researching psychosis symptoms, eating disorder relapse, paranoia, or emotional crisis, inaccurate guidance can:

  • Reinforce harmful thinking
  • Deepen stigma
  • Delay treatment
  • Increase confusion or fear
  • In extreme cases, escalate risk

Dr Hughes stated that vulnerable people are being served guidance that could discourage them from seeking help or reinforce discrimination — and in worst-case scenarios, put lives at risk.

Her central message:
People deserve information that is safe, accurate, evidence-based — not untested technology delivered with a veneer of confidence.

The Broader Question

AI in healthcare is not inherently harmful. As Mind acknowledges, it carries enormous promise. But the controversy highlights a critical tension:

How do we ensure rapid technological innovation does not outpace ethical responsibility — especially in areas affecting vulnerable populations?

Mind’s new commission on AI and mental health aims to answer precisely that.

What This Means for Mental Health Information Seekers

Unlike traditional search results, where users are taken to multiple trusted sources, AI Overviews synthesize content into single responses that sound definitive even when wrong. Mind describes this as particularly risky for people who might be in emotional distress or looking for mental health guidance — those who may rely on quick answers when they are already struggling.

In response to these concerns, Mind has launched a year-long commission, bringing together mental health professionals, people with lived experience, policymakers and tech firms to evaluate AI’s impact on mental wellbeing and to propose safeguards and standards for responsible AI use.

Mind’s CEO emphasised that inaccurate information could delay treatment or reinforce stigma, underscoring why evidence-based, empathetic guidance is vital.

Takeaways for Everyday Users

  • AI summaries are not a substitute for expert advice. Always cross-check mental health information from reputable organisations.

  • Look for trusted sources such as official medical sites, professional organisations, or licensed practitioners.

  • Be wary of overly definitive answers, especially if the topic relates to health, emotional wellbeing, or personal distress.

  • Tools like Google AI Overviews may be helpful for quick context, but they should not replace careful research or professional support.

This debate is part of a growing global conversation about how artificial intelligence intersects with public health — not just in clinical settings, but in everyday information environments. When technology influences what we think we know about our minds and bodies, accuracy, nuance, and empathy are not optional — they are essential.

If you or someone you know is searching for mental health support or information, prioritise trusted sources and consult professionals when possible, especially if the topic involves distress, self-harm, or emotional crisis.

Disclaimer: This content, including any advice shared here, is intended for general informational purposes only. It should not be considered a substitute for professional medical guidance, diagnosis, or treatment. Always seek the advice of a qualified healthcare professional or your personal physician for specific concerns. Lyfsmile does not assume responsibility for the use or interpretation of this information.

Need professional help?

Feeling suicidal or in crisis? Contact a helpline or emergency service immediately.

1. Vandrevala Foundation Helpline:
+91 9999666555 (24x7)

2. Sanjivini (Delhi-based):
011-40769002 (10 am - 5:30 pm)

3. Sneha Foundation (Chennai-based):
044-24640050 (8 am - 10 pm)

4. National Mental Health Helpline: 1800-599-0019




© 2019 - 2026 Lyfsmile | All rights reserved.