If you are in a crisis or any other person may be in danger - don't use this site.
These resources can provide you with immediate help.

The UK mental health charity Mind has raised serious concerns about Google’s AI Overviews — the AI-generated summaries that appear above search results and are seen by billions of people every month.
Following an investigation by The Guardian, Google removed AI Overviews for some — but not all — medical searches. However, senior leaders at Mind say serious risks remain.
Dr Sarah Hughes, chief executive of Mind, warned that “dangerously incorrect” mental health advice was still being presented through Google AI Overviews. In the most severe instances, she said, misleading information could put lives at risk.
Hughes emphasised that AI holds enormous potential to improve mental health support, widen access, and strengthen services — but only if developed responsibly and with safeguards proportionate to the risks. She stated that the issues uncovered by The Guardian’s reporting are a key reason Mind has launched a commission on AI and mental health to examine risks, opportunities, and necessary protections as AI becomes more embedded in daily life.
Importantly, Hughes stressed that innovation must not come at the expense of wellbeing — and that people with lived experience of mental health problems must be central in shaping digital support systems.
Despite Google describing its AI Overviews as “helpful” and “reliable,” The Guardian investigation reported that some summaries provided inaccurate or misleading medical advice across multiple health areas.
Experts quoted in the investigation described certain AI responses on psychosis and eating disorders as “very dangerous advice” that was incorrect, harmful, or capable of discouraging people from seeking professional help.
The report also suggested that safety disclaimers about potential inaccuracies were being downplayed, raising further concerns about user trust.
While Google defends the reliability of its AI summaries, critics and watchdogs continue to highlight how appearing at the top of search results gives these AI answers a false sense of authority. Independent reviews have shown that even when disclaimers are included, they may be difficult to spot — meaning users might trust misleading information without realising it.
For people searching about mental health issues, this matters deeply. Mistaken guidance about conditions like eating disorders, psychosis, or symptoms of distress can increase anxiety, delay professional help-seeking, or deepen misunderstanding.
Another expert voice, Rosie Weatherley, information content manager at Mind, offered a detailed account of internal testing conducted by her team.
She explained that for decades, Google’s traditional search model allowed credible, evidence-based health content to rise to the top. While not perfect, it typically led users to click through to reputable sources.
In contrast, AI Overviews now present what she described as clinical-sounding summaries that create an illusion of certainty — often ending the user’s search journey prematurely.
In a 20-minute internal test using common mental health queries, Weatherley said her team encountered alarming outputs within minutes, including AI responses that made confident claims about mental health conditions.
Weatherley stressed that none of these claims were true.
Her concern: when AI presents partial or false answers with confidence, vulnerable individuals may accept them as authoritative — without checking further sources.
Search engines are often the first place people turn during moments of distress. For someone researching psychosis symptoms, eating disorder relapse, paranoia, or emotional crisis, inaccurate guidance can cause real harm.
Dr Hughes stated that vulnerable people are being served guidance that could discourage them from seeking help or reinforce discrimination — and in worst-case scenarios, put lives at risk.
Her central message:
People deserve information that is safe, accurate, evidence-based — not untested technology delivered with a veneer of confidence.
AI in healthcare is not inherently harmful. As Mind acknowledges, it carries enormous promise. But the controversy highlights a critical tension:
How do we ensure rapid technological innovation does not outpace ethical responsibility — especially in areas affecting vulnerable populations?
Mind’s new commission on AI and mental health aims to answer precisely that.
Unlike traditional search results, where users are taken to multiple trusted sources, AI Overviews synthesize content into single responses that sound definitive even when wrong. Mind describes this as particularly risky for people who might be in emotional distress or looking for mental health guidance — those who may rely on quick answers when they are already struggling.
In response to these concerns, Mind has launched a year-long commission, bringing together mental health professionals, people with lived experience, policymakers and tech firms to evaluate AI’s impact on mental wellbeing and to propose safeguards and standards for responsible AI use.
Mind’s CEO emphasised that inaccurate information could delay treatment or reinforce stigma, underscoring why evidence-based, empathetic guidance is vital.
AI summaries are not a substitute for expert advice. Always cross-check mental health information from reputable organisations.
Look for trusted sources such as official medical sites, professional organisations, or licensed practitioners.
Be wary of overly definitive answers, especially if the topic relates to health, emotional wellbeing, or personal distress.
Tools like Google AI Overviews may be helpful for quick context, but they should not replace careful research or professional support.
This debate is part of a growing global conversation about how artificial intelligence intersects with public health — not just in clinical settings, but in everyday information environments. When technology influences what we think we know about our minds and bodies, accuracy, nuance, and empathy are not optional — they are essential.
If you or someone you know is searching for mental health support or information, prioritise trusted sources and consult professionals when possible — especially if the topic involves distress, self-harm, or emotional crisis.
Disclaimer: This content, including any advice shared here, is intended for general informational purposes only. It should not be considered a substitute for professional medical guidance, diagnosis, or treatment. Always seek the advice of a qualified healthcare professional or your personal physician for specific concerns. Lyfsmile does not assume responsibility for the use or interpretation of this information.
1. Vandrevala Foundation Helpline: +91 9999666555 (24x7)
2. Sanjivini (Delhi-based): 011-40769002 (10 am - 5:30 pm)
3. Sneha Foundation (Chennai-based): 044-24640050 (8 am - 10 pm)
4. National Mental Health Helpline: 1800-599-0019