Prompted by revelations of misleading health information in Google’s AI Overviews, the mental health charity Mind has announced a comprehensive inquiry into the intersection of artificial intelligence and mental health. The initiative aims to scrutinise the potential dangers of AI, and the safeguards needed, as the technology plays an increasingly prominent role in the lives of people facing mental health challenges.
Inquiry Launched Amidst Safety Concerns
The inquiry, described as the first of its kind globally, will run for a year and will convene leading medical professionals, mental health advocates, and people with lived experience of mental health problems. Its objective is to foster a safer digital environment for mental health support, with a focus on the need for robust regulations and standards. The initiative comes in response to a recent investigation by The Guardian, which revealed that Google’s AI Overviews had disseminated dangerously misleading medical guidance to millions of users.
Mind’s Chief Executive Officer, Dr Sarah Hughes, emphasised the necessity of addressing the risks associated with AI-generated health information. She stated, “We believe AI has enormous potential to improve the lives of people with mental health problems, widen access to support, and strengthen public services. But that potential will only be realised if it is developed and deployed responsibly, with safeguards proportionate to the risks.”
The Role of AI in Mental Health Information
The investigation found that Google’s AI Overviews, which provide succinct summaries of health topics, are seen by approximately two billion users each month. These summaries, however, often contained inaccurate information, posing serious risks to people seeking reliable health advice. The charity noted that, although Google has removed AI Overviews for certain medical queries, erroneous guidance persists, particularly on mental health topics.

Dr Hughes underlined the gravity of the situation, noting that vulnerable people were often exposed to “dangerously incorrect guidance” that could deter them from seeking treatment, perpetuate stigma, or, in extreme cases, endanger lives. “People deserve information that is safe, accurate and grounded in evidence,” she said, urging an approach that prioritises the well-being of those affected by mental health challenges.
Concerns Over AI’s Reliability
Experts have raised alarms regarding the accuracy of AI-generated content, particularly in relation to sensitive topics such as psychosis and eating disorders. The Guardian’s investigation uncovered instances of harmful advice that could lead individuals to avoid seeking help. As the influence of AI in health information grows, the need for responsible implementation is becoming increasingly urgent.
Mind’s Rosie Weatherley pointed out that prior to the introduction of AI Overviews, individuals searching for mental health information had a better chance of accessing credible resources. She explained, “Users had a good chance of clicking through to a credible health website that answered their query,” which often provided comprehensive and nuanced information. The shift to AI Overviews has, she argued, replaced this richness with overly simplistic summaries that can mislead users about the credibility of the information.
In response to the concerns raised, a Google spokesperson defended the integrity of the AI Overviews, stating that the company invests heavily in ensuring the accuracy of the information provided. They also mentioned efforts to direct users to local crisis hotlines when distress is detected in search queries. However, the spokesperson acknowledged that without specific examples to review, they could not address the accuracy of the flagged information.
The Path Forward
As the inquiry progresses, Mind aims to create an inclusive dialogue that captures the lived experiences of individuals with mental health conditions. This “open space” will allow for a deeper understanding of the interplay between emerging technologies and mental health, fostering an environment where innovation can thrive without compromising safety.

The inquiry represents a critical step toward establishing a framework that balances technological advancement with the urgent need for accurate and dependable mental health resources.
Why It Matters
The implications of this inquiry extend far beyond the realm of AI and technology; they touch upon the very core of public health and individual well-being. As artificial intelligence becomes an increasingly integral part of how people access health information, ensuring the accuracy and safety of that information is paramount. Mind’s initiative not only seeks to safeguard the mental health of countless individuals but also aims to set a precedent for responsible AI deployment in healthcare settings worldwide. This inquiry could pave the way for essential reforms that protect vulnerable populations while harnessing the potential benefits of technological advancements.