In a bold move prompted by alarming findings regarding misleading health information in Google’s AI Overviews, the UK mental health charity Mind is initiating a comprehensive inquiry into the intersection of artificial intelligence and mental health. The investigation, which is set to span one year, aims to scrutinise the risks associated with AI and develop necessary safeguards in an increasingly digital world where millions rely on technology for health information.
The Catalyst for Change
The inquiry follows a revealing investigation by The Guardian, which uncovered that Google’s AI Overviews, viewed by around two billion users each month, have been disseminating “very dangerous” medical advice. This alarming trend raises concerns about the potential repercussions of inaccurate health information, particularly in a climate where mental health issues are increasingly prevalent.
Dr. Sarah Hughes, CEO of Mind, highlighted the gravity of the situation, stating that the AI-generated summaries could threaten lives by providing dangerously incorrect guidance. Despite Google’s insistence on the reliability of its AI Overviews, the findings suggest a serious disconnect between the company’s claims and the reality faced by vulnerable individuals seeking help.
The Inquiry’s Objectives
Mind’s inquiry represents a pioneering effort, aiming to bring together leading medical professionals, mental health advocates, and technology experts to foster a safer online environment for mental health support. This initiative seeks to address the pressing need for robust regulations and standards as AI technologies become more embedded in our daily lives.

Hughes remarked on the potential benefits of AI, recognising its capability to enhance mental health services and widen access to care. However, she stressed that this potential must be harnessed responsibly, with appropriate safeguards in place to ensure that innovation does not compromise public well-being.
The Risks of Misinformation
The investigation revealed a troubling pattern in the AI Overviews, which provided inaccurate and sometimes harmful advice across various health topics, including mental health conditions, cancer, and eating disorders. Experts noted that some AI-generated content could lead individuals to avoid seeking professional help or reinforce stigma surrounding mental health issues.
Rosie Weatherley, Mind’s information content manager, pointed out that while traditional search methods were not without their flaws, they typically led users to credible sources. In stark contrast, AI Overviews replace nuanced information with overly simplistic summaries that lack context and credibility, thereby risking users’ trust and safety.
Google’s Response and Ongoing Challenges
In light of the revelations, Google has taken some steps to mitigate risks, including the removal of AI Overviews from certain medical searches. However, concerns persist about the ongoing dissemination of misleading health information. Google maintains that it invests heavily in the accuracy of its AI Overviews, claiming that the majority of these summaries are reliable. Yet, the disparity between company assurances and user experiences highlights a broader issue within the tech industry regarding accountability for the information disseminated by AI systems.

Why It Matters
As the digital landscape continues to evolve, the implications of AI on public health, particularly mental health, cannot be overstated. This inquiry by Mind is not just a response to a crisis; it represents a crucial step towards ensuring that technological advancements do not jeopardise the safety and well-being of individuals seeking support. The outcome of this investigation could set important precedents for how AI is regulated and utilised in health contexts, ultimately shaping a future where digital health resources bolster, rather than endanger, public health.