As the world’s most popular search engine, Google has long been a go-to source for health-related queries. But the rapid rollout of the company’s AI-powered “AI Overview” feature is sounding alarm bells among experts, who warn that the tool could be putting public health at risk.
The AI Overview feature, launched in the US in 2024 and expanded to over 200 countries by 2025, aims to provide users with concise, conversational answers to their queries. However, a Guardian investigation has uncovered numerous instances where the AI-generated summaries have presented inaccurate or misleading health information, which could have serious consequences for users.
In one “really dangerous” case, Google wrongly advised people with pancreatic cancer to avoid high-fat foods – the exact opposite of what should be recommended. In another “alarming” example, the company provided bogus information about crucial liver function tests, which could lead to seriously ill patients wrongly believing they are healthy.
Experts say that when it comes to health, accuracy and context are essential, and the AI-powered overviews are simply not up to the task. “With AI Overview, users no longer encounter a range of sources that they can compare and critically assess,” says Hannah van Kolfschooten, a researcher in AI, health and law at the University of Basel. “Instead, they are presented with a single, confident, AI-generated answer that exhibits medical authority.”
The problem is compounded by the fact that AI Overview often cites YouTube as a primary source, despite the platform’s lack of medical expertise and oversight. “YouTube is not a medical publisher,” experts warn. “Anyone can upload content there, including wellness influencers and life coaches with no medical training at all.”
Google has acknowledged some issues with the accuracy of its AI-powered health advice, and has removed certain summaries flagged by the Guardian. However, experts say the company needs to do more to address the broader concerns around the use of AI in providing medical information.
“There are still too many examples out there of Google AI Overview giving people inaccurate health information,” says Sue Farrington, the chair of the Patient Information Forum. “The biggest worry is that bogus and dangerous medical information or advice in AI Overview ends up getting translated into the everyday practices, routines and life of a patient, even in adapted form. In healthcare, this can turn into a matter of life and death.”
As the use of AI in healthcare continues to grow, it’s clear that companies like Google must tread carefully and prioritise accuracy and safety above all else. The stakes are simply too high to get it wrong.