Google Pulls the Plug on Controversial Health Advice Feature Amid AI Scrutiny

Alex Turner, Technology Editor
5 Min Read

In a surprising turn of events, Google has decided to discontinue its “What People Suggest” feature, which aimed to offer crowdsourced health advice from users worldwide. Initially touted as a groundbreaking application of artificial intelligence to enhance health outcomes, the feature has now been quietly shelved, reflecting the growing concerns around the reliability of AI-driven health information.

What Went Wrong?

The decision to scrap the “What People Suggest” tool comes amid increasing scrutiny regarding Google’s use of AI in providing health-related information. A spokesperson confirmed that the feature had been phased out as part of a larger effort to simplify its search interface. Importantly, they asserted that the move was not linked to any issues of quality or safety regarding the feature. However, the timing raises eyebrows, especially in light of investigations that have revealed misleading health advice circulating through Google’s AI Overviews, which reach approximately 2 billion users each month.

In January, a report by The Guardian highlighted that many users were being put at risk due to inaccurate health information being disseminated through these AI-generated summaries. While Google initially attempted to downplay the findings, asserting that the AI Overviews referenced reputable sources and encouraged users to seek professional medical advice, the backlash was significant. Consequently, the company removed AI Overviews for certain medical queries but not across the board.

A Brief History of “What People Suggest”

Launched in March of the previous year during an event in New York, “What People Suggest” was positioned as an innovative tool that would allow users to access insights from individuals with similar medical experiences. Karen DeSalvo, then Google’s Chief Health Officer, described the feature as a way to facilitate connections between users facing similar health challenges, stating, “While people come to search to find reliable medical information from experts, they also value hearing from others who have similar experiences.”

The intention was clear: to harness AI capabilities to sift through online discussions and present users with digestible themes that encapsulate real-life experiences. For example, someone suffering from arthritis could quickly discover how others manage their condition, with links for further exploration. Initially made available on mobile devices in the US, the feature garnered mixed reactions from users and experts alike.

The Future of AI in Health Information

As Google prepares for its upcoming “The Check Up” event, expectations are high for what the tech giant plans to unveil next. Chief Health Officer Michael Howell and other key staff are set to discuss how new AI research and technological advancements will tackle pressing global health challenges. However, with the recent troubles surrounding AI-driven health advice, Google will need to tread carefully, ensuring that future innovations do not compromise user safety or well-being.

The spokesperson reiterated that Google remains committed to helping users find trustworthy health information from a variety of sources, including personal narratives that users find valuable. Yet, the withdrawal of “What People Suggest” serves as a cautionary tale about the complexities and responsibilities of integrating AI into sensitive domains like healthcare.

Why it Matters

The discontinuation of the “What People Suggest” feature underscores a pivotal moment in the relationship between technology and health information. As tech companies embed AI into everyday tools, the responsibility to provide accurate and safe advice becomes paramount: misinformation in the health sector can have serious consequences, and users are increasingly turning to digital platforms for guidance. Google’s decision to retract this feature may signal a shift towards a more cautious, responsible approach to AI in healthcare, one that ultimately benefits users and providers alike.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.
© 2026 The Update Desk. All rights reserved.