In a significant retreat from its ambitions to harness artificial intelligence for health guidance, Google has discontinued its “What People Suggest” feature. Initially conceived as a means to provide users with crowdsourced health advice from individuals with similar experiences, the decision to scrap the tool highlights the growing concerns surrounding the reliability of AI-generated medical information. This move reflects the broader challenges tech companies face in ensuring user safety while leveraging advanced technologies.
The Rise and Fall of “What People Suggest”
Google introduced “What People Suggest” at its “The Check Up” event in March 2025, pitching it as a way to broaden access to health-related insights. The feature let users glean advice from strangers who had shared their health journeys, fostering a sense of community among people facing similar medical conditions. Karen DeSalvo, then Google’s chief health officer, heralded the initiative as a transformative step in how people could access health information, emphasizing the value of peer experiences alongside expert advice.
However, the feature faced substantial backlash, particularly after a Guardian investigation in January 2026 revealed that users were often exposed to misleading health information. Scrutiny intensified when the investigation found that roughly two billion users were being served AI-generated health summaries, which some experts deemed potentially harmful. In response, Google removed AI Overviews for certain medical queries, though concerns about the feature lingered.
Google’s Justification and Future Plans
Despite the uproar, a Google spokesperson asserted that discontinuing “What People Suggest” was part of a broader effort to simplify the search results page rather than a direct response to safety concerns. The spokesperson insisted the decision was unrelated to the feature’s quality and reiterated the company’s commitment to providing reliable health information from a variety of sources, including first-person accounts.
This latest development comes as Google prepares for its upcoming “The Check Up” event, where the company aims to showcase new innovations and partnerships designed to address pressing global health issues. Chief health officer Michael Howell and other officials are expected to discuss how the company plans to integrate new AI research into its health offerings, despite the recent setbacks.
The Implications of Google’s Decision
The withdrawal of “What People Suggest” raises critical questions about the role of technology in healthcare. As Google and other tech giants strive to innovate in this domain, the conversation surrounding the quality and safety of AI-generated medical advice becomes increasingly pertinent. The backlash against the feature underscores the importance of balancing technological advancement with user safety and trust.
Why It Matters
The discontinuation of “What People Suggest” is not merely a corporate setback; it marks a pivotal moment at the intersection of technology and healthcare. As users increasingly rely on digital platforms for health information, tech companies’ responsibility to ensure accuracy and reliability cannot be overstated. Google’s retreat is a reminder that while AI holds vast potential, it must be wielded with caution, particularly in contexts where misinformation can have serious consequences for public health.
