In a significant retreat, Google has discontinued its “What People Suggest” feature, which provided crowdsourced health advice from individuals sharing personal experiences. The decision comes amid growing scrutiny of the tech giant’s use of artificial intelligence to dispense health-related information. The feature, which aimed to harness collective wisdom to enhance user experience, was quietly shelved, prompting questions about the safety and reliability of such initiatives.
A Shift in Strategy
Launched with the ambition of transforming health outcomes through AI, “What People Suggest” was designed to present users with insights from those with similar medical conditions. Google positioned this feature as a way to complement professional medical advice with user-generated content. However, the company has now confirmed that this feature is no longer active, attributing its removal to a broader effort to streamline its search results layout rather than concerns over safety or effectiveness.
A spokesperson for Google stated, “This feature was turned down months ago as part of a broader simplification of the search results page.” However, the lack of clarity regarding the exact timing and reasoning behind the discontinuation has raised eyebrows among industry observers.
Controversies Surrounding AI Health Information
The move comes at a precarious time for Google, which is already under fire for its AI-generated health summaries. A recent investigation found that misleading information from Google’s AI could endanger users seeking reliable health advice. These AI-generated summaries reach an estimated 2 billion users each month and appear prominently above traditional search results, amplifying concerns about their accuracy.

In early 2026, findings from a report conducted by a major publication revealed that users were at risk due to the prevalence of false and misleading health information in Google’s AI Overviews. Although Google initially defended the feature by asserting it linked to reputable sources and encouraged users to seek professional advice, it soon took action by removing AI Overviews for certain medical queries.
The Rise and Fall of User-Driven Health Insights
Introduced at a health-focused event in March 2025, “What People Suggest” was promoted by Karen DeSalvo, Google’s then-chief health officer. She expressed the need for users to access information from individuals with similar health challenges, suggesting that personal narratives could complement expert advice. The feature was initially rolled out on mobile devices in the United States, reflecting Google’s ambition to innovate healthcare communication through technology.
Yet user-generated content in sensitive areas like health is fraught with challenges. The risk of misinformation can outweigh the benefits of shared experiences, a tension that may have contributed to the feature’s demise. One insider remarked, “It’s dead,” signalling a clear end to the initiative.
Future Directions for Google Health Initiatives
As Google grapples with these recent developments, the company is set to host its next health event, “The Check Up,” where it aims to highlight advancements in AI research and collaborative efforts to tackle pressing health challenges. Chief health officer Michael Howell is expected to discuss new technological innovations and partnerships that may redefine how users access health information.

The removal of “What People Suggest” may signal a shift in Google’s approach to health-related features, focusing on more reliable and expert-driven content rather than crowdsourced advice.
Why It Matters
The discontinuation of the “What People Suggest” feature underscores a crucial intersection of technology and healthcare. As Google navigates the complexities of providing health information, the episode prompts an important dialogue about the responsibility of tech companies to ensure user safety. The challenge lies in balancing innovative AI applications with the need for accurate, trustworthy health advice. In an era where misinformation can have tangible consequences, it is imperative that tech giants prioritise public health over experimental features. This decision may pave the way for a more cautious yet responsible approach to health information dissemination.