Google Pulls the Plug on Crowdsourced Health Advice Feature: What’s Next?

Alex Turner, Technology Editor
5 Min Read

In a surprising turn of events, Google has decided to discontinue its controversial “What People Suggest” feature, which aimed to provide users with health advice sourced from individuals sharing their personal experiences. This move comes amidst increasing scrutiny over the quality of health information disseminated by the tech giant, and it raises significant questions about the role of artificial intelligence in our healthcare decisions.

The Rise and Fall of “What People Suggest”

Launched with the promise of connecting users to health insights from people with similar conditions, “What People Suggest” was initially hailed as a groundbreaking way to enhance the search experience. Google touted this feature as a means to tap into the collective wisdom of users, enabling those with ailments like arthritis to discover practical advice from others navigating similar challenges.

In a blog post, Karen DeSalvo, then Google’s chief health officer, emphasised the need for reliable medical information while acknowledging the value of peer experiences. “While people come to search for expert opinions, they also appreciate insights from those who have lived through similar situations,” she wrote. This sentiment encapsulated the essence of the feature, aiming to blend professional advice with real-world experiences.

However, the excitement surrounding this initiative was short-lived. According to sources close to the matter, the feature has now been quietly axed, signalling a dramatic pivot in Google’s approach to health-related information.

Scrutiny Over AI in Healthcare

The decision to scrap “What People Suggest” comes on the heels of a Guardian investigation that raised alarms about the potential dangers of misleading health information generated by Google’s AI. The findings suggested that millions of users could be at risk due to inaccurate summaries shown in Google AI Overviews, which are prominently displayed above regular search results and reach an astounding two billion users monthly.

Initially, Google attempted to downplay these concerns, maintaining that the AI-generated summaries linked to reputable sources and encouraged users to seek professional advice. Yet, the revelation that the “What People Suggest” feature has been discontinued suggests a growing recognition of the risks associated with crowdsourced health information.

A Shift Towards Simplicity

In an official statement, a Google spokesperson confirmed the removal of the feature, asserting it was part of a broader effort to simplify the search results page. “This decision had nothing to do with the quality or safety of the feature,” they stated. Instead, it reflects a strategic move towards streamlining information delivery on the platform.

However, when pressed for details on where this simplification was communicated, the spokesperson pointed to a blog post from November that failed to mention “What People Suggest” at all. This lack of transparency raises further questions about the company’s communication strategy as it navigates the complexities of AI in health.

Looking Ahead: Google’s Health Initiatives

As Google prepares for its next “Check Up” event, where it will unveil new AI advancements and partnerships aimed at tackling pressing health issues, the company is likely to face even more scrutiny. Chief health officer Michael Howell is set to lead discussions on how technology can support global health improvements. The focus will be on balancing innovative AI research with the imperative of ensuring user safety and trust.

Why It Matters

The discontinuation of the “What People Suggest” feature highlights the delicate balance that tech companies must maintain when integrating AI into sensitive areas like healthcare. As users increasingly turn to online platforms for health advice, the responsibility to provide accurate and reliable information becomes critical. This incident serves as a reminder that while technology has the power to transform healthcare access and outcomes, it also carries significant risks that must be carefully managed to protect users and uphold public trust.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.

© 2026 The Update Desk. All rights reserved.