In a surprising turn of events, Google has decided to discontinue its controversial “What People Suggest” feature, which aimed to provide users with health advice sourced from individuals sharing their personal experiences. This move comes amidst increasing scrutiny over the quality of health information disseminated by the tech giant, and it raises significant questions about the role of artificial intelligence in our healthcare decisions.
The Rise and Fall of “What People Suggest”
Launched with the promise of connecting users to health insights from people with similar conditions, “What People Suggest” was initially hailed as a groundbreaking way to enhance the search experience. Google touted this feature as a means to tap into the collective wisdom of users, enabling those with ailments like arthritis to discover practical advice from others navigating similar challenges.
In a blog post, Karen DeSalvo, then Google’s chief health officer, emphasised the need for reliable medical information while acknowledging the value of peer experiences. “While people come to search for expert opinions, they also appreciate insights from those who have lived through similar situations,” she wrote. This sentiment captured the essence of the feature: blending professional advice with real-world experience.
However, the excitement surrounding this initiative was short-lived. According to sources close to the matter, the feature has now been quietly axed, signalling a dramatic pivot in Google’s approach to health-related information.
Scrutiny Over AI in Healthcare
The decision to scrap “What People Suggest” comes on the heels of a Guardian investigation that raised alarms about the potential dangers of misleading health information generated by Google’s AI. The findings suggested that millions of users could be at risk due to inaccurate summaries shown in Google AI Overviews, which are prominently displayed above regular search results and reach an astounding two billion users monthly.

Initially, Google downplayed these concerns, maintaining that the AI-generated summaries linked to reputable sources and encouraged users to seek professional advice. Yet the quiet removal of “What People Suggest” points to a growing recognition of the risks of crowdsourced health information.
A Shift Towards Simplicity
In an official statement, a Google spokesperson confirmed the removal of the feature, asserting that it was part of a broader effort to simplify the search results page. “This decision had nothing to do with the quality or safety of the feature,” they stated, framing it instead as a strategic move towards streamlining how information is delivered on the platform.
However, when pressed for details on where this simplification was communicated, the spokesperson pointed to a blog post from November that failed to mention “What People Suggest” at all. This lack of transparency raises further questions about the company’s communication strategy as it navigates the complexities of AI in health.
Looking Ahead: Google’s Health Initiatives
As Google prepares for its next “Check Up” event, where it will unveil new AI advancements and partnerships aimed at tackling pressing health issues, the company is likely to face even more scrutiny. Chief health officer Michael Howell is set to lead discussions on how technology can support global health improvements. The focus will be on balancing innovative AI research with the imperative of ensuring user safety and trust.

Why it Matters
The discontinuation of the “What People Suggest” feature highlights the delicate balance that tech companies must maintain when integrating AI into sensitive areas like healthcare. As users increasingly turn to online platforms for health advice, the responsibility to provide accurate and reliable information becomes critical. This incident serves as a reminder that while technology has the power to transform healthcare access and outcomes, it also carries significant risks that must be carefully managed to protect users and uphold public trust.