Google has removed a search feature that used artificial intelligence to extract and organize health advice from internet forum discussions. "What People Suggest" presented community-sourced health insights in themed summaries to users conducting medical searches. Its removal was confirmed by three sources with direct knowledge of the decision and later acknowledged by Google's communications team.
The feature was announced at Google's annual health event in New York, where it was championed by Karen DeSalvo, then the company's chief health officer. DeSalvo wrote that the feature would help users access the experiences of people managing similar health conditions, noting that for many searchers peer perspectives are as valuable as clinical information. The tool was initially deployed to mobile users in the United States.
Google attributed the removal to simplifying the search page and said safety was not a factor. However, the blog post the company cited as its public disclosure, written by a Switzerland-based employee, made no mention of the discontinued feature. Critics questioned whether Google's account of the decision was complete and accurate.
The removal compounds concerns already raised about the accuracy of Google's AI Overviews. An investigation found that these AI-generated health summaries, shown to approximately two billion users monthly, contained false and misleading medical information. Google made limited adjustments following the investigation, but the broader issue of AI health misinformation on its search platform has not been fully resolved.
With a new health event on the horizon, Google has an opportunity to begin restoring trust in its approach to health AI. But doing so will require more than impressive new announcements: it will require honest engagement with what has failed and a concrete commitment to doing better. "What People Suggest" is a clear case study in what that means in practice.