As individuals increasingly turn to artificial intelligence for health guidance, questions arise regarding the reliability and safety of such advice. From personal anecdotes to expert warnings, the debate around using AI chatbots like ChatGPT, Gemini, and Grok for health-related queries is intensifying. While these technologies promise instant support, their accuracy and potential risks are under scrutiny.
The Growing Popularity of AI for Health Queries
In an age where accessing a GP can be a daunting task, AI chatbots offer an appealing alternative for many, including those like Abi from Manchester. Struggling with health anxiety, Abi has found solace in consulting ChatGPT for tailored health advice. “It allows a kind of problem solving together,” she noted, likening her experience to a conversation with a doctor. However, as the use of AI expands, it raises critical concerns about the quality of information being provided.
Abi’s experiences illustrate both the advantages and pitfalls of relying on AI for health-related inquiries. When she suspected a urinary tract infection, ChatGPT directed her to a pharmacist, leading to a timely antibiotic prescription. This interaction highlighted the chatbot’s potential to facilitate healthcare access without burdening the NHS. Yet, this positive experience was overshadowed by a more alarming incident. After a hiking accident that caused severe back pain, an urgent recommendation from ChatGPT to seek immediate A&E care proved to be a misjudgement, as Abi later realised she was not critically injured.
The Accuracy Dilemma: Human Interaction versus AI Responses
The gap between AI’s performance in controlled tests and in real-world use raises significant questions about its reliability. A recent study from the Reasoning with Machines Laboratory at the University of Oxford found that while chatbots achieved 95% accuracy when presented with complete medical scenarios written by doctors, accuracy fell to just 35% in real-world interactions with users. The study found that the nuances of human dialogue often led to miscommunication, resulting in incorrect diagnoses and advice.
Professor Adam Mahdi, who conducted the research, explained that conversational dynamics, such as incomplete information and distractions, can severely degrade the accuracy of AI responses. In one troubling example, a user described stroke symptoms, but the chatbot’s advice varied drastically depending on the details provided, a failure that could endanger lives. By contrast, users who ran traditional internet searches were more likely to reach reliable sources, such as NHS websites, and came away better informed.
Expert Opinions: Caution in AI Utilisation
Experts are voicing concerns about the implications of AI in health advice. Professor Chris Whitty, England’s Chief Medical Officer, underscored the urgency of this issue, stating that while people are increasingly relying on AI, the quality of responses remains alarmingly inadequate. “Answers are not good enough,” he remarked, noting the confidence with which chatbots deliver incorrect information.
Dr Margaret McCartney, a Glasgow-based GP, highlighted the psychological impact of interacting with chatbots. Users may perceive a personalised relationship with AI, which can skew their interpretation of the advice received. In contrast to traditional searches that present a variety of sources and reliability indicators, AI interactions create an illusion of authority and immediacy, potentially leading to misguided trust.
Adding to the discourse, a recent analysis by The Lundquist Institute for Biomedical Innovation in California found that over half of the responses from various AI chatbots contained problematic elements. In tests on topics such as cancer treatment and nutrition, several chatbots offered misleading information. Lead researcher Dr Nicholas Tiller stressed the importance of scepticism when engaging with AI-generated health advice, suggesting that users verify information independently, much as they would with any other source of medical information.
The Future of AI in Healthcare: A Cautious Approach
OpenAI, the company behind ChatGPT, acknowledges the growing reliance on AI for health information and is taking steps to ensure the safety and reliability of its responses. They emphasise that while improvements are being made, AI should never replace professional medical advice.
Abi continues to utilise AI chatbots for health queries but advises caution: “I wouldn’t trust that anything it’s saying is absolutely right.” Her experiences underscore the necessity of critical thinking and discernment when engaging with AI technologies.
Why It Matters
The increasing dependence on AI chatbots for health advice signals a transformative shift in how individuals access information and manage their health. While the allure of instant, personalised support is undeniable, the implications for public health are profound. Misguided trust in AI could lead to dangerous misdiagnoses and inadequate care, ultimately undermining the very system it seeks to complement. As technology evolves, society must ensure that the integration of AI in healthcare prioritises safety, accuracy, and informed decision-making to safeguard public health.