Navigating the Risks: Trusting AI Chatbots for Health Advice

Emily Watson, Health Editor
6 Min Read
In an age where accessing medical guidance can be challenging, many individuals are turning to AI chatbots for health-related queries. While these technologies, such as ChatGPT, Gemini, and Grok, offer 24/7 support, the question remains: can they be relied upon for accurate medical information? A growing number of users, including those like Abi from Manchester, have shared both positive and negative experiences, highlighting the need for caution when consulting these digital assistants.

The Appeal of AI Chatbots

For Abi, health anxiety drives her to seek advice from AI chatbots. “It allows a kind of problem-solving together,” she notes, likening the interaction to a conversation with a doctor. The convenience of not having to wait for a GP appointment is a significant advantage, especially for those who find it challenging to gauge when a visit to the doctor is necessary.

Over the past year, Abi has turned to ChatGPT for various health concerns. In one instance, when she suspected a urinary tract infection, the chatbot advised her to consult a pharmacist, leading to a successful diagnosis and treatment. “It got me the care I needed without feeling like I was taking up NHS time,” she explains, appreciating the ease of access to tailored advice.

The Dangers of Misinformation

However, not all experiences have been positive. During a hiking accident that left Abi with severe back pain, she sought guidance from the AI. The chatbot alarmingly suggested that she might have punctured an organ and needed immediate emergency care. After a lengthy wait in A&E, she discovered that her condition was not as dire as the AI had indicated. “It clearly got it wrong,” she reflects, underscoring the potential risks associated with relying on AI for urgent medical situations.

Abi’s experiences resonate with concerns raised by experts in the medical field. Prof Sir Chris Whitty, England’s Chief Medical Officer, recently stated that while AI chatbots are becoming increasingly popular, the quality of their advice is often insufficient. He warned that these technologies frequently provide confident yet inaccurate responses, which can lead to misdiagnoses and inappropriate care.

Research Findings on AI Accuracy

The University of Oxford’s Reasoning with Machines Laboratory has begun examining the capabilities and limitations of AI chatbots in medical contexts. In controlled tests where doctors created realistic health scenarios, the chatbots demonstrated a remarkable 95% accuracy rate. However, this figure plummeted to just 35% when individuals engaged in typical conversational exchanges with the AI, revealing the complexities of human-AI interaction.

“People share information gradually, which can lead to distractions and omissions,” explains researcher Prof Adam Mahdi. This variability in communication can result in chatbots offering incorrect guidance, particularly in critical scenarios. For instance, a life-threatening condition like a subarachnoid haemorrhage should never be treated with bed rest, yet subtle differences in symptom descriptions can lead to dangerously misleading advice.

The Need for Human Oversight

Dr Margaret McCartney, a Glasgow-based GP, emphasises the distinction between AI-driven advice and traditional internet searches. While chatbots create the illusion of personalised care, they lack the context and reliability that come with reputable medical sources. “With a Google search, you are presented with a range of resources that help you assess reliability,” she explains.

A recent analysis by The Lundquist Institute for Biomedical Innovation found that AI chatbots often disseminate misinformation, especially when posed with challenging questions. In tests involving critical health topics, over half of the responses were deemed problematic. For instance, when asked about alternative cancer treatments, some chatbots confidently suggested naturopathy, despite the lack of scientific backing.

Dr Nicholas Tiller, leading the research, cautions that the inherent design of AI systems—predicting responses based on language patterns—poses fundamental issues when applied to health advice. He advocates for caution, suggesting that users should approach AI-generated information with a critical mindset, much like they would when receiving opinions from unverified individuals.

The Role of AI in Future Healthcare

OpenAI, the company behind ChatGPT, acknowledges the growing reliance on their tool for health inquiries. They are actively collaborating with clinicians to enhance the accuracy and safety of their responses. Nonetheless, they reiterate that AI should serve as an educational resource rather than a substitute for professional medical consultation.

Despite the mixed results, Abi continues to use AI chatbots for health information, advocating for a balanced approach. “I take everything with a pinch of salt,” she advises, urging others to remain sceptical about the accuracy of AI-generated advice. “I wouldn’t trust that anything it’s saying is absolutely right.”

Why It Matters

As AI chatbots become increasingly integrated into our search for health information, it is crucial to remain vigilant about their limitations. While they offer convenience and immediate access to advice, the risks associated with misinformation can have serious consequences. Understanding the role of AI in healthcare and knowing when to seek professional guidance can safeguard individuals from potential harm, ensuring that technology serves as a helpful ally rather than a dangerous substitute.

Emily Watson is an experienced health editor who has spent over a decade reporting on the NHS, public health policy, and medical breakthroughs. She led coverage of the COVID-19 pandemic and has developed deep expertise in healthcare systems and pharmaceutical regulation. Before joining The Update Desk, she was health correspondent for BBC News Online.

© 2026 The Update Desk. All rights reserved.