Navigating the Risks: The Growing Role of AI Chatbots in Health Advice

Emily Watson, Health Editor
6 Min Read

As digital health tools continue to evolve, many individuals are turning to AI chatbots for medical guidance. While these platforms can provide quick answers, the question remains: how reliable are they when it comes to health-related inquiries? Users like Abi from Manchester have found AI chatbots like ChatGPT helpful, but recent scrutiny highlights both the potential benefits and pitfalls of relying on artificial intelligence for medical advice.

The Appeal of AI for Health Management

In an age when accessing a healthcare professional can often feel daunting, chatbots present a convenient alternative. Abi, who experiences health anxiety, appreciates the tailored responses from AI compared to traditional internet searches, which can sometimes surface overwhelming and frightening results. “It allows a kind of problem solving together,” she shares, likening the experience to a conversation with a doctor.

Over the past year, Abi has turned to ChatGPT for various health concerns. For instance, when she suspected a urinary tract infection, the chatbot advised her to consult a pharmacist, leading to a timely prescription. This interaction alleviated her concerns about burdening the NHS with minor issues and provided her with confidence in her decision-making.

The Flip Side of AI Health Advice

However, not all experiences have been positive. In January, following an unfortunate fall during a hike, Abi sought assistance from her AI companion after sustaining a painful back injury. The chatbot suggested she might have punctured an organ and should visit an emergency department immediately. After three hours in A&E, she discovered her injury was not as severe as the AI had indicated. “The AI had clearly got it wrong,” she reflects.

This incident raises significant concerns about the reliability of AI-generated health recommendations. The rise in AI usage for medical queries is undeniable, yet experts caution that the accuracy of such advice is not guaranteed.

Expert Opinions on AI Chatbots’ Reliability

Professor Sir Chris Whitty, England’s Chief Medical Officer, has expressed concerns regarding the quality of health information provided by AI. He noted that while these tools are increasingly popular, the responses they generate are often “not good enough” and can be misleading. “They can be both confident and wrong,” he warned, highlighting the danger of placing too much trust in technology that may not fully understand the nuances of medical conditions.

Research from the Reasoning with Machines Laboratory at the University of Oxford further illustrates this point. In a controlled study, chatbots given complete medical scenarios achieved 95% accuracy. When individuals had to interact with the AI themselves to seek a diagnosis, however, accuracy plummeted to just 35%. The researchers noted that human interactions often leave out key details, and that incomplete information can lead to incorrect advice.

Dr Margaret McCartney, a general practitioner in Glasgow, emphasises the difference between AI summarising information and the analytical process involved in conducting a traditional internet search. She argues that while chatbots may create a perception of a personalised experience, users must remain vigilant about the reliability of the information they receive.

The Challenge of Misinformation

Recent studies from The Lundquist Institute for Biomedical Innovation in California have also highlighted the potential for AI chatbots to disseminate misinformation. During their analysis, researchers posed deliberately tricky questions to various AI platforms, and over half of the answers were deemed problematic. For example, when inquiring about alternative cancer treatments, an AI suggested naturopathy, which could mislead users seeking legitimate medical guidance.

Dr Nicholas Tiller, the lead researcher, points out the inherent issue with the technology. Chatbots are designed to generate confident responses based on language patterns, which may not correlate with factual accuracy. “If you are asking anybody in the street a question, and they gave you a very confident answer, would you just believe them?” he questions, urging caution among users.

OpenAI, the company behind ChatGPT, acknowledges the growing demand for health information through AI. They assert that improvements have been made to enhance the reliability of their responses, yet they maintain that AI should not replace professional medical advice.

Conclusion

Abi continues to utilise AI chatbots, but she advises others to approach the information with caution. “I wouldn’t trust that anything it’s saying is absolutely right,” she warns, reminding users that these tools can and do make mistakes.

Why it Matters

As more individuals turn to AI for health advice, it is crucial to recognise both the potential benefits and the inherent risks. While AI chatbots can offer immediate support and guidance, they are not a substitute for professional medical care. Understanding their limitations is essential for anyone seeking health advice in this rapidly evolving digital landscape. Users should remain informed and critical, ensuring they complement AI interactions with traditional sources of medical expertise.

Emily Watson is an experienced health editor who has spent over a decade reporting on the NHS, public health policy, and medical breakthroughs. She led coverage of the COVID-19 pandemic and has developed deep expertise in healthcare systems and pharmaceutical regulation. Before joining The Update Desk, she was health correspondent for BBC News Online.

© 2026 The Update Desk. All rights reserved.