As artificial intelligence continues to integrate into daily life, many individuals are turning to AI chatbots for health-related queries. A recent examination of this trend raises critical questions about the reliability of such technology. The convenience of around-the-clock access to information is compelling, but individuals are left weighing the benefits against potential risks, especially when it comes to their health.
The Allure of AI for Health Management
For many, the healthcare system can feel daunting and inaccessible. Long waiting times for appointments and a shortage of available medical professionals have motivated individuals like Abi, a Manchester resident, to seek answers from AI chatbots such as ChatGPT. For a year, Abi has used the chatbot to manage her health concerns, finding its tailored responses preferable to traditional internet searches, which often surface frightening possibilities.
“It allows a kind of problem solving together,” Abi noted, likening her interactions with the AI to consulting with a doctor. In one instance, when she suspected a urinary tract infection, ChatGPT advised her to visit a pharmacist. This guidance led to a timely antibiotic prescription, alleviating her concerns about unnecessarily burdening the NHS.
However, the experience has not been universally positive. After a hiking accident that resulted in severe back pain, Abi consulted the chatbot again. To her alarm, ChatGPT suggested that she had potentially punctured an organ and needed immediate hospital care. After three hours in the emergency department, she learned that her condition was not life-threatening—an instance where the AI had gravely misdiagnosed her situation.
The Dangers of Misinformation
The growing popularity of AI chatbots for health advice has raised alarms among medical professionals. England’s Chief Medical Officer, Prof Sir Chris Whitty, has expressed concern that while many people rely on these tools, the quality of the information they provide is often lacking. “We’re at a particularly tricky point because people are using them,” he stated, pointing out that the answers delivered by these chatbots are frequently “not good enough” and can be “both confident and wrong.”
Research from the University of Oxford’s Reasoning with Machines Laboratory highlights the stark difference in accuracy when AI chatbots assess health scenarios from comprehensive information versus ordinary user interactions. When given complete data, chatbots achieved a 95% accuracy rate; when individuals instead sought a diagnosis through conversation, accuracy plummeted to just 35%. This drop underscores the complexities of human-AI interaction, where omitted details and conversational distractions lead to misdiagnoses.
The Challenge of Trusting AI
Dr. Margaret McCartney, a GP in Glasgow, elaborates on the inherent differences between engaging with a chatbot and conducting a traditional internet search. “It seems like you’re having a personal relationship with a chatbot,” she remarked, whereas a search engine provides various sources that can indicate reliability. This perceived intimacy can skew users’ interpretations of the advice received, potentially leading them to trust the chatbot’s recommendations without sufficient scrutiny.
A recent study by The Lundquist Institute for Biomedical Innovation also illustrates the potential for AI to propagate misinformation. When researchers posed challenging health-related questions to various chatbots, including Gemini, DeepSeek, and ChatGPT, more than half of the responses were deemed problematic. In one instance, a chatbot suggested that naturopathy could be a viable alternative treatment for cancer, rather than clarifying that no alternative treatment has been proven to treat the disease effectively.
Striking a Balance
The rapid advancement of chatbot technology complicates the evaluation of its reliability. As these tools evolve, the information they provide may improve, but researchers like Dr. Nicholas Tiller warn that there remains a fundamental issue: the algorithms are designed to generate text based on language patterns rather than to offer accurate medical advice. Tiller advocates for caution, suggesting that users should approach AI-generated health information with a critical mindset. “You wouldn’t just take anyone’s confident answer at face value,” he cautions, emphasising the need for verification.
OpenAI, the company behind ChatGPT, acknowledges the growing use of its technology for health inquiries and asserts its commitment to improving response reliability. Despite advancements, the company reiterates that AI chatbots should serve as supplementary sources of information, rather than replacements for professional medical advice.
Abi concurs, advising users to maintain a healthy scepticism when interacting with AI chatbots. “I wouldn’t trust that anything it’s saying is absolutely right,” she cautioned, highlighting the precarious nature of relying on technology for health decisions.
Why It Matters
The increasing reliance on AI chatbots for health guidance embodies a significant shift in how individuals seek medical information. While these tools offer unprecedented convenience, their propensity for misinformation poses serious risks, particularly in critical health scenarios. As the public navigates this new landscape, understanding the limitations of AI in healthcare is vital. Users must remain vigilant, ensuring that they complement AI interactions with professional medical advice to safeguard their health and well-being.