Experts Warn: AI Chatbots Could Mislead Users in Health and Medical Queries

Alex Turner, Technology Editor
4 Min Read

In an age where technology has revolutionised the way we seek information, experts are sounding the alarm over the risks associated with using AI chatbots for health advice. A recent study has revealed that these digital assistants, including popular platforms such as ChatGPT and Grok, often provide inaccurate or misleading medical information, raising serious concerns about their reliability in sensitive areas like mental health and medical guidance.

The Problem with AI Responses

A comprehensive investigation into the performance of five mainstream AI chatbots posed 50 critical medical questions, ranging from the efficacy of vitamin D in cancer prevention to the safety of Covid-19 vaccinations. The study found that a staggering 50% of the responses were deemed “problematic.” Grok emerged as the worst offender, with 58% of its answers flagged, followed by ChatGPT at 52% and Meta AI at 50%.

Researchers attributed these inaccuracies to a phenomenon known as “hallucination,” where chatbots generate erroneous responses stemming from biased or incomplete training data. They noted that while these models are fine-tuned with human feedback, they often prioritise answers that align with user expectations rather than factual accuracy.

A Closer Look at the Findings

The study, conducted by researchers from the University of Alberta and Loughborough University, highlighted specific areas where chatbots faltered. Questions related to vaccines and cancer saw better performance, yet the inaccuracies soared in topics like stem cell therapies and nutritional advice. This inconsistency raises a crucial question: how can we trust these technologies when they produce authoritative-sounding but potentially flawed responses?

The research revealed that only 32% of citations from ChatGPT and similar chatbots were accurate, with nearly half being at least partially fabricated. This level of misinformation is alarming, particularly in a field where accurate information can be a matter of life and death.

The Need for Oversight and Education

Given the prevalence of AI chatbots in everyday life, the researchers stressed the importance of careful implementation and monitoring. They cautioned that these tools are not licensed to dispense medical advice and may lack access to the latest medical knowledge. The findings underscore the urgent need for public education and professional training to ensure that AI can enhance, rather than undermine, public health initiatives.

As AI technology continues to evolve and proliferate, the research team emphasised the necessity for regulatory oversight to protect users from the pitfalls of misinformation. They asserted, “As the use of AI chatbots continues to expand, our data highlight a need for public education, professional training, and regulatory oversight to ensure that generative AI supports, rather than erodes, public health.”

Why it Matters

As the reliance on digital tools for health information grows, the implications of this study are profound. Misinformation can lead to misguided health decisions, affecting individuals and communities at large. It is imperative for users to remain vigilant and critical of the information they receive from AI chatbots, and for developers to ensure these systems are equipped with the necessary safeguards. In an era where health information is more accessible than ever, ensuring its accuracy is not just a technical challenge—it’s a moral imperative.

Alex Turner has covered the technology industry for over a decade, specializing in artificial intelligence, cybersecurity, and Big Tech regulation. A former software engineer turned journalist, he brings technical depth to his reporting and has broken major stories on data privacy and platform accountability. His work has been cited by parliamentary committees and featured in documentaries on digital rights.