A recent investigation into the cancer-treatment recommendations of AI chatbots has sparked alarm among health professionals. Researchers from the Lundquist Institute for Biomedical Innovation at Harbor-UCLA Medical Center found that these systems often suggest alternative therapies in place of conventional chemotherapy, potentially endangering patients’ lives.
The Study’s Findings
The research scrutinised popular AI-driven chatbots, including xAI’s Grok, OpenAI’s ChatGPT, Google’s Gemini, Meta AI, and High-Flyer’s DeepSeek. Alarmingly, nearly 50% of the responses concerning cancer treatments were deemed “problematic” by expert reviewers, as detailed in a study published in BMJ Open: 30% of all responses were categorised as “somewhat problematic” (accurate but incomplete), while a further 19.6% were “highly problematic”, marked by substantial inaccuracies and demanding significant subjective interpretation from users.
Nicholas Tiller and his team employed a technique known as “straining,” wherein they posed inquiries designed to elicit responses rooted in misinformation. Their goal was to simulate the experience of a typical user, who might approach these chatbots similarly to a search engine. Tiller noted, “Many individuals are asking precisely these questions. If someone believes that raw milk has health benefits, their search terms will already align with that belief.”
Misleading Alternatives
When queried about alternative treatments that might outperform chemotherapy, the chatbots generally cautioned against such options, warning that they could be harmful and often lack scientific validation. Yet the same responses went on to list those alternatives, including acupuncture, herbal remedies, and specific dietary strategies, which could mislead patients seeking effective treatment.
Some responses even included references to clinics that advocate alternative therapies while disparaging chemotherapy. Tiller remarked that the chatbots’ tendency to present a “false balance”, giving equal weight to scientific evidence and non-scientific sources, hinders their ability to deliver definitive, evidence-based responses. This could divert patients from effective, medically approved treatments towards unverified alternatives that may jeopardise their health.
The researchers concluded that the chatbots’ performance on health and medical questions was seriously inadequate, and they stressed the urgent need for public education and oversight to mitigate the spread of misinformation.
Growing Reliance on AI for Medical Advice
The implications of this study are particularly concerning given that approximately one in four adults in the United States now uses AI tools for health-related inquiries, according to a recent Gallup poll. Many turn to these technologies for quick answers, often in place of traditional consultations, owing to rising healthcare costs and accessibility barriers. Yet only a third of users expressed confidence in the accuracy of AI-generated advice, suggesting that most remain sceptical.
Dr. Michael Foote, an assistant attending physician at Memorial Sloan Kettering Cancer Center who did not participate in the study, emphasised the risks posed by the spread of misinformation about alternative treatments. He explained, “Some of these unverified treatments can cause real harm. They are not evaluated by the FDA and may lead to liver damage or metabolic issues, especially when patients forgo conventional therapies in their favour.”
Dr. Foote also described the emotional toll that erroneous chatbot responses can take on patients. He recounted instances where distressed, anxious patients reported being told by AI chatbots that they had only months to live, a claim he dismissed as utterly unfounded.
Why It Matters
The findings underscore a critical intersection of technology and public health. As reliance on AI chatbots for medical guidance grows, misinformation that shapes treatment decisions poses a serious threat to patient safety. These tools offer convenience and speed, which makes rigorous standards for accuracy all the more important. Users must be equipped to distinguish reliable information from dubious sources, so that their health decisions are informed by evidence rather than conjecture.