In Southern California, where homelessness rates are alarmingly high, a private company named Akido Labs is offering clinics specifically for individuals experiencing homelessness and those with limited incomes. A concerning aspect of its approach, however, is that patients are primarily seen by medical assistants who utilise artificial intelligence (AI) to analyse conversations and suggest possible diagnoses and treatment plans; these recommendations are only later reviewed by a physician. Akido Labs’ chief technology officer has stated that the aim is to “pull the doctor out of the visit,” a goal that raises significant ethical and practical questions.
The Growing Role of AI in Healthcare
The introduction of AI in medical settings is part of a broader trend that has gained momentum across the United States. According to a 2025 survey conducted by the American Medical Association, approximately two-thirds of physicians are incorporating AI into their daily practices, particularly for diagnosis. A notable AI startup recently secured $200 million to develop an application dubbed “ChatGPT for doctors,” and legislation is under consideration that could allow AI to prescribe medication. While these developments may promise efficiency, they risk exacerbating existing disparities in healthcare access and treatment for low-income populations.
The reality is stark: individuals who are unhoused or financially disadvantaged are already navigating a healthcare landscape fraught with obstacles, including inadequate resources and systemic bias. Subjecting these vulnerable groups to AI-based healthcare solutions without their input or consent raises serious ethical concerns.
The Dangers of AI-Driven Diagnostics
Critics highlight that relying on AI for diagnostics can lead to significant inaccuracies, particularly among marginalised groups. Research published in *Nature Medicine* in 2021 revealed that AI algorithms used for chest X-ray analysis systematically underdiagnosed Black and Latinx patients, as well as those identified as female and those reliant on Medicaid. Such biases not only threaten patient safety but also reinforce existing health inequities by denying proper care to those who are already disadvantaged.
A 2024 study further illustrated the risks of AI misdiagnosis, revealing that Black patients experienced a higher likelihood of false positives in breast cancer screenings compared to their white counterparts. This alarming trend underscores the urgent need for healthcare systems to reassess the role of AI, particularly in the context of communities already facing significant health challenges.
Moreover, there are instances where patients are unaware that AI systems are involved in their healthcare decisions. One medical assistant disclosed that while his patients know an AI tool is “listening,” they are not informed that it plays a role in making diagnostic recommendations. Such practices echo a troubling history of medical exploitation, where informed consent was often disregarded for marginalised groups.
The Legal Ramifications of AI in Healthcare
The ramifications of AI’s integration into healthcare are evident in ongoing legal disputes. In 2023, a group of Medicare Advantage customers filed a lawsuit against UnitedHealthcare in Minnesota, claiming they were wrongly denied coverage because an AI system, nH Predict, inaccurately assessed their eligibility for care. Tragically, some plaintiffs allege that these denials led to the deaths of patients who could not obtain the treatment they needed. A similar case against Humana in Kentucky suggests a troubling trend in which AI mismanagement can have dire consequences for vulnerable populations.
As these cases unfold, they highlight a pressing issue: the reliance on AI for critical healthcare decisions can perpetuate a cycle of neglect towards those who are already economically disadvantaged. While those with financial resources can access quality care, the use of AI in healthcare threatens to further marginalise individuals with low incomes, effectively creating a system of medical classism.
Why It Matters
The use of AI in healthcare should not come at the expense of vulnerable populations. It is vital that individuals who are unhoused or economically disadvantaged receive patient-centred care from providers who genuinely listen to their needs. The push for AI solutions, driven by profit motives and efficiency, risks sidelining the human element that is essential to effective medical care. As these technologies advance, we must ensure that patients are empowered in their healthcare decisions rather than subjected to untested AI systems that may deepen existing health disparities. The stakes are too high to let technology dictate the future of healthcare without the voices of those most affected being heard.