AI Chatbot Misuse: A Growing Health Technology Threat


Key Takeaways:

  • Artificial intelligence (AI) chatbots in healthcare are a significant concern due to their potential to provide false or misleading information that can harm patients.
  • The use of chatbots can exacerbate existing health disparities and reinforce stereotypes and inequities.
  • ECRI recommends that patients, clinicians, and healthcare personnel educate themselves on the limitations of chatbots and verify information obtained from them with a knowledgeable source.
  • Health systems can promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training, and regularly auditing AI tools’ performance.
  • The Top 10 Health Technology Hazards for 2026 include the misuse of AI chatbots, unpreparedness for digital darkness events, substandard and falsified medical products, and cybersecurity risks from legacy medical devices.

Introduction to Health Technology Hazards
ECRI, an independent, nonprofit patient safety organization, has released its annual report on the most significant health technology hazards for 2026. The report highlights the risks associated with the use of artificial intelligence (AI) chatbots in healthcare, which have become increasingly popular among clinicians, patients, and healthcare personnel. These chatbots, which rely on large language models (LLMs), can provide human-like, expert-sounding responses to users’ questions, but they are not regulated as medical devices and have not been validated for healthcare purposes. As a result, they can provide false or misleading information that can harm patients.

The Risks of AI Chatbots in Healthcare
The use of AI chatbots in healthcare poses significant risks to patient safety. According to ECRI, these chatbots can suggest incorrect diagnoses, recommend unnecessary testing, promote subpar medical supplies, and even invent body parts in response to medical questions. In one example, a chatbot gave dangerous advice when asked about the placement of an electrosurgical return electrode, stating that placing it over the patient’s shoulder blade was acceptable, a practice that could cause burns. These risks are particularly concerning given the growing trend of patients seeking medical advice online, with over 40 million people turning to ChatGPT for health information daily.

Exacerbating Health Disparities
The use of AI chatbots in healthcare can also exacerbate existing health disparities. The data used to train these chatbots can reflect biases and stereotypes, leading to responses that reinforce inequities. This can result in unequal access to healthcare and poor health outcomes for marginalized communities. As Dr. Marcus Schabacker, president and CEO of ECRI, noted, "AI models reflect the knowledge and beliefs on which they are trained, biases and all. If healthcare stakeholders are not careful, AI could further entrench the disparities that many have worked for decades to eliminate from health systems."

Recommendations for Safe Use
To mitigate the risks associated with the use of AI chatbots in healthcare, ECRI recommends that patients, clinicians, and healthcare personnel educate themselves on the limitations of these tools and verify information obtained from them with a knowledgeable source. Health systems can also promote responsible use of AI tools by establishing AI governance committees, providing clinicians with AI training, and regularly auditing AI tools’ performance. By taking these steps, healthcare stakeholders can reduce the risks associated with the use of AI chatbots and ensure that patients receive accurate and reliable medical information.

The Top 10 Health Technology Hazards for 2026
The ECRI report identifies the top 10 health technology hazards for 2026, which include the misuse of AI chatbots, unpreparedness for digital darkness events, substandard and falsified medical products, and cybersecurity risks from legacy medical devices. These hazards pose significant risks to patient safety and underscore the need for healthcare stakeholders to mitigate them proactively. By prioritizing patient safety and addressing these hazards, healthcare organizations can reduce the risk of adverse events and improve health outcomes.

About ECRI
ECRI is an independent, nonprofit organization that is dedicated to improving the safety, quality, and cost-effectiveness of care across all healthcare settings. With a focus on technology evaluation and safety, ECRI is respected and trusted by healthcare leaders and agencies worldwide. The organization has built its reputation on integrity and disciplined rigor, with an unwavering commitment to independence and strict conflict-of-interest rules. ECRI is the only organization worldwide to conduct independent medical device evaluations, with labs located in North America and Asia Pacific. The organization is designated an Evidence-based Practice Center by the U.S. Agency for Healthcare Research and Quality and is a federally certified Patient Safety Organization (PSO) as designated by the U.S. Department of Health and Human Services.
