VA Sounds Alarm on Unregulated AI Use in Healthcare

Key Takeaways

  • The Department of Veterans Affairs has issued an urgent advisory regarding the use of two AI chat tools, citing potential patient safety risks.
  • The tools, VA GPT and Microsoft 365 Copilot Chat, are used by healthcare providers to analyze medical information and update patient records.
  • The advisory warns of potential errors, misinformation, and bias in the systems, which could lead to inaccurate diagnoses and treatment decisions.
  • The VA’s Inspector General has recommended a review of the systems and the implementation of safeguards to ensure patient safety.

Introduction to the Advisory

A watchdog inside the Department of Veterans Affairs has issued an urgent advisory regarding the use of two AI chat tools, citing potential patient safety risks. The advisory, released as a Preliminary Result Advisory Memorandum (PRAM), warns of potential errors, misinformation, and bias in the systems that could lead to inaccurate diagnoses and treatment decisions. As Dr. Matthew Miller, former executive director for VA Suicide Prevention, put it, "It’s an official communication from [the Office of the Inspector General] recommending a ‘stopping of the presses’ of the AI tools under review based upon perceived patient safety concerns." The VA’s Inspector General found that the two systems, VA GPT and Microsoft 365 Copilot Chat, are vulnerable to producing misinformation and were deployed without review by the VA’s own patient safety experts.

The Risks of AI Chat Tools

The advisory warns that the AI chat tools can introduce errors or omit correct data in roughly 1% to 3% of cases involving patient medical data. As researchers from TORTUS AI and Guy’s and St Thomas NHS Trust noted, "Errors in clinical documentation generation can lead to inaccurate recording and communication of facts. Inaccuracies in the document summarisation task can introduce misleading details into transcribed conversations or summaries, potentially delaying diagnoses and causing unnecessary patient anxiety." Such errors can feed into inaccurate diagnoses and treatment decisions, with serious consequences for patients. The Inspector General’s office also found that the VA did not consult the agency’s National Center for Patient Safety before fielding these tools, and that it has no formal mechanism in place to identify or fix the risks that come with using generative AI.

The Benefits of AI Chat Tools

Despite the risks, the AI chat tools offer real benefits. Researchers note that AI can help clinicians who spend much of their time on paperwork, which raises the "cognitive load" on doctors and can "lead to burnout." VA GPT, released by the agency as a generative AI pilot, had nearly 100,000 users in September 2025 and was estimated to save each of them two to three hours per week. As Dr. David Shulkin, the former VA Secretary, stated, "I actually think that it is important that veterans know that VA has the best and the most capable technologies for them to get their care, so I don’t think that you want this to be a reason why VA steers away from artificial intelligence." Shulkin added, however, that the report was an "appropriate" warning to ensure the technologies are used properly and that patient safety remains the highest priority.

Response to the Advisory

The Department of Veterans Affairs has responded to the advisory, with a spokesperson stating that "VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff." James McCormick, executive director of government affairs for Vietnam Veterans of America, said his group has not received complaints from members about AI resources in the VA. If issues arise that could put members at risk, however, McCormick said they "certainly support a deeper dive into these tools for corrective action and improvements to ensure only the best quality of care and services for our veterans." The Inspector General has recommended a review of the systems and the implementation of safeguards to ensure patient safety, and the agency is expected to take steps to address the concerns raised in the advisory.

Conclusion

The advisory highlights a tension the VA must now manage: the tools reduce paperwork and save clinicians time, yet they can also introduce errors and omit correct data in clinical records. The Inspector General’s recommended review and safeguards are a first step. As the researchers from TORTUS AI and Guy’s and St Thomas NHS Trust noted, "Not having a process precludes a feedback loop and a means to detect patterns that could improve the safety and quality of AI chat tools used in clinical settings." Until such a process exists, the burden falls on the VA to ensure that patient safety remains the highest priority.

https://taskandpurpose.com/military-life/va-inspector-general-ai-medical-use/
