Key Takeaways:
- The Department of Veterans Affairs (VA) uses artificial intelligence (AI) chatbots to help doctors document patient visits and support clinical decision-making.
- The VA’s Office of Inspector General (OIG) has identified a "potential patient safety risk" in the deployment of generative AI chat tools in clinical settings.
- The OIG found that the Veterans Health Administration (VHA) has no formal mechanism to identify, track, or resolve risks associated with generative AI, and no feedback loop to detect patient safety patterns or improve the quality of AI-assisted clinical care.
- The VA’s use of AI chatbots is part of a broader expansion of AI applications, with 229 AI use cases in operation as of 2024, and plans for further expansion in areas such as clinical documentation and customer support.
Introduction to AI Chatbots in the VA
The Department of Veterans Affairs (VA) has been using artificial intelligence (AI) chatbots to help doctors document patient visits and make clinical decisions. However, according to a report released by the VA’s Office of Inspector General (OIG), there is no formal oversight to ensure that these tools do not put patients at risk. The OIG report states that "VHA does not have a formal mechanism to identify, track or resolve risks associated with generative AI," leaving no feedback loop to detect patterns related to patient safety or to improve the quality of AI-assisted clinical care.
How VA Doctors Use AI Chatbots
Clinicians at VA medical centers provide AI chatbots with clinical information and prompts; the systems generate text from that input, which doctors can then copy into electronic health records. The chatbots are designed to reduce documentation burden and support medical decision-making. The VA has two authorized AI systems: VA GPT, an internal tool developed by the department, and Microsoft 365 Copilot Chat, a commercial product available to all VA employees. According to the VA’s compliance plan, VA GPT has approximately 100,000 users and is estimated to save each user between two and three hours per week.
The Oversight Gap
The OIG’s review revealed that the VA’s AI efforts for healthcare operate through an informal collaboration between the acting director of the VA’s National AI Institute and the chief AI officer within the VA’s Office of Information and Technology. However, these officials did not coordinate with the National Center for Patient Safety when authorizing AI chat tools for clinical use, a departure from VA Directive 1050.01, which requires the Office of Quality Management and the National Center for Patient Safety to "establish and provide operational oversight of VHA quality programs and VHA patient safety programs." As the OIG report notes, "The lack of coordination with the National Center for Patient Safety is a significant concern, as it may lead to a lack of oversight and accountability in the use of AI chatbots in clinical settings."
Why AI Errors Matter in Healthcare
Generative AI systems can produce inaccurate outputs, which can have serious consequences in healthcare. Research published in npj Digital Medicine found that AI-generated medical summaries can omit relevant data or generate false information, errors that could affect diagnoses and treatment decisions. When a doctor uses an AI chatbot to summarize a patient’s medical history or suggest treatment options, any inaccuracy becomes part of the patient’s care. The OIG report underscores the point: "The OIG is concerned about VHA’s ability to promote and safeguard patient safety without a standardized process for managing AI-related risks."
VA’s Broader AI Expansion
The oversight gap comes as the VA rapidly expands its use of artificial intelligence. A July 2025 Government Accountability Office (GAO) report counted 229 VA AI use cases in operation as of 2024. The VA’s September 2025 AI strategy document outlines ambitious plans for AI-assisted clinical documentation, surveillance for health status changes, automated eligibility determination for benefits programs, and AI-enhanced customer support. The strategy emphasizes that the VA is building infrastructure to support "fast, responsible adoption of common AI tooling." As VA press secretary Pete Kasperowicz noted, "VA clinicians only use AI as a support tool, and decisions about patient care are always made by the appropriate VA staff."
What Comes Next
The OIG’s review remains ongoing; the office plans to continue engaging with VA leaders and to publish a comprehensive analysis of this finding, along with any additional findings, in a final report. The inspector general’s decision to release preliminary findings before completing the full review signals the urgency of the concern. As the report states, "Given the critical nature of the issue, the OIG is broadly sharing this preliminary finding so that VHA leaders are aware of this risk to patient safety." The VA’s challenges mirror those facing agencies across the federal government: the July 2025 GAO report found that generative AI use cases across 11 federal agencies increased ninefold between 2023 and 2024.
The Wider Context
The implications of the VA’s oversight gap extend beyond the department. As AI chatbots and other AI applications become more widespread in medicine, formal oversight and accountability are essential to ensuring these tools are used safely and effectively. The VA’s experience underscores the need to prioritize patient safety in the development and deployment of clinical AI. By building standardized processes for managing AI-related risks, the VA and other healthcare organizations can help ensure that AI improves patient care rather than putting it at risk.
https://www.military.com/benefits/veterans-health-care/vas-ai-tools-lack-patient-safety-oversight-watchdog-warns.html


