Key Takeaways
- AI‑enabled threats are growing: Bad actors are increasingly using artificial intelligence to automate attacks, evade detection, and personalize social engineering schemes.
- Proactive risk monitoring is essential: Organizations must continuously assess emerging AI‑related vulnerabilities, including model poisoning, data manipulation, and deep‑fake fraud.
- Responsible AI adoption mitigates risk: Implementing governance frameworks, rigorous testing, and transparent AI lifecycles helps organizations harness AI’s benefits while limiting exposure.
- Cross‑functional collaboration improves defense: Cybersecurity teams, IT leaders, data scientists, and executives need to work together to align security controls with AI initiatives.
- Practical guidance from industry experts: The upcoming AHA webinar will offer concrete steps, real‑world examples, and tools for building resilient AI‑enabled environments.
Webinar Overview and Objectives
On May 5 at 1 p.m. ET, John Riggi, the American Hospital Association’s national advisor for cybersecurity and risk, will moderate a live webinar titled “AI‑Driven Threats: Emerging Risks and Best Practices for Responsible Adoption.” The session is designed to help healthcare leaders, IT professionals, and risk managers understand how malicious actors are weaponizing artificial intelligence and what steps organizations can take to stay ahead of these evolving dangers. Attendees will leave with actionable insights on threat detection, risk assessment, and governance strategies that support secure AI integration.
Speaker Profiles and Expertise
John Riggi brings decades of experience in federal law enforcement, cybersecurity policy, and healthcare risk management. As the AHA’s cybersecurity advisor, he regularly advises hospitals and health systems on threat intelligence sharing, incident response, and regulatory compliance. Co‑presenter Scott Trevino, senior vice president of cybersecurity at TRIMEDX, oversees the protection of medical device ecosystems and enterprise IT infrastructure for a broad portfolio of healthcare clients. His background spans threat hunting, zero‑trust architecture, and securing connected medical technologies, making him well‑suited to discuss the intersection of AI and operational technology risks.
Why AI Is a Double‑Edged Sword for Healthcare
Artificial intelligence offers transformative potential for clinical decision support, operational efficiency, and patient engagement. However, the same capabilities that enable predictive analytics and automation can be subverted by adversaries. AI models can be trained to generate convincing phishing emails, create deep‑fake videos that impersonate executives, or automate the discovery of software vulnerabilities at scale. In healthcare, where data sensitivity and system availability are critical, these tactics pose heightened risks to patient safety, privacy, and regulatory compliance.
Emerging AI‑Related Threat Vectors to Watch
The webinar will highlight several emerging threat vectors that security teams should monitor closely. First, model poisoning occurs when attackers inject malicious data into training sets, causing AI systems to make erroneous or harmful predictions. Second, adversarial examples—subtle perturbations to input data—can evade detection by AI‑based security tools. Third, generative AI enables rapid creation of convincing social‑engineering lures, increasing the success rate of credential‑theft campaigns. Finally, AI‑driven automation allows threat actors to scale ransomware deployment, exploit zero‑day flaws, and conduct persistent reconnaissance with minimal human oversight.
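To make the model‑poisoning vector concrete, consider a toy sketch (an illustration, not an example from the webinar): a one‑feature threshold classifier places its decision boundary at the midpoint of the two class means, so an attacker who injects a few mislabeled points into the training set can drag that boundary toward whatever the attacker wants misclassified.

```python
# Toy illustration of training-data poisoning (hypothetical example,
# not drawn from the webinar). A one-feature threshold classifier puts
# its decision boundary at the midpoint of the per-class means, so a
# handful of mislabeled training points shifts the boundary.

def fit_threshold(samples):
    """samples: list of (value, label) pairs, label 0 = benign, 1 = malicious."""
    benign = [v for v, y in samples if y == 0]
    malicious = [v for v, y in samples if y == 1]
    mean_benign = sum(benign) / len(benign)
    mean_malicious = sum(malicious) / len(malicious)
    # Decision boundary: midpoint between the two class means.
    return (mean_benign + mean_malicious) / 2

clean = [(1.0, 0), (2.0, 0), (8.0, 1), (9.0, 1)]
threshold = fit_threshold(clean)  # midpoint of 1.5 and 8.5 -> 5.0

# Attacker poisons the training set: high-valued (malicious-looking)
# points deliberately mislabeled as benign.
poisoned = clean + [(9.0, 0), (10.0, 0)]
poisoned_threshold = fit_threshold(poisoned)  # benign mean rises -> 7.0

# A sample at 6.0 is flagged as malicious by the clean model but
# slips under the shifted boundary of the poisoned one.
print(threshold, poisoned_threshold)
```

The toy numbers are arbitrary; the point is that the defense is data provenance and validation, not a cleverer model.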
Best Practices for Responsible AI Adoption
To counter these threats, the presenters will outline a framework for responsible AI adoption grounded in three pillars: governance, validation, and monitoring. Governance involves establishing clear policies, roles, and accountability for AI development, deployment, and retirement, often aligned with existing frameworks such as NIST AI RMF or ISO/IEC 42001. Validation stresses rigorous testing—including bias assessments, robustness checks, and adversarial simulations—before models move into production. Continuous monitoring calls for real‑time telemetry, anomaly detection, and feedback loops that trigger retraining or model rollback when performance degrades or suspicious behavior is detected.
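The monitoring pillar above can be sketched as a simple drift check: track a rolling window of prediction outcomes and signal rollback or retraining when accuracy degrades past a tolerance below the validation baseline. This is a minimal stdlib‑only sketch under assumed names and thresholds, not a prescribed implementation.

```python
# Minimal sketch of the "continuous monitoring" pillar: watch a rolling
# window of model accuracy and suggest rollback/retraining when it falls
# too far below the validation-time baseline. Class name, window size,
# and tolerance are illustrative assumptions.
from collections import deque

class ModelMonitor:
    def __init__(self, baseline_accuracy, window=50, tolerance=0.05):
        self.baseline = baseline_accuracy
        self.window = deque(maxlen=window)   # most recent outcomes only
        self.tolerance = tolerance

    def record(self, correct: bool) -> str:
        """Record one prediction outcome; return the suggested action."""
        self.window.append(1 if correct else 0)
        if len(self.window) < self.window.maxlen:
            return "ok"  # not enough telemetry yet to judge drift
        accuracy = sum(self.window) / len(self.window)
        if accuracy < self.baseline - self.tolerance:
            return "rollback"  # degradation past tolerance: roll back or retrain
        return "ok"

monitor = ModelMonitor(baseline_accuracy=0.95, window=20, tolerance=0.05)
```

In practice the trigger would feed an alerting pipeline rather than return a string, but the feedback loop (telemetry, threshold, rollback) is the same shape the presenters describe.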
Integrating AI Security into Existing Cybersecurity Programs
Riggi and Trevino will emphasize that AI security should not be treated as an isolated initiative but woven into broader cybersecurity programs. This integration includes updating risk assessments to account for AI assets, extending incident response playbooks to cover AI‑specific scenarios, and ensuring that security awareness training addresses AI‑generated threats such as deep‑fake phishing. Additionally, organizations should leverage threat intelligence feeds that focus on AI misuse and participate in information‑sharing forums like the Health‑ISAC to stay informed about emerging tactics.
Practical Tools and Resources for Immediate Action
The webinar will showcase a toolkit of resources that attendees can implement right away. These include checklist templates for AI procurement, sample contract clauses that mandate security and transparency requirements from vendors, and open‑source tools for model integrity verification (e.g., TensorFlow Privacy, Adversarial Robustness Toolbox). Trevino will also share TRIMEDX’s internal playbook for securing AI‑enabled medical device platforms, illustrating how segmentation, least‑privilege access, and continuous firmware validation can reduce attack surfaces.
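One lightweight form of the model and firmware integrity checks mentioned above is hash pinning: record a known‑good digest when an artifact is released, then verify the digest before every load so a tampered model or firmware image is rejected. The sketch below is a hypothetical stdlib illustration; the byte strings stand in for real artifact files.

```python
# Hypothetical sketch of artifact integrity verification via hash
# pinning: record a known-good SHA-256 digest at release time and
# verify it before loading. The byte strings are placeholders for
# real model-weight or firmware files.
import hashlib

def sha256_digest(data: bytes) -> str:
    """Hex SHA-256 digest of an artifact's raw bytes."""
    return hashlib.sha256(data).hexdigest()

def verify_artifact(data: bytes, pinned_digest: str) -> bool:
    """True only if the artifact still matches its release-time digest."""
    return sha256_digest(data) == pinned_digest

release_blob = b"model-weights-v1"        # stand-in for the released artifact
pinned = sha256_digest(release_blob)      # recorded in a signed manifest at release

assert verify_artifact(release_blob, pinned)             # untampered: load proceeds
assert not verify_artifact(b"model-weights-v1x", pinned)  # tampered: load blocked
```

A production version would keep pinned digests in a signed manifest and verify them on device boot as well as at deploy time, which is the continuous‑validation pattern the TRIMEDX playbook discussion points toward.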
Case Studies: Lessons Learned from Real‑World Incidents
To ground the discussion in reality, the presenters will review anonymized case studies where AI‑related failures led to security breaches or operational disruptions. One example involves a hospital’s predictive analytics model that was compromised via poisoned training data, resulting in incorrect risk scores that delayed critical care interventions. Another case details a deep‑fake video used to impersonate a CFO, prompting an unauthorized wire transfer. By dissecting these incidents, Riggi and Trevino will highlight early warning signs, missed opportunities for detection, and the remedial steps that ultimately restored trust and compliance.
Looking Ahead: The Future of AI in Healthcare Security
The session will conclude with a forward‑looking perspective on how AI will continue to shape both threat landscapes and defensive capabilities. Advances in explainable AI, federated learning, and secure multi‑party computation promise to improve model transparency while protecting data privacy. Simultaneously, attackers are likely to harness quantum‑enhanced algorithms and generative adversarial networks to craft more sophisticated attacks. The speakers will urge participants to treat AI security as an ongoing, iterative process—regularly revisiting policies, testing defenses, and fostering a culture of vigilance as technology evolves.
Call to Action: Register and Prepare
Healthcare leaders interested in strengthening their organization’s resilience against AI‑driven threats are encouraged to register for the May 5 webinar now. Early registration ensures access to the live Q&A session, downloadable slides, and a post‑event recording for later review. By attending, participants will gain the knowledge and tools needed to harness AI’s benefits responsibly while safeguarding patients, data, and critical infrastructure against the next generation of cyber threats.

