Key Takeaways:
- Global cybersecurity agencies have issued unified guidance on applying artificial intelligence (AI) within critical infrastructure, marking a shift from theoretical debate to practical guardrails for safety and reliability.
- The guidance stresses the distinction between safety and security, and urges operators to adopt push-based or brokered architectures with strong boundaries and human-in-the-loop oversight.
- AI should be used as an adviser rather than a controller, and operators should demand transparency from vendors embedding AI into industrial systems.
- The guidance recommends that critical infrastructure owners develop strong procurement strategies that account for AI, and reaffirms that humans remain responsible for functional safety.
- Operators should review where AI already touches their OT landscape, establish or refresh validation procedures, and begin early conversations with vendors about transparency requirements.
Introduction to AI in Critical Infrastructure
The release of joint guidance on Principles for the Secure Integration of Artificial Intelligence in Operational Technology marks a significant milestone for critical infrastructure security. Major global cybersecurity agencies, including CISA, the FBI, the NSA, and the Australian Signals Directorate’s Australian Cyber Security Centre, have aligned on a shared direction for the use of AI in operational environments. This document moves the conversation from theory to practice, acknowledging AI’s promise while also highlighting the significant risks that operators must actively manage to ensure reliability.
Distinguishing Between Safety and Security
A central contribution of this guidance is its clear distinction between safety and security in the AI era. Protecting the integrity and availability of systems is not the same as preventing physical harm, and AI complicates this relationship in ways that many CISOs are now expected to navigate. The guidance recognizes that AI’s non-deterministic nature can lead to unpredictable behaviors or hallucinations, which is why it draws an explicit line: "AI such as LLMs almost certainly should not be used to make safety decisions for OT environments." This is not a rejection of innovation, but rather a pragmatic call to preserve the safety foundations that operational technology depends on.
Architecture Recommendations
The architecture recommendations extend this safety-first mindset, clearly mapping where AI belongs within the OT hierarchy. Predictive machine learning can strengthen operations at levels 0 through 3, for example by forecasting pump failures from vibration patterns or flagging anomalies in turbine exhaust temperatures. Large language models, by contrast, are better suited to business functions at levels 4 and 5, where they assist with documentation, work order generation, or regulatory reporting. The guidance also cautions against introducing new attack vectors, recommending "push-based or brokered architectures that move required features or summaries out of OT without granting persistent inbound access."
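To make the push-based pattern concrete, here is a minimal Python sketch of telemetry leaving the control network on OT's own initiative: the OT host summarizes raw readings locally, then initiates an outbound HTTPS POST to a broker endpoint. The endpoint URL, sensor name, and sample values are illustrative assumptions, not part of the guidance.

```python
# Minimal sketch of a push-based OT-to-enterprise export.
# The endpoint, sensor name, and readings are hypothetical placeholders.
import json
import time
import urllib.request

SUMMARY_ENDPOINT = "https://broker.example.internal/ot-summaries"  # hypothetical URL

def summarize_vibration(readings: list[float]) -> dict:
    """Reduce raw sensor readings to a coarse summary before it leaves OT."""
    return {
        "sensor": "pump-7-vibration",        # illustrative sensor name
        "window_start": time.time() - 60,    # one-minute summary window
        "mean": sum(readings) / len(readings),
        "peak": max(readings),
    }

def push_summary(summary: dict) -> None:
    """Outbound-only HTTPS POST: the OT side initiates the connection,
    and nothing on the OT side listens for inbound requests."""
    body = json.dumps(summary).encode("utf-8")
    req = urllib.request.Request(
        SUMMARY_ENDPOINT,
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with urllib.request.urlopen(req, timeout=5) as resp:
        resp.read()  # drain the response; no commands are accepted back

if __name__ == "__main__":
    readings = [0.41, 0.39, 0.44, 0.85, 0.42]  # illustrative sample values
    push_summary(summarize_vibration(readings))
```

The key property is directional: data moves out on OT's initiative, and the return channel carries no commands, so the broker never needs persistent inbound access into the control network.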
Human Factors and Procurement Strategies
The guidance looks beyond systems to the humans who operate them, warning that "heavy reliance on AI may cause OT personnel to lose manual skills needed for managing systems during AI failures or system outages." For critical infrastructure this is not theoretical: many power plant and water utility operators are already losing skilled workers to retirement. The guidance encourages organizations to train operators not only on how to use AI, but on how to challenge it. It also recommends that critical infrastructure owners develop strong procurement strategies that account for AI, demanding transparency from OT vendors about how AI is embedded in their products and how it is secured.
Accountability and Next Steps
The document reaffirms that accountability sits with people, reminding us that "ultimately, humans are responsible for functional safety." The recommended "human in the loop" model ensures that AI informs decisions but does not replace human judgment, which mitigates challenges such as "model drift" and avoids blindly executing "black box" outputs in environments where the stakes include real human safety. The path forward is both challenging and hopeful: the shared global guidance gives operators a clearer map and reinforces that resilience grows when humans and machines work in partnership. A practical next step is to review where AI already touches your OT landscape, establish or refresh validation procedures, and begin early conversations with vendors about transparency requirements.
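As one way to picture the "human in the loop" model, the sketch below keeps an AI recommendation purely advisory: a hypothetical recommend_action call produces a suggestion, and nothing is dispatched until a named operator explicitly approves it. All function names, the rationale text, and the confidence figure are illustrative assumptions rather than anything prescribed by the guidance.

```python
# Minimal sketch of a human-in-the-loop gate: the model only advises,
# and an operator must explicitly approve before anything executes.
# All names (recommend_action, dispatch_work_order) are illustrative.
from dataclasses import dataclass

@dataclass
class Recommendation:
    action: str
    rationale: str
    confidence: float  # model-reported confidence, not a safety guarantee

def recommend_action(anomaly: str) -> Recommendation:
    # Placeholder for a model call; in practice this would query an
    # LLM or predictive model running outside the safety path.
    return Recommendation(
        action=f"Schedule inspection for {anomaly}",
        rationale="Vibration trend exceeds 30-day baseline",  # illustrative
        confidence=0.72,
    )

def dispatch_work_order(action: str) -> None:
    print(f"Work order created: {action}")

def human_in_the_loop(anomaly: str, operator: str) -> None:
    rec = recommend_action(anomaly)
    print(f"AI suggestion for {operator}: {rec.action}")
    print(f"Rationale: {rec.rationale} (confidence {rec.confidence:.0%})")
    # The decision rests with the operator; the model output is advisory.
    if input("Approve? [y/N] ").strip().lower() == "y":
        dispatch_work_order(rec.action)
    else:
        print("Rejected; no action taken.")

if __name__ == "__main__":
    human_in_the_loop("pump-7 vibration anomaly", operator="shift lead")
```

Keeping the approval step explicit, and in a real deployment logging who approved what and when, preserves the accountability the guidance insists on: the model proposes, but a person decides.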