Key Takeaways
- AI introduces two distinct cybersecurity challenges: securing AI systems themselves and defending against AI‑enabled attacks on conventional, non‑AI systems and infrastructure.
- U.S. policy to date has focused mainly on the former, leaving a growing gap in preparedness for offensive AI use.
- Frontier models such as Anthropic's Claude Mythos Preview can already discover and exploit zero‑day flaws, signaling a rapid escalation in threat capability.
- Defensive AI initiatives (e.g., Project Glasswing) show promise, but broader adoption across critical‑infrastructure sectors requires federal coordination, shared threat intelligence, and sustained funding.
- A one‑time $500 million matching credit program for AI‑driven vulnerability detection in critical systems would provide a cost‑effective lever to offset the tens of billions lost annually to cyberattacks.
- Institutionalizing joint AI security‑testing environments, threat‑intelligence pipelines, and continuously updated standards (led by NIST and CISA) is essential to operationalize the White House National AI Policy Framework and strengthen national resilience.
Overview of AI‑Related Cybersecurity Risks
Artificial intelligence creates two parallel streams of cybersecurity risk. First, there are risks inherent to the AI systems themselves—vulnerabilities in model training, data poisoning, adversarial examples, and model theft that could compromise the integrity or confidentiality of AI‑driven decisions. Second, AI can be weaponized as a tool to discover and exploit weaknesses in non‑AI systems, ranging from legacy software to modern cloud services. While the U.S. government has invested heavily in securing the first category, comparatively little attention has been paid to preparing defenses against the second, even as the capability of offensive AI advances at an unprecedented pace.
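To make the first risk category concrete, the sketch below simulates a simple label‑flipping data‑poisoning attack against a toy classifier and compares accuracy with clean and poisoned training labels. The dataset, model choice, and 25 percent poisoning rate are purely illustrative assumptions, not a characterization of any deployed system.

```python
# Minimal sketch (illustrative only): label-flipping data poisoning
# against a simple classifier trained on synthetic data.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Synthetic data standing in for any supervised training pipeline.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

def train_and_score(labels):
    """Train on the given labels and report held-out accuracy."""
    model = LogisticRegression(max_iter=1000).fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

baseline = train_and_score(y_train)

# Flip 25% of the training labels (an assumed rate, chosen only to make
# the effect visible), simulating a poisoned data supply chain.
y_poisoned = y_train.copy()
idx = rng.choice(len(y_poisoned), size=int(0.25 * len(y_poisoned)), replace=False)
y_poisoned[idx] = 1 - y_poisoned[idx]

poisoned = train_and_score(y_poisoned)
print(f"clean accuracy:    {baseline:.3f}")
print(f"poisoned accuracy: {poisoned:.3f}")
```

Even this crude attack illustrates why the training pipeline, not just the deployed model, has to be treated as part of the attack surface.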
Federal Efforts to Secure AI Systems
Over the past two administrations, securing AI has been a policy priority. Under President Biden, the National Institute of Standards and Technology (NIST) launched the U.S. AI Safety Institute to foster safe and secure AI development. Later, under President Trump, NIST rebranded the entity as the Center for AI Standards and Innovation (CAISI) and shifted its focus toward securing commercial AI implementations. Complementing these moves, the Cybersecurity and Infrastructure Security Agency (CISA) and the National Security Agency (NSA) released a joint report late last year outlining principles for securely integrating AI into government operations. The guidance emphasized secure‑by‑design development, continuous monitoring, anomaly detection, human‑in‑the‑loop oversight, and rigorous testing and red‑teaming of AI models. Together, these initiatives aim to harden AI systems themselves against compromise and misuse.
The Growing Threat of Offensive AI
Despite progress on securing AI, the federal response to offensive AI use remains limited. Policymakers have acknowledged the issue—for instance, the Bipartisan Senate AI Working Group urged legislation to bolster AI‑enhanced U.S. cyber capabilities, and the Bipartisan House Task Force on AI stressed that security teams must employ AI defensively to improve resiliency. However, concrete actions have lagged. The situation shifted earlier this month when Anthropic unveiled Claude Mythos Preview, a frontier AI model capable of identifying and exploiting zero‑day vulnerabilities across every major operating system and web browser. In a striking demonstration, the model uncovered a 27‑year‑old flaw in OpenBSD that allowed an attacker to crash remote machines with nothing more than a network connection. This result underscores that advanced AI will increasingly automate the discovery of exploitable weaknesses, dramatically lowering the barrier to sophisticated attacks.
Industry Response and the Need for Federal Leadership
Recognizing the imminent danger, leading cybersecurity researchers issued a joint warning that malicious actors will increasingly harness AI to find and exploit vulnerabilities, threatening critical infrastructure, data integrity, and service continuity. In response, Anthropic announced Project Glasswing, a cross‑industry initiative that leverages frontier AI to strengthen cyber defenses for operators of the nation’s most vital systems. While promising, such efforts remain fragmented. To ensure that AI‑enabled defenses outpace emerging threats, the federal government must step in, scale these capabilities, and mandate adoption across critical‑infrastructure sectors—including energy, water, transportation, and healthcare—where a successful breach could cascade into national‑scale disruption.
Operationalizing the White House National AI Policy Framework
The March 2026 White House National AI Policy Framework called on Congress to equip federal agencies handling national security with the technical capacity to understand and mitigate risks from frontier AI models, including collaboration with leading AI labs. Turning this guidance into practice requires three coordinated steps. First, establish joint AI security‑testing environments where agencies and frontier labs evaluate models against realistic cyber‑attack scenarios, enabling early detection of model‑driven exploit techniques. Second, create shared AI threat‑intelligence pipelines that deliver timely insights into emerging AI‑enabled attack vectors to all stakeholders. Third, maintain continuously updated AI security frameworks authored by authoritative bodies such as NIST and CISA, ensuring consistent baselines and best practices across government and industry. These mechanisms would institutionalize a proactive defense posture rather than reacting after breaches occur.
Funding the AI‑Driven Defense Initiative
Effective implementation will likely demand additional resources. As Congress debates funding for the Department of Homeland Security, it should allocate $500 million for a one‑time matching credit program to finance the deployment of frontier AI models for vulnerability detection and mitigation in critical or widely used systems. Considering that cyberattacks cost the United States tens of billions of dollars annually, such an investment is a preventive measure with a high return on investment: preventing even a few percent of those annual losses would more than cover the program's one‑time cost. The matching‑credit structure encourages private‑sector cost‑sharing, accelerates adoption, and leverages existing expertise while ensuring federal oversight.
Broader Benefits of AI‑Enhanced Cyber Defense
Beyond threat detection, AI offers substantial defensive opportunities: early‑warning systems for anomalous behavior in industrial‑control networks, automated identification of ransomware precursors, and predictive analysis of cascading failure risks across interconnected infrastructures. These capabilities are especially valuable for state and local governments, which often lack the budget and expertise to counter sophisticated AI‑enabled attacks. By weaving AI into the cybersecurity fabric of critical infrastructure, the nation can reduce the current asymmetry that favors attackers, improve resilience, and ensure continuity of essential services even under sustained cyber pressure.
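As a rough sketch of what such an early‑warning capability might look like, the example below trains an unsupervised anomaly detector on simulated baseline telemetry and flags readings that depart from it. The sensor values, feature layout, and contamination rate are illustrative assumptions rather than a reference architecture.

```python
# Minimal sketch (illustrative only): flagging anomalous readings in
# industrial-control telemetry with an unsupervised anomaly detector.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)

# Simulated baseline telemetry: e.g. pump pressure, flow rate, temperature.
# Real deployments would stream these from plant historians or OT sensors.
baseline = rng.normal(loc=[100.0, 40.0, 65.0], scale=[2.0, 1.5, 0.8], size=(5000, 3))

# Fit the detector on data assumed to reflect normal operations.
detector = IsolationForest(contamination=0.01, random_state=0).fit(baseline)

# New observations: mostly normal, plus one reading consistent with tampering.
new_readings = np.vstack([
    rng.normal(loc=[100.0, 40.0, 65.0], scale=[2.0, 1.5, 0.8], size=(10, 3)),
    [[100.0, 40.0, 95.0]],  # abnormal temperature spike
])

flags = detector.predict(new_readings)  # -1 = anomaly, 1 = normal
for reading, flag in zip(new_readings, flags):
    if flag == -1:
        print("ALERT: anomalous telemetry", np.round(reading, 1))
```

In practice, operators would feed live sensor data into a detector of this kind and tune the alert threshold to their tolerance for false positives, which is exactly the sort of capability a resource‑constrained state or local utility is unlikely to build on its own.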
Conclusion: A Coordinated National Response
The rise of AI‑enabled cyber threats necessitates a unified national strategy that balances securing AI systems themselves with defending against AI‑powered assaults on legacy assets. While federal initiatives have made strides in the former domain, the latter remains under‑addressed despite clear evidence of rapidly advancing offensive capabilities. By fostering joint testing environments, sharing threat intelligence, updating security standards, and providing targeted funding, the United States can harness AI as a force multiplier for defenders. When implemented effectively, AI will enable federal, state, and local entities to stay ahead of evolving threats, strengthening national cybersecurity in an era where attacks are increasingly accelerated by artificial intelligence.

