U.S. Cyber Agency Lacks Access to Advanced AI Hacking Tools

Key Takeaways

  • Anthropic and OpenAI have released AI models that can detect software vulnerabilities at unprecedented speed and scale.
  • Cybersecurity and Infrastructure Security Agency (CISA) staff report they are barred from accessing these models, hindering their ability to respond to attacks on critical infrastructure.
  • Anthropic’s models are effectively banned across federal agencies after being labeled a supply‑chain risk following a dispute with the Department of War.
  • OpenAI’s comparable tools are available to government customers through its Trusted Cyber Access program, yet CISA has not been granted access.
  • Workforce cuts, leadership vacancies, and bureaucratic caution at CISA are slowing the adoption of advanced AI tools, while adversaries already exploit the same models for digital espionage.

Recent AI Advances Promise Faster Vulnerability Discovery
In the past few weeks, both Anthropic and OpenAI unveiled new artificial‑intelligence systems engineered to locate software bugs with remarkable speed and breadth. Anthropic’s Mythos model, for example, can autonomously scan all major browsers and operating systems, uncovering flaws that would take human analysts days or weeks to find. OpenAI countered with the release of GPT 5.5 and highlighted its Trusted Cyber Access initiative, which promises vetted cybersecurity teams the ability to harness these powerful models for defensive purposes. The announcements signal a shift toward AI‑augmented threat hunting, but the benefits have yet to reach the nation’s primary cyber defense agency.

CISA Employees Voice Frustration Over Access Restrictions
Two current CISA employees told Forbes that, despite the clear utility of these tools, they are prohibited from using them in their daily work. One staffer bluntly said, “We aren’t even allowed to say the name Anthropic right now.” The sentiment reflects a broader frustration: analysts who sit on the front lines of defending critical infrastructure feel handcuffed by policies that keep cutting‑edge AI out of their reach, even as they grapple with mounting workloads and increasingly sophisticated attacks.

Anthropic’s Barrier: Supply‑Chain Risk Label
The restriction on Anthropic stems from a specific incident that led the federal government to label the company a supply‑chain risk. Following a dispute with the Department of War over the potential use of Anthropic’s tools for surveillance, the agency deemed the firm unsuitable for broad federal deployment. Consequently, Anthropic’s models—including the newly launched Mythos—are effectively barred across all government departments, a move intended to mitigate perceived misuse but which also blocks legitimate defensive applications.

Mythos’ Potential Value for Defensive Cyber Operations
Despite the ban, insiders emphasize that Mythos would be a game‑changer for CISA’s mission. The model’s capacity to autonomously identify vulnerabilities across operating systems and browsers could accelerate the agency’s ability to patch critical software before adversaries exploit it. One CISA worker lamented, “I wish we had access because it would really help my program,” underscoring the gap between the tool’s promise and the reality of restricted access.

OpenAI’s Tools Remain Out of Reach for CISA
While OpenAI markets its AI models as available to government agencies, CISA staff confirm they have not been granted access to any of the company’s recent offerings, including GPT 5.5 or the Codex coding assistant that proposes automatic fixes for identified security flaws. OpenAI’s Trusted Cyber Access program, launched shortly after the Mythos announcement, invites vetted cybersecurity teams to use its advanced models for finding and fixing software flaws, yet the agency appears to have been left out of the vetting process or subsequent approvals.

OpenAI’s Outreach Efforts Highlight a Government‑Facing Strategy
OpenAI is actively courting federal customers. Earlier this month the company hosted a workshop in Washington, D.C., showcasing the cyber capabilities of its AI models and encouraging agencies to join its Trusted Cyber Access initiative. Forbes could not verify whether CISA representatives attended, but the two CISA sources interviewed said neither they nor their colleagues had been invited or granted access to any of OpenAI’s models discussed at the event. The outreach suggests a deliberate push to embed OpenAI’s technology within federal cybersecurity workflows, even as the agency itself remains excluded.

Leadership Gaps and Workforce Reductions Impede AI Adoption
The Trump administration’s sweeping cuts to CISA’s workforce last year have left analysts stretched thin, increasing the urgency for automation and AI assistance. Compounding the problem, the agency has operated without a permanent director; the recent withdrawal of Trump‑appointed nominee Sean Plankey—amid controversy over a Coast Guard ship‑building contract—has further stalled decision‑making. One CISA worker noted that the lack of clear leadership “is slowing down our adoption of AI tools significantly because no one wants to be the one who approves anything.” This managerial vacuum creates a bureaucratic bottleneck that hinders the procurement and deployment of emerging technologies.

Adversaries Are Already Leveraging the Same AI for Espionage
While CISA struggles to gain access, hostile actors are not deterred. Anthropic disclosed that Chinese hackers had employed its Claude model to generate cyberattacks against as many as 30 targets, including government entities. The ease with which these models can be repurposed for offensive operations underscores the strategic importance of ensuring that defensive agencies possess comparable capabilities. The asymmetry—offensive groups exploiting AI while the nation’s cyber defenses are barred—creates a dangerous gap that could be exploited in future campaigns.

Conclusion: Bridging the Divide Between AI Innovation and Federal Defense
The current situation illustrates a paradox: cutting‑edge AI models designed to uncover software vulnerabilities are readily available to private sector partners and even some foreign adversaries, yet the United States’ principal cyber defense agency remains largely shut out. Addressing this mismatch will likely require a combination of clearer policy guidance, streamlined approval processes, and stable leadership at CISA. Until those hurdles are overcome, the agency will continue to rely on older, manual methods while the threat landscape evolves at AI‑driven speed. Closing the access gap is not merely a matter of convenience; it is essential to maintaining the nation’s resilience against increasingly sophisticated cyber threats.