Archives Information Security Office Tackles AI and CUI Security Challenges


Key Takeaways

  • Controlled Unclassified Information (CUI) encompasses over 125 categories of sensitive but unclassified data that require consistent protection across federal agencies.
  • Historically, agencies have applied CUI sharing and protection rules inconsistently, creating compliance uncertainty and unnecessary constraints on information flow.
  • The rise of artificial intelligence, especially large language models, has intensified concerns about the “mosaic effect,” where seemingly innocuous unclassified data can be aggregated to reveal valuable intelligence to adversaries.
  • Michael Thomas, Director of the Information Security Oversight Office (ISOO), acknowledges both the risks and the transformative potential of AI for improving CUI tagging, identification, and sharing precision.
  • ISOO issued new guidance in March 2026 on the responsible use of AI with classified and CUI data, aligning with the Trump administration’s AI push while reinforcing existing legal and policy requirements.
  • Successful AI adoption will demand thoughtful upfront design—mapping AI outputs to specific statutes, regulations, and federal policies—to avoid over‑ or under‑protection of information.
  • When properly implemented, AI can reduce friction, increase fidelity to governing rules, and help agencies overcome longstanding CUI management challenges.

Overview of CUI Management Challenges
Federal agencies have long grappled with the administration of Controlled Unclassified Information (CUI), a category established in 2010 to provide a uniform framework for protecting sensitive but unclassified data. The National Archives and Records Administration (NARA) maintains more than 125 distinct CUI categories, ranging from technical weapon‑system details and law‑enforcement sources to agricultural statistics and patent applications. Despite this structured approach, agencies have applied the sharing and protection rules inconsistently. This patchwork implementation has generated confusion about compliance costs, created unnecessary barriers to legitimate information exchange, and, in some instances, permitted indiscriminate disclosure of data that should remain safeguarded.

The Role of ISOO in CUI Oversight
The Information Security Oversight Office (ISOO), housed within NARA, serves as the federal government’s lead policy shop for CUI and classified national security information. ISOO is responsible for issuing guidance, interpreting statutes and regulations, and helping agencies align their practices with overarching federal policy. Director Michael Thomas has been vocal about the shortcomings he observes: many agencies fail to explicitly connect their information‑control decisions to the underlying laws, regulations, or wide‑reaching policies that justify those controls. Instead, they rely on vague intuitions about what “cannot be shared,” which can lead to either over‑restriction or insufficient protection.

AI’s Dual Impact on CUI Handling
Artificial intelligence, particularly large language models, has reshaped how the public perceives access to information, and federal agencies are struggling to keep pace. On one hand, AI amplifies existing risks: the ability to rapidly ingest, analyze, and synthesize vast volumes of unclassified data heightens the mosaic effect, wherein seemingly harmless pieces of information can be combined by adversaries to reveal actionable intelligence. Thomas warned that “security through obscurity” is no longer viable; hostile actors actively target CUI precisely because AI can exploit it at scale.

On the other hand, Thomas sees AI as a potent tool to remediate longstanding CUI deficiencies. By automating the identification and tagging of information according to specific legal and regulatory criteria, AI could reduce the friction and inconsistency that currently plague CUI management. When properly tuned, machine‑learning models can achieve greater precision and fidelity to the governing statutes, ensuring that protections are applied exactly where the law requires them and nowhere else.

New ISOO Guidance on Responsible AI Use
Recognizing both the promise and the peril, ISOO released a directive in late March 2026 titled “Responsible Use of Classified National Security Information and Controlled Unclassified Information with Artificial Intelligence.” The guidance reflects the Trump administration’s broader initiative to accelerate AI adoption across government while insisting that any AI system must comply with existing policies governing classified information and CUI. Thomas explained that the document aims to answer agencies’ pressing questions about where to find authoritative instructions on AI use, how to manage their data responsibly, and what safeguards are necessary to prevent misuse.

The guidance outlines several best practices: conducting risk assessments before deploying AI tools, maintaining audit trails of AI‑driven decisions, ensuring that models are trained on data that accurately reflects applicable CUI categories, and implementing continual monitoring to detect drift or unintended disclosures. By anchoring AI deployment to these principles, ISOO hopes to bridge the gap between innovation and compliance.
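One of the practices named above, maintaining audit trails of AI‑driven decisions, can be illustrated with a minimal record structure. This is only a sketch: the field names, and the idea of citing a governing authority in each record, are assumptions for illustration, not fields prescribed by the ISOO guidance.

```python
import datetime
import json


def audit_record(doc_id: str, decision: str, model_version: str, rationale: str) -> str:
    """Build one append-only audit entry for an AI tagging decision.

    Recording the model version alongside each decision is what makes
    drift detectable later: the same document class tagged differently
    across versions is a signal worth investigating.
    """
    return json.dumps({
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "document_id": doc_id,
        "decision": decision,            # category applied, or "no-control"
        "model_version": model_version,  # supports cross-version drift checks
        "rationale": rationale,          # governing authority the system cited
    })
```

A real system would append these records to tamper-evident storage; the point here is only that each automated decision carries enough context to be reviewed after the fact.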

Use Cases and Front‑End Thinking
Thomas highlighted a variety of potential applications where AI could add value: automatically assigning incoming documents to the correct CUI category, flagging information that requires special handling before it is shared across agency boundaries, and generating real‑time compliance reports for oversight officials. He stressed, however, that realizing these benefits demands substantial front‑end effort. Agencies must first articulate precisely which statutes, regulations, or policy documents govern each type of information, then encode those requirements into the AI’s decision‑making logic. Without this groundwork, AI systems risk either over‑marking data, impeding legitimate collaboration, or under‑marking it, exposing sensitive material to unauthorized disclosure.
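The front‑end work described above can be sketched in miniature: every tag the system applies is tied to the specific authority that justifies it, so both over‑ and under‑protection become detectable. All category names, keyword triggers, and authority citations below are illustrative placeholders (a production system would use a trained model, not keyword matching), not any official mapping.

```python
from dataclasses import dataclass


@dataclass
class CuiRule:
    category: str        # illustrative CUI category name
    authority: str       # the statute/regulation justifying the control
    keywords: tuple      # crude trigger terms standing in for a real model


# Placeholder mapping of control decisions back to governing authorities.
RULES = [
    CuiRule("Law Enforcement", "hypothetical statute A",
            ("informant", "investigation")),
    CuiRule("Export Control", "hypothetical regulation B",
            ("itar", "munitions")),
]


def tag_document(text: str) -> list[dict]:
    """Return every (category, authority) pair the document triggers.

    An empty result means no rule fired; such documents should be routed
    to human review rather than silently shared or silently restricted.
    """
    text_lower = text.lower()
    return [
        {"category": r.category, "authority": r.authority}
        for r in RULES
        if any(k in text_lower for k in r.keywords)
    ]
```

Because each tag carries its authority, an oversight official can ask not just "what was restricted?" but "under what law?", which is exactly the linkage Thomas says agencies currently fail to make.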

When the initial design work is done correctly, the payoff can be significant. Automated tagging reduces the manual labor that currently consumes analyst time, improves consistency across disparate agency systems, and enables faster, more reliable information sharing for missions that depend on timely data. In Thomas’s view, AI thus represents a “double‑edged sword”: the same capabilities that create new risks also offer a pathway to solve the enduring challenges that have hampered the CUI program for more than a decade.

Conclusion: Balancing Innovation with Protection
The federal government’s experience with CUI illustrates a recurring theme: as technology evolves, so must the frameworks that protect sensitive information. AI introduces novel threats through its capacity to aggregate and analyze data at unprecedented speed, yet it also offers unprecedented tools for enforcing the very laws and policies designed to safeguard that information. ISOO’s latest guidance seeks to harness the latter while mitigating the former. Success will hinge on agencies’ willingness to invest the necessary upfront analytic work—mapping AI outputs to concrete legal mandates—so that the technology enhances, rather than undermines, the integrity of the CUI program. If achieved, the marriage of AI and disciplined CUI management could yield a more secure, efficient, and transparent federal information ecosystem.
