Key Takeaways
- FedRAMP’s request for basic data‑flow diagrams from Microsoft GCC High exposed a deeper architectural shortcoming, not merely a paperwork issue.
- The prolonged inability to verify encryption pathways highlights how legacy, multi‑tenant designs impede transparent security validation.
- Third‑party assessment organizations (3PAOs) operate under an inherent conflict of interest because vendors select and pay them, which can soften scrutiny.
- FedRAMP authorization represents a point‑in‑time snapshot, not continuous assurance; agencies must therefore verify ongoing protection themselves.
- Purpose‑built, single‑tenant cloud solutions naturally generate the evidence reviewers need, making verification straightforward and reliable.
The Core Issue Revealed by the ProPublica Investigation
The ProPublica report uncovered that FedRAMP spent five years trying to obtain a simple data‑flow diagram from Microsoft’s Government Community Cloud (GCC) High service. The request asked only for a technical illustration showing where government data is encrypted as it moves through the system—a diagram that Amazon Web Services and Google Cloud routinely supplied. Microsoft’s refusal to provide the diagram, even after narrowing the scope to a single service (Exchange Online), forced reviewers to rely on a philosophical white paper that lacked concrete proof of operational encryption. This standoff highlighted a fundamental gap between compliance checklist completion and genuine security verification.
Why Documentation Requests Expose Architectural Weaknesses
In cloud environments, the ability to produce accurate data‑flow diagrams is not merely a clerical task; it reflects the underlying architecture’s transparency. When a provider’s system is built on decades of legacy code, data often traverses many inherited components—each a potential “hop” where encryption may lapse. One FedRAMP reviewer likened this to traveling from Washington, D.C., to New York by bus, ferry, and airplane instead of taking a direct train: every transfer point introduces uncertainty about whether the data remains protected. Consequently, the failure to supply a diagram signaled an architecture that obscures encryption boundaries, making it impossible for auditors to confirm that protection is consistently applied.
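The transfer‑point analogy can be made concrete. The following minimal sketch (in Python, with entirely hypothetical component names) models a data path as a list of hops, each marked by whether encryption in transit was actually evidenced rather than merely asserted; a single hop with unknown status is enough to defeat an end‑to‑end claim.

```python
# Minimal sketch (hypothetical components): why one unverified hop
# breaks an end-to-end encryption claim.

from dataclasses import dataclass

@dataclass
class Hop:
    source: str
    destination: str
    encrypted: bool | None  # True/False if evidenced, None if unknown

def end_to_end_verified(path: list[Hop]) -> bool:
    """The claim 'data is encrypted throughout' holds only if every
    hop carries affirmative evidence; one unknown hop defeats it."""
    return all(hop.encrypted is True for hop in path)

# A direct, purpose-built path: one hop, evidence available.
direct = [Hop("client", "gov-tenant-store", True)]

# A legacy, multi-component path with one undocumented transfer point.
legacy = [
    Hop("client", "edge-proxy", True),
    Hop("edge-proxy", "legacy-queue", None),   # no evidence either way
    Hop("legacy-queue", "mailbox-store", True),
]

print(end_to_end_verified(direct))  # True
print(end_to_end_verified(legacy))  # False: cannot be verified
```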
The Thales Data Threat Report Context
The 2026 Thales Data Threat Report found that only about one‑third of organizations possess complete knowledge of where their data resides within their cloud environments. When a cloud provider cannot map its own encryption pathways, customers inherit that blind spot. The GCC High case exemplifies this statistic: despite holding a FedRAMP authorization, the service left agencies without verifiable assurance that their most sensitive information remained encrypted throughout its journey. The disconnect between reported compliance and actual visibility underscores why reliance on authorization alone can be misleading.
Assessor Incentives and the 3PAO Model
Third‑party assessment organizations (Coalfire and Kratos in this instance) were engaged by Microsoft to independently evaluate GCC High. Both assessors privately communicated to FedRAMP that they could not obtain a full picture of the encryption flow, yet their official reports did not reflect these limitations. FedRAMP even placed Kratos on a corrective‑action plan for not pushing back strongly enough. This situation reveals a structural conflict: the vendor selects and pays the assessor, creating a financial incentive to avoid findings that could jeopardize the contract. While not every assessment is compromised, the model’s design tends to favor outcomes that please the paying client unless robust oversight counterbalances the pressure.
Empirical Evidence from the Black Kite Report
The 2026 Black Kite Third‑Party Breach Report analyzed roughly 200,000 monitored organizations and found that more than half exhibited at least one critical vulnerability while maintaining an average cyber grade of “A.” High grades and genuine risk can coexist when assessments rely on documentation that does not verify real‑time controls. The GCC High episode explains precisely how such a mismatch occurs: a provider can satisfy checklist items (e.g., submitting a white paper) without demonstrating that the claimed controls are actually operative across all data pathways.
FedRAMP’s Limited Scope and Agency Responsibility
FedRAMP was never intended to serve as a continuous security guarantee. The program evaluates a point‑in‑time snapshot based on documentation submitted during the assessment window; it does not perform ongoing, real‑time validation of how a provider handles data. The General Services Administration (GSA) has explicitly stated that FedRAMP’s role is “not to determine if a cloud service is secure enough.” Consequently, agencies that rely solely on FedRAMP authorization must recognize that the endorsement reflects compliance with a set of predefined controls at a specific moment, not an assurance of enduring protection.
What Agencies Must Do: Own the Verification
Given FedRAMP’s narrow mandate, the responsibility for verifying that cloud services adequately protect government data falls to chief information security officers (CISOs) and authorizing officials. Organizations cannot delegate this duty to the FedRAMP process or to third‑party assessors. Instead, they must implement their own verification mechanisms—such as requesting detailed data‑flow diagrams, conducting independent penetration tests, or employing continuous‑monitoring tools—to confirm that encryption and other safeguards remain effective throughout the service lifecycle.
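As one illustration of what such a continuous‑monitoring check might look like, the sketch below uses Python’s standard ssl module to independently record the TLS protocol and cipher suite a provider endpoint actually negotiates, rather than trusting what the authorization package asserts. The hostname is a placeholder; a production monitor would run checks like this on a schedule and alert on any regression.

```python
# Continuous-monitoring sketch: observe the TLS parameters an endpoint
# actually negotiates. Hostname below is a placeholder, not a real target.

import socket
import ssl

def check_tls(host: str, port: int = 443) -> dict:
    """Connect and record the negotiated protocol and cipher suite."""
    context = ssl.create_default_context()
    with socket.create_connection((host, port), timeout=10) as sock:
        with context.wrap_socket(sock, server_hostname=host) as tls:
            return {
                "host": host,
                "protocol": tls.version(),  # e.g., 'TLSv1.3'
                "cipher": tls.cipher()[0],  # negotiated cipher suite name
            }

if __name__ == "__main__":
    result = check_tls("example.gov")  # placeholder endpoint
    assert result["protocol"] in ("TLSv1.2", "TLSv1.3"), result
    print(result)
```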
Practical Steps for Effective Verification
When assessing a cloud provider, agencies should ask for concrete evidence rather than philosophical assurances. A suitable request is: “Show me, not tell me, exactly where my data is encrypted as it moves through your system.” An acceptable answer is a detailed data‑flow diagram that marks every encryption and decryption point, along with supporting logs or audit trails that prove those controls operate in real time. If the provider responds only with a high‑level white paper about encryption philosophy, the agency should treat that as a finding requiring further investigation or remediation before granting or maintaining authority to operate.
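One way to operationalize “show me, not tell me” is to cross‑check the encryption points a provider’s diagram claims against audit‑log entries proving each control actually fired within a recent window. The sketch below assumes a hypothetical log schema and control names; any claimed control without operational evidence surfaces as a concrete finding.

```python
# Sketch (hypothetical schema): diff claimed encryption points against
# audit-log evidence that each control operated recently.

from datetime import datetime, timedelta, timezone

claimed_points = {"ingress-tls", "queue-at-rest", "storage-at-rest"}

audit_log = [  # illustrative entries an agency might demand
    {"control": "ingress-tls", "ts": "2025-06-01T12:00:00+00:00"},
    {"control": "storage-at-rest", "ts": "2025-06-01T12:00:05+00:00"},
]

def unproven_controls(claims, log, as_of, max_age=timedelta(days=1)):
    """Return claimed encryption points lacking recent operational evidence."""
    cutoff = as_of - max_age
    seen = {e["control"] for e in log
            if datetime.fromisoformat(e["ts"]) >= cutoff}
    return claims - seen

as_of = datetime(2025, 6, 1, 13, 0, tzinfo=timezone.utc)
print(unproven_controls(claimed_points, audit_log, as_of))
# -> {'queue-at-rest'}: treat as a finding, not a pass
```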
Why Purpose‑Built, Single‑Tenant Architectures Help
Platforms designed from the ground up for sensitive‑data handling—such as Kiteworks’ Secure Gov Cloud—inherently facilitate verification. Single‑tenant architectures isolate each customer’s environment, making data flows observable and traceable without the complexity introduced by multi‑tenant sharing. Moreover, when audit trails are complete, unthrottled, and available as a baseline feature rather than a premium add‑on, reviewers can readily confirm that encryption is applied consistently. In contrast, adapting a general‑purpose productivity suite (like GCC High) to meet government security requirements often forces retrofits that obscure data pathways, turning verification into a prolonged, uncertain endeavor.
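To illustrate why completeness matters, the sketch below assumes audit events carry a per‑tenant monotonic sequence number (an assumption for illustration, not a documented Kiteworks feature). Under such a scheme, gaps in the trail are mechanically detectable; a throttled or sampled trail offers no equivalent guarantee.

```python
# Sketch: with monotonic sequence numbers (assumed), audit-trail gaps
# are provable rather than a matter of trust.

def find_gaps(sequence_numbers: list[int]) -> list[tuple[int, int]]:
    """Return (first_missing, next_present) pairs wherever the trail skips."""
    gaps = []
    ordered = sorted(sequence_numbers)
    for prev, curr in zip(ordered, ordered[1:]):
        if curr != prev + 1:
            gaps.append((prev + 1, curr))
    return gaps

print(find_gaps([101, 102, 103, 104, 105]))  # [] -> verifiably complete
print(find_gaps([101, 102, 105]))            # [(103, 105)] -> missing events
```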
Looking Forward: A Call for Transparency
The five‑year struggle to obtain a simple diagram from Microsoft GCC High serves as a cautionary tale: compliance without transparency can create a false sense of security. Agencies must evolve beyond reliance on periodic authorizations and adopt active, continuous verification practices. By demanding clear, architectural evidence and favoring providers whose designs naturally yield that evidence, government organizations can close the gap between compliance posture and actual risk exposure, ensuring that the nation’s most sensitive data remains protected wherever it resides.

