Key Takeaways:
- The head of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, uploaded sensitive contracting materials to a public version of ChatGPT, triggering an internal review.
- The documents were marked "for official use only" and were not classified, but their upload set off automated alerts designed to prevent disclosure of government materials.
- Gottumukkala had obtained special permission to use ChatGPT, which most DHS employees were prohibited from accessing, and had been involved in several controversies during his brief tenure as CISA director.
- The incident highlights the risks of using AI platforms with sensitive government information and the need for robust security measures to prevent data breaches.
- The use of AI is becoming increasingly common in the workplace, with 12% of adults reporting daily use of AI at their job, according to a new Gallup poll.
Introduction to the Incident
A new report has emerged that the head of the Cybersecurity and Infrastructure Security Agency (CISA), Madhu Gottumukkala, uploaded sensitive contracting materials to a public version of ChatGPT, triggering an internal review at the Department of Homeland Security (DHS). The episode underscores the risks of feeding sensitive government information into public AI platforms and the need for stronger safeguards against inadvertent disclosure.
The Incident and Its Aftermath
According to the report, Gottumukkala shared the files with the AI platform last summer. The documents were marked "for official use only," indicating they were sensitive and not cleared for public release, though not classified. The upload set off automated alerts designed to prevent disclosure of government materials, and top DHS officials conducted an internal review to determine whether any government infrastructure had been harmed; the review's outcome remains unknown. Gottumukkala had obtained special permission to use ChatGPT, which most DHS employees were prohibited from accessing.
Response from CISA and OpenAI
Marci McCarthy, CISA’s director of public affairs, appeared to dismiss the incident in an emailed statement, noting that Gottumukkala "was granted permission to use ChatGPT with DHS controls in place" and describing the use as "short-term and limited." She added that the agency is committed to enhancing America’s dominance in AI, as outlined in a January 2025 executive order from Trump. Representatives for CISA and OpenAI did not immediately respond to requests for comment from The Independent.
Controversies Surrounding Gottumukkala
Gottumukkala has led CISA since May, when DHS Secretary Kristi Noem tapped him to serve as the agency’s deputy director. His brief tenure has been marked by several controversies. At least half a dozen CISA staffers were placed on leave last year after Gottumukkala failed a polygraph test he had requested, according to Politico. He has since denied failing the test, telling a congressman last week that he didn’t "accept the premise of that characterization." The disclosure of ChatGPT use at CISA coincides with a broader embrace of AI by U.S. workers: 12% of adults report using AI daily at their jobs, according to a new Gallup poll.
Implications and Conclusion
The incident underscores how easily sensitive government material can end up on public AI platforms and why robust security measures are needed to prevent such disclosures. As AI use becomes more common in the workplace, organizations handling sensitive information will need to ensure that it is protected and that employees understand the risks of sharing it with these services. The Independent will continue to provide updates on this story as more information becomes available.


