Madhu Gottumukkala, the acting director of the US Cybersecurity and Infrastructure Security Agency (CISA), uploaded at least four sensitive government documents marked “for official use only” to the public version of ChatGPT last summer, Politico revealed this week. The uploads triggered automated security alerts in early August 2025 and prompted a federal investigation. The supreme irony: the person responsible for protecting America’s critical infrastructure from cyber threats couldn’t resist using an AI tool his own agency blocks for security reasons.
What Happened: August Uploads, January Revelation
The incident unfolded in August 2025, when Gottumukkala uploaded at least four government contracting documents to the public version of ChatGPT. While not classified, the files carried the “for official use only” (FOUO) designation and contained sensitive procurement details and internal procedures meant to stay within government channels. Multiple automated security alerts fired in the first week of August, triggering a Department of Homeland Security-level review to assess the harm to government security.
The data left federal infrastructure, landed on OpenAI’s servers, and could be accessed or used for model training. The automated alerts worked perfectly. Yet the incident stayed out of public view until Politico exposed it nearly six months later, on January 27-28, 2026.
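Politico’s reporting doesn’t describe DHS’s monitoring stack, but the alerts behave like standard data loss prevention (DLP) rules: flag outbound content that carries sensitivity markings and is headed for an unapproved destination. The sketch below is a purely hypothetical illustration in Python; the domain list, marking patterns, and function name are invented for this example and are not taken from any federal system.

```python
import re

# Hypothetical DLP-style rule of the kind that could have flagged these uploads.
# Domains and marking patterns are invented for illustration only; real DHS
# tooling is not publicly documented.
UNAPPROVED_AI_DOMAINS = {"chat.openai.com", "chatgpt.com"}

SENSITIVITY_MARKINGS = re.compile(
    r"\b(FOR OFFICIAL USE ONLY|FOUO|CONTROLLED UNCLASSIFIED INFORMATION|CUI)\b",
    re.IGNORECASE,
)


def should_alert(destination_domain: str, payload_text: str) -> bool:
    """Return True when marked content is sent to an unapproved AI service."""
    return (
        destination_domain in UNAPPROVED_AI_DOMAINS
        and SENSITIVITY_MARKINGS.search(payload_text) is not None
    )


# Example: a contracting document marked FOUO posted to public ChatGPT.
print(should_alert("chatgpt.com", "FOR OFFICIAL USE ONLY\nSolicitation terms ..."))  # True
```

A production DLP system inspects network and endpoint telemetry rather than raw strings, but the principle is the same, and it underscores the article’s point: detection was never the weak link here.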
The Pattern: Failed Polygraph, Permission Abuse
This wasn’t an isolated lapse in judgment. Gottumukkala personally requested and received special permission to use ChatGPT in May 2025, shortly after joining CISA, despite the tool being blocked for most DHS employees due to security concerns. The permission was described as “temporary” and “limited-scope.” Three months later, he was uploading sensitive documents.
More troubling: in July 2025, before the uploads, he reportedly failed a counterintelligence polygraph test required for access to highly sensitive intelligence programs. A DHS official told Politico: “He forced CISA’s hand into letting him use ChatGPT, and then he abused it.” And when a CISA spokesperson claimed Gottumukkala “last used ChatGPT in mid-July,” the evidence pointed to August uploads, raising transparency concerns as well.
The Shadow AI Epidemic
Gottumukkala’s case isn’t unique—it’s symptomatic. According to 2026 data, the average organization experiences 223 data policy violations involving generative AI applications every month. The top quartile sees 2,100 incidents monthly. Twenty percent of organizations suffered data breaches from shadow AI in the past year, per IBM research.
Jennifer Ewbank, former CIA deputy director for digital innovation, explained the phenomenon: “Shadow AI happens when there is an urgency around work, available tools don’t meet the need and there’s ambiguity over rules.” Public AI tools like ChatGPT are so powerful that even cybersecurity experts rationalize using them despite policies. Productivity trumps security, and technology alone can’t fix human judgment failures.
If the CISA director can’t resist, what chance do regular employees have? This exposes a cultural problem where convenience beats security at every level—even at the agency setting security standards for the entire federal government.
The $1 Solution Nobody Used
Here’s the kicker: the federal government has access to ChatGPT Enterprise for just $1 per year per agency through OpenAI’s federal offering, which comes with data protection guarantees (no training on inputs, audit logging, isolated infrastructure). DHS also runs an internal AI chatbot entirely on federal infrastructure, though it’s less capable than ChatGPT.
When data is uploaded to public ChatGPT, OpenAI can access it and may use it for model training; the enterprise version prevents this. In other words, a $1 fix for the entire incident was already available. That the CISA director chose the public, unsecured version over approved options suggests either ignorance, unlikely for a cybersecurity chief, or willful disregard for security.
Organizations must provide AI tools that are both secure AND capable, or users will bypass security for convenience. Every time.
What This Reveals
The cybersecurity community reacted with shock and frustration. On Hacker News, the story hit 400+ points and 210+ comments, with reactions ranging from “supreme irony” to “if CISA can’t protect data, who can?” Experts say the case could accelerate efforts to formalize AI governance across federal agencies.
This isn’t just about one official’s mistake—it’s about trust, leadership, and security culture. When the person setting security standards can’t follow them, it erodes confidence in the entire system. The public debate centers on whether this is human nature—everyone wants AI convenience—or disqualifying incompetence. The pattern of behavior suggests the latter: failed polygraph, permission abuse, policy violation.
The DHS investigation continues as of January 30, 2026. What happens next may determine the future of AI adoption in government: will agencies lock down harder, or recognize that employees need better approved alternatives?
Key Takeaways
- The acting CISA director uploaded sensitive “for official use only” documents to public ChatGPT in August 2025, triggering automated security alerts and a federal investigation that became public only this week
- This follows a pattern: May 2025 ChatGPT access request, July 2025 failed counterintelligence polygraph, August 2025 document uploads—raising serious vetting and leadership questions
- Shadow AI is epidemic: the average organization logs 223 generative-AI policy violations a month, 20% of organizations suffered breaches from shadow AI in the past year, and even executives give in to the convenience-over-security trade-off
- A $1/year ChatGPT Enterprise option with full data protection existed but wasn’t used, exposing the gap between policy and practice when approved tools lag in capabilities
- Technology can detect violations—automated alerts worked perfectly—but security culture determines whether anyone acts before public exposure