Caitlin Kalinowski resigned from OpenAI on March 7, 2026, quitting her role as head of robotics in direct response to the company’s Pentagon deal. In her resignation statement, the hardware executive said “surveillance of Americans without judicial oversight and lethal autonomy without human authorization are lines that deserved more deliberation than they got.” She is the first major executive to walk away from OpenAI over its military partnership, and she left just four months after joining the company.
This isn’t just any engineer making noise. Kalinowski spent six years at Apple leading development of the Mac Pro and MacBook Air, then 11 years at Meta spearheading the Quest 2 and Orion AR glasses. When someone with that pedigree quits over ethics after four months, it’s a signal about how rushed and controversial the OpenAI Pentagon deal really was.
The Three-Day Deal After Anthropic Got Blacklisted
OpenAI’s Pentagon announcement didn’t happen in a vacuum. Defense Secretary Pete Hegseth gave Anthropic an ultimatum on February 24: drop AI safeguards against mass surveillance and autonomous weapons, or face designation as a “supply-chain risk.” Anthropic, which had held a $200 million Pentagon contract since 2025, refused. On February 27, President Trump ordered federal agencies to stop using Anthropic products. On February 28—three days after the ultimatum—OpenAI announced its own Pentagon deal.
The timing was, in CEO Sam Altman’s own words, “opportunistic and sloppy.” After internal and external backlash, OpenAI revised the deal on March 3 to exclude intelligence agencies. But the damage was done. Employees saw a major ethical decision rushed through in three days, without adequate deliberation, right after a competitor got punished for maintaining principles. That’s exactly what Kalinowski criticized: deals this important shouldn’t be rushed.
Surveillance and Autonomous Weapons: The Red Lines
Kalinowski’s resignation centers on two specific red lines: mass domestic surveillance without judicial oversight, and fully autonomous weapons that make kill decisions without human authorization. These aren’t abstract concerns—they’re concrete AI applications the Pentagon explicitly wants and Anthropic refused to enable.
In January 2026, the Pentagon issued its AI Acceleration Strategy mandating an “AI-first warfighting force” with all contracted AI models available for “all lawful purposes.” No restrictions. The problem with “lawful purposes” is the loophole it creates: any use that is legal, or simply hidden behind classification, is allowed. Kalinowski’s point isn’t that OpenAI’s revised deal technically prohibits these uses; it’s that the company didn’t deliberate adequately before signing terms that could enable them.
The Electronic Frontier Foundation put it bluntly: “Corporate ethical commitments currently function as the only practical restraint on certain categories of military AI use, a fragile arrangement that this dispute is actively testing.” When government pressure meets corporate ethics, which wins?
Employee Dissent and User Exodus
Kalinowski isn’t alone in opposing the deal. Around 100 OpenAI employees signed a public statement supporting Anthropic’s position—a rare move in corporate tech. Research scientist Aidan McLaughlin posted publicly: “i personally don’t think this deal was worth it.” CNN reported that many employees “really respect” Anthropic for standing up to the Pentagon and feel frustrated with how OpenAI handled the contract.
Users reacted, too. ChatGPT uninstalls surged 295% after the Pentagon deal announcement, and Claude climbed to the top of the App Store charts. The #QuitGPT movement gained traction on social media. Meanwhile, Anthropic CEO Dario Amodei called OpenAI’s messaging around the deal “straight up lies” and “mendacious” in an internal memo to his own employees.
This isn’t a minor PR hiccup. OpenAI’s Pentagon partnership damaged trust with both employees who built the technology and users who rely on it.
What Happens Next
Three questions remain open. First, will more OpenAI employees resign? Kalinowski is the first major executive to leave, but roughly 100 employees have publicly opposed the deal. If more follow her lead, OpenAI faces a talent retention crisis. Historical precedent exists: in 2018, over 4,000 Google employees protested Project Maven, dozens resigned, and Google canceled the military contract.
Second, will Anthropic reach a deal or stay blacklisted? As of March 5, the company is back at the negotiating table with the Pentagon, but CEO Amodei is holding firm on red lines against surveillance and autonomous weapons. The company faces an existential threat from the government blacklist, but caving would undermine its positioning as the ethics-first AI company.
Third, will Congress regulate AI military contracts? The EFF and Center for American Progress both argue that corporate self-governance isn’t enough—Congress needs to establish legal frameworks for what AI can and can’t do in military contexts. Right now, company policies are the only restraint on surveillance and autonomous weapons. That’s a fragile arrangement when the Pentagon can threaten companies with blacklisting.
Key Takeaways
- Caitlin Kalinowski, a hardware executive with 17 years combined at Apple and Meta, resigned from OpenAI after just four months over the company’s Pentagon deal, making her the first major executive to quit over the partnership
- OpenAI announced its Pentagon deal three days after Anthropic got blacklisted for refusing to drop AI safeguards, a timeline CEO Sam Altman admitted looked “opportunistic and sloppy”
- The controversy centers on two red lines: mass surveillance without judicial oversight and fully autonomous weapons, capabilities the Pentagon demands access to and Anthropic refused to enable
- Around 100 OpenAI employees publicly signed a statement opposing their own company’s deal, while ChatGPT uninstalls surged 295% and users switched to Claude en masse
- The AI industry faces a crossroads: follow Anthropic’s ethics-first path (and risk government retaliation) or follow OpenAI’s compromise (and risk losing employee and user trust)
The next few weeks will reveal whether employee ethics or government contracts win in AI’s relationship with military power. Kalinowski made her choice. Now the rest of the industry has to make theirs.

