Cal.com closed its open source repository yesterday after five years, citing AI-powered security threats. The company points to Anthropic’s Mythos AI—which discovered a 27-year-old OpenBSD vulnerability in April—as proof that open source code is too risky in the AI era. The problem with this logic? AI can already analyze compiled binaries. Cal.com just became the first major open source project to justify going closed with “AI security.” Whether that’s foresight or fear-mongering will set the tone for every commercial open source project watching.
What Happened
On April 15, Cal.com announced it’s moving its core codebase to a closed repository. After five years as a flagship open source scheduling platform—the go-to Calendly alternative for self-hosters—the company says AI has fundamentally changed the security calculus.
CEO Bailey Pumfleet’s reasoning: “Open source code is basically like handing out the blueprint to a bank vault. And now there are 100× more hackers studying the blueprint.”
Cal.com isn’t abandoning open source entirely. The company launched Cal.diy, a fully MIT-licensed fork for hobbyists and developers who want to self-host. But the main Cal.com product—handling “high stakes data” for enterprise customers—is now closed. The split strategy: open for experimentation, closed for money.
The timing matters. This announcement came eight days after Anthropic revealed Mythos Preview, an AI model that finds and exploits zero-day vulnerabilities across every major operating system and browser. Cal.com is positioning Mythos as the catalyst. But was this decision reactive, or was Mythos a convenient justification for a planned business shift?
The Mythos Threat Is Real
Anthropic announced Mythos Preview on April 7, and the capabilities are unsettling. The model identified a 27-year-old denial-of-service vulnerability in OpenBSD’s TCP SACK implementation—a subtle integer overflow that lets remote attackers crash any OpenBSD host. OpenBSD has an obsessive focus on security, and this bug hid for nearly three decades.
Mythos also caught a 16-year-old FFmpeg vulnerability that automated testing tools had scanned five million times without flagging. During testing, Anthropic found Mythos could discover and exploit zero-days in every major OS and browser. By February 2026, the company reported over 500 high-severity vulnerabilities in production open source codebases—bugs that survived years of expert review.
Anthropic restricted Mythos access immediately, launching Project Glasswing to coordinate vulnerability patching with Amazon, Apple, Broadcom, Cisco, CrowdStrike, Microsoft, and others before releasing exploit details publicly. The AI security arms race is no longer theoretical.
The Security Through Obscurity Problem
Here’s where Cal.com’s argument falls apart. Closing source code doesn’t stop AI from finding vulnerabilities—it just removes transparency. IBM’s analysis of the post-Mythos landscape is blunt: “Organizations that weather the storm are those with access to source, the tooling to patch, and the community infrastructure to respond at speed.”
The foundation of security through obscurity is eroding because AI models can already reverse-engineer compiled binaries. As security researchers put it: “Anyone with a frontier model can point it at a compiled binary and ask what it does.” Closed source code is opaque to humans, not to AI.
Open source has always relied on community scrutiny to find and fix vulnerabilities faster than attackers can exploit them. Closing the code slows that process. Vulnerabilities in closed source software can linger until the vendor patches them—or until an attacker discovers them first. Cal.com’s CEO argues there are “100× more hackers” analyzing blueprints now, but closing the blueprint doesn’t make the vault harder to crack when AI can study the lock anyway.
The counterargument is that open code gives AI a head start. Maybe. But Mythos found decades-old bugs in OpenBSD—one of the most scrutinized codebases in existence. If transparency didn’t expose those vulnerabilities to human review in 27 years, the problem isn’t openness. It’s that security review didn’t scale with codebase complexity.
The Precedent That Matters
Cal.com is the first major commercial open source project to explicitly cite AI threats as the reason for going closed. If this strategy works—if enterprise customers accept the security rationale and Cal.com’s business thrives—expect others to follow.
The global open source software market is worth $49 billion. Commercial OSS companies have a strong incentive to protect market position, and “AI security” is a more palatable justification than “we want to prevent forks” or “open core didn’t generate enough revenue.”
But Cal.diy’s existence complicates the narrative. If the code is too dangerous to remain open for enterprise use, why is it safe enough for hobbyists? The split suggests this is less about absolute security and more about risk segmentation. Hobbyists experimenting with scheduling tools aren’t high-value targets. Enterprise deployments handling customer data are.
That’s a reasonable business calculation. But framing it as “open source is collapsing under AI-powered threats”—the title of Cal.com’s press release—overstates the case. Open source isn’t collapsing. One company decided the liability of maintaining open code for paying customers outweighs the benefits.
What Cal.com Is Really Protecting
Strip away the AI security framing and other motivations emerge. Closing source prevents competitors from forking the codebase and undercutting Cal.com on price. It signals “enterprise-grade” to buyers who equate proprietary software with support and accountability. It reduces legal liability if a vulnerability in open code leads to a customer breach.
None of those are bad reasons. Commercial open source sustainability is hard, and companies need viable business models. But if the real driver is business strategy, say that. Citing Mythos just eight days after Anthropic’s announcement feels opportunistic.
The market will decide whether Cal.com made the right call. If customers migrate to alternatives like Easy!Appointments or Rallly—both still open source—it signals the community values transparency over Cal.com’s security theater. If Cal.com’s enterprise revenue grows, it validates the closed model and others will copy it.
The answer matters for every commercial open source project. Because if “AI security” becomes the go-to justification for closing code, we’re headed for a wave of exits from openness—not because it’s less secure, but because it’s less profitable to defend.