After two years of fighting an “AI slop tsunami,” cURL creator Daniel Stenberg shut down the project’s bug bounty program on January 21, 2026. The decision affects one of the internet’s most critical infrastructure projects, installed on an estimated 20 to 50 billion devices worldwide, and sets a precedent that could reshape how open source projects handle security research in the AI era.
The Breaking Point
In just the first 21 days of 2026, cURL received 20 AI-generated bug reports. Seven arrived in a single week. None described actual vulnerabilities. Each required significant time from the volunteer security team to properly assess and dismiss.
“We have concluded the hard way that a bug bounty gives people too strong incentives to find and make up ‘problems’ in bad faith,” Stenberg wrote in the GitHub PR announcing the shutdown. The program, which paid out $86,000 across 78 confirmed vulnerabilities over six years, closes at the end of the month.
When AI Help Becomes AI Harassment
The problem escalated rapidly. In January 2024, Stenberg first complained about AI-generated reports that mixed facts from old security issues into fictional new vulnerabilities. By mid-2025, roughly 20 percent of all submissions were AI slop. In July, submissions surged to eight times the normal rate.
Stenberg tried fighting back. In May 2025, he added an instant-ban policy for AI-generated submissions. It didn’t work. The financial incentive of cash payouts for discovered vulnerabilities made the noise unstoppable.
The Register notes that over the program’s six-year run, not a single report produced by AI alone turned up a genuine bug.
The Exception That Proves the Rule
There’s nuance here. In September 2025, security researcher Joshua Rogers submitted a massive list of potential issues found using AI-assisted tools. He discovered 50 real bugs. Stenberg called them “actually, truly awesome findings.”
The difference? Rogers used AI as a research assistant while doing proper security work himself. The slop came from people using AI alone, chasing bounty payments without understanding what they were submitting.
As Cybernews reported, AI can find real bugs—when wielded by researchers who know what they’re doing.
Why This Matters Beyond cURL
cURL isn’t just another open source project. It ships bundled with Windows 10 and later, has been part of macOS since 2001, and comes preinstalled on Android devices from Samsung, Xiaomi, and OPPO. YouTube, Instagram, Skype, and Spotify all use it. Eight of the top 10 car brands rely on cURL, and over 100 million vehicles worldwide contain the software.
This is the first major infrastructure project to shut down a bug bounty program because of AI-generated noise. It won’t be the last.
The bug bounty industry is already struggling. In one recent case, HackerOne allegedly ghosted a researcher for months over an $8,500 payout. Meanwhile, the industry reported a 210 percent increase in valid AI-related vulnerability reports in 2025 compared to 2024, evidence that AI-assisted research can produce genuine findings, yet platforms still haven’t solved the filtering problem.
The Unsustainable Volunteer Model
Stenberg will continue to accept and fix reports of genuine security vulnerabilities; he just won’t pay for them. The question is whether legitimate security researchers will still report issues without a financial incentive.
Open source funding was already in crisis before AI arrived. Now AI has made volunteering harder by flooding maintainers with garbage they must filter by hand. The irony is sharp: a technology pitched as “helping developers” has become, in effect, a denial-of-service attack on critical internet infrastructure.
Other projects face the same calculation. Remove financial incentives and potentially lose legitimate researchers. Keep paying and drown in AI slop. There’s no obvious third option yet.
What Comes Next
This sets a precedent. When a project running on tens of billions of devices decides bug bounties are unsustainable in the AI era, other maintainers will notice. Some will follow. The security community will need new models—reputation-based systems, AI-resistant platforms, or something not yet invented.
Stenberg noted that some AI submitters were just “ordinary misled humans” who deserve compassion rather than ridicule. The problem isn’t individual bad actors. It’s a structural issue: financial incentives plus AI automation equals a broken system.
AI can assist security research. Rogers proved that. But the combination of bounty payments and AI-generated volume has made the traditional bug bounty model untenable for volunteer-maintained projects. That’s the real story here—not that AI is inherently bad, but that our incentive structures couldn’t handle it.