The Hidden Vulnerability
Your AI-powered coding assistant may have just recommended malware. Security researchers at Koi discovered that popular VS Code forks like Cursor and Windsurf recommend extensions that don’t exist in their extension marketplace, creating a dangerous supply chain vulnerability. During a proof-of-concept test, 500 developers installed a placeholder PostgreSQL extension for one reason: their IDE told them to. No verification. No questions asked. Just blind trust in what appeared to be a helpful suggestion.
This isn’t theoretical. It happened. And if you use Cursor, Windsurf, or Google Antigravity, you could be just as exposed.
How the Attack Works
The vulnerability stems from a licensing restriction. Microsoft’s extension marketplace terms only allow official Microsoft products to use it, so VS Code forks rely on Open VSX, an open-source alternative maintained by the Eclipse Foundation. The problem? These forks inherit hardcoded “recommended extensions” lists from the original VS Code, pointing to extensions that often don’t exist in Open VSX yet.
Attackers exploit this gap by uploading malicious extensions with matching names to Open VSX. When your IDE displays the “recommended” badge, you install without thinking. And why wouldn’t you? Your productivity tool is suggesting it.
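To make the gap concrete, here’s a minimal sketch (not the researchers’ tooling) that checks a workspace’s recommended extensions against the Open VSX registry and flags any IDs that aren’t published there, i.e. names a squatter could claim first. It assumes Node 18+, a standard .vscode/extensions.json recommendations file, and the public open-vsx.org metadata endpoint.

```typescript
// check-recommendations.ts
// Sketch: flag workspace-recommended extensions that are missing from Open VSX,
// i.e. names an attacker could register first on open-vsx.org.
// Assumes Node 18+ (global fetch) and the standard .vscode/extensions.json format.
import { readFileSync } from "node:fs";

interface Recommendations {
  recommendations?: string[]; // e.g. ["ms-python.python", "dbaeumer.vscode-eslint"]
}

async function existsOnOpenVsx(extensionId: string): Promise<boolean> {
  const [namespace, name] = extensionId.split(".");
  // Open VSX registry metadata endpoint; returns 404 when the extension is absent.
  const res = await fetch(`https://open-vsx.org/api/${namespace}/${name}`);
  return res.ok;
}

async function main() {
  const raw = readFileSync(".vscode/extensions.json", "utf8");
  // extensions.json is often JSONC; strip // line comments before parsing.
  const json = raw.replace(/^\s*\/\/.*$/gm, "");
  const { recommendations = [] } = JSON.parse(json) as Recommendations;

  for (const id of recommendations) {
    if (await existsOnOpenVsx(id)) {
      console.log(`present on Open VSX: ${id}`);
    } else {
      // A missing ID is exactly the gap described above: whoever publishes
      // this name to Open VSX first inherits the "recommended" install prompt.
      console.warn(`NOT on Open VSX (squattable): ${id}`);
    }
  }
}

main().catch(console.error);
```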
The risk goes beyond nuisance. VS Code extensions have extensive permissions: file system access, credential harvesting from environment variables, network communication, and arbitrary code execution. A malicious extension doesn’t just slow you down. It can exfiltrate your company’s API keys, inject backdoors into your codebase, or compromise your entire development environment.
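To see why that matters, here’s a deliberately harmless sketch of an extension’s activation hook. Nothing in it is specific to any real attack; it simply shows that an extension runs as ordinary Node.js code with your user’s environment, files, and network, with no permission prompt between install and execution.

```typescript
// extension.ts
// Harmless illustration of the access level ANY VS Code-style extension gets
// the moment it activates: full Node.js code running as your user.
import * as vscode from "vscode";
import { readdirSync } from "node:fs";
import * as os from "node:os";

export function activate(_context: vscode.ExtensionContext) {
  // 1. Environment variables: API keys, tokens, cloud credentials.
  const suspicious = Object.keys(process.env).filter((k) =>
    /KEY|TOKEN|SECRET|PASSWORD/i.test(k)
  );
  console.log(`env vars an extension can read: ${suspicious.join(", ")}`);

  // 2. File system: source code, .env files, SSH keys under the home directory.
  const homeEntries = readdirSync(os.homedir()).slice(0, 5);
  console.log(`home directory is readable: ${homeEntries.join(", ")}`);

  // 3. Network: nothing stops an extension from POSTing what it just read to an
  //    attacker-controlled server via fetch() — omitted here on purpose.
}

export function deactivate() {}
```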
Vendor Responses: A Mixed Bag
Koi Security reported the vulnerability in late November 2025 through responsible disclosure. The responses varied dramatically.
Cursor deployed a fix on December 1st, removing vulnerable recommendations within days. Google took longer, removing 13 extension recommendations on December 26th and marking the issue resolved on January 1st, 2026. Both companies treated this seriously.
Windsurf? As of January 6th, 2026, they haven’t responded to the researchers. Radio silence.
When your productivity tool becomes an attack vector, response time reveals priorities. Developers deserve vendors who take security as seriously as features.
Part of a Bigger Pattern
This isn’t an isolated incident. It’s part of an accelerating trend: attackers are shifting focus from production systems to the developers who build them. The logic is sound. Compromise one developer’s machine, and you potentially gain access to an entire company’s codebase, credentials, and infrastructure.
The numbers back this up. In 2025, 30% of data breaches involved supply chain components, a 100% increase from the previous year. Recent attacks like Shai-Hulud 2.0 compromised over 25,000 repositories by targeting npm package maintainers. PyPI and GitHub continue battling waves of malicious packages designed to steal developer credentials.
AI-powered coding tools like Cursor and Windsurf are exploding in popularity, with millions of developers adopting them to boost productivity. But this rapid growth creates tension between innovation speed and security rigor. Moving fast is valuable. Breaking your users’ development environments is not.
The VS Code fork vulnerability highlights a systemic issue: trust has become a liability. When developers automatically accept AI recommendations, assume marketplace vetting, or trust that “recommended” means “safe,” they create attack surface. And attackers are learning to exploit that trust at scale.
What Developers Should Do Now
First, audit your installed extensions immediately. Open your IDE’s extension panel and review what’s actually running in your environment. Look for unfamiliar extensions, especially ones you don’t remember installing manually. If you use Cursor, Windsurf, or Google Antigravity, pay special attention to recently added extensions. A small inventory script is sketched after these steps.
Second, verify extension sources before installing. Check for the verified publisher badge, but don’t rely on it exclusively. Research has shown that 7% of harmful extensions still had verified badges. Review the extension’s ratings, download count, and recent reviews for red flags. If something feels off, skip it.
Third, question IDE recommendations. Your coding assistant doesn’t vet its suggestions for security. It’s software making automated suggestions based on patterns, not a security team approving each recommendation. Treat recommendations as hints, not endorsements.
Fourth, review extension permissions. Does a linting tool really need network access? Should a theme extension request file system write permissions? If permissions seem excessive for the stated functionality, that’s a warning sign.
Finally, enable privacy and security features in your tools. Cursor and Windsurf both offer privacy modes and “zero data” options, but they’re not enabled by default. If you’re working with sensitive code or company intellectual property, configure these settings proactively.
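As a starting point for the audit in step one, here’s a minimal inventory sketch that reads each editor’s extensions directory and prints publisher, name, version, and activation events. The Cursor and Windsurf directory paths are assumptions; adjust them to wherever your editor actually stores extensions.

```typescript
// audit-extensions.ts
// Sketch: inventory locally installed extensions across a few editors so you can
// spot anything you don't remember installing. Directory names for Cursor and
// Windsurf are assumptions about where each fork keeps extensions.
import { readdirSync, readFileSync, existsSync } from "node:fs";
import { join } from "node:path";
import * as os from "node:os";

const EXTENSION_DIRS = [
  join(os.homedir(), ".vscode", "extensions"),
  join(os.homedir(), ".cursor", "extensions"),   // assumed Cursor location
  join(os.homedir(), ".windsurf", "extensions"), // assumed Windsurf location
];

for (const dir of EXTENSION_DIRS) {
  if (!existsSync(dir)) continue;
  console.log(`\n== ${dir} ==`);

  for (const entry of readdirSync(dir)) {
    const manifest = join(dir, entry, "package.json");
    if (!existsSync(manifest)) continue;

    const pkg = JSON.parse(readFileSync(manifest, "utf8"));
    // publisher.name is the ID to verify against the marketplace listing;
    // "*" activation means the extension runs on every editor start.
    const activation = (pkg.activationEvents ?? []).join(", ") || "(none declared)";
    console.log(`${pkg.publisher}.${pkg.name}@${pkg.version}  activates on: ${activation}`);
  }
}
```

Anything with an unfamiliar publisher, or that activates on every startup for no obvious reason, deserves a closer look before you keep it.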
Trust Is Earned, Not Assumed
The VS Code fork vulnerability teaches a crucial lesson: convenience and security exist in tension. AI coding tools promise extraordinary productivity gains, and they deliver. But automation doesn’t eliminate the need for security judgment. It amplifies it.
As AI becomes deeply embedded in developer workflows, trust becomes the attack surface. The developers who thrive will be those who embrace AI’s capabilities while maintaining healthy skepticism about its recommendations. Your productivity tool is only as secure as the vigilance you bring to using it.
Five hundred developers learned this lesson the hard way. Don’t be the next one.