Every AI coding tool you’re using right now is vulnerable. Security researcher Ari Marzouk just dropped “IDEsaster”—over 30 CVEs affecting Cursor, GitHub Copilot, Windsurf, Zed, and every other major AI IDE. The finding? 100% of tested tools can be exploited for data theft and remote code execution through prompt injection. If you’ve used AI coding assistants in the past week (and 65% of developers have), your credentials, source code, and build pipelines might already be at risk.
Every AI IDE Failed the Test
Ari Marzouk spent six months testing AI-powered development tools. The result: 30+ vulnerabilities, 24 CVEs assigned, and a 100% failure rate. Cursor, GitHub Copilot, Windsurf, Zed.dev, Roo Code, Junie, Cline—none escaped. “Multiple universal attack chains affected each and every AI IDE tested,” Marzouk told The Hacker News.
This isn’t a bug in one tool. It’s a fundamental design flaw. IDEs were never built for autonomous AI agents that can read, edit, and execute files. Current LLM architectures can’t distinguish between trusted developer instructions and untrusted user input. That makes every AI coding assistant a potential attack vector.
How the Attacks Work
The attack chain is deceptively simple. Attackers plant hidden instructions in places AI assistants scan: README files, configuration files like .cursorrules, even filenames. When your AI reads these, it follows the instructions—because it can’t tell the difference between your prompts and an attacker’s.
Once context is hijacked, exploitation follows. Take CVE-2025-64671, a high-severity flaw in GitHub Copilot’s JetBrains plugin. Open a malicious repository or review a social-engineered pull request, and the attacker can execute arbitrary commands. Your tokens, signing keys, and build pipeline credentials are suddenly accessible.
Cursor’s vulnerabilities are even more elaborate. CVE-2025-54135, dubbed “CurXecute,” exploits an inconsistency in permission checks. Creating a new .cursor/mcp.json file doesn’t require user approval—but editing an existing one does. Attackers trick the AI into writing a malicious Model Context Protocol configuration file, triggering remote code execution without any user interaction.
Then there’s CVE-2025-54136, “MCPoison.” Once you approve an MCP server, Cursor trusts it by name, not by contents. An attacker can modify the configuration after approval, and malicious commands execute silently. It’s a persistent backdoor disguised as a trusted tool.
The most clever trick? Data exfiltration via JSON schema validation. The AI reads sensitive files—your .env, API keys, credentials—then writes output to a JSON file with a remote schema hosted on an attacker’s domain. During validation, your secrets leak out. As one developer put it, “The JSON schema exfiltration trick is genuinely clever because it abuses validation functionality that looks totally benign in logs.”
Supply Chain Nightmare
Developer workstations are critical trust boundaries. Compromise one, and you’ve breached the entire software supply chain. Here’s the scenario: you open a repository, maybe to review a contribution or explore a trending project. A hidden prompt in the README hijacks your AI assistant. It reads your .env file, exfiltrates AWS credentials, and sends them to an attacker-controlled server. You notice nothing. Your credentials are gone.
Now the attacker has access to your private repositories, build pipelines, production systems, maybe even customer data. This isn’t hypothetical. 76% of organizations expose their software supply chain to risk due to inadequate evaluation of AI-generated code, according to Black Duck research. And 65% experienced a supply chain attack in the past year.
The irony: we adopted AI coding tools to ship faster. Now they’re the fastest route to a supply chain breach.
The Root Problem
Marzouk’s research exposes a deeper issue. “All AI IDEs effectively ignore the base software (IDE) in their threat model,” he notes. We bolted AI agents onto existing tools without redesigning security assumptions. LLMs process all text as a single continuous prompt. There’s no technical mechanism to separate trusted instructions from untrusted input.
Marzouk coined a new paradigm: “Secure for AI.” It means designing products with awareness of how AI agents can be manipulated. Traditional security models assume passive tools. AI assistants are active, autonomous, and vulnerable to manipulation. That changes everything.
NIST called prompt injection “generative AI’s greatest security flaw.” OWASP lists it as the number one vulnerability in the LLM Applications Top 10. The industry recognizes the problem. Now tools need to catch up.
What You Should Do
First, update your tools immediately. GitHub Copilot released patches in December 2025. If you’re using Cursor, upgrade to version 1.3.9 or later—it fixes CurXecute and MCPoison. Enable auto-updates on all AI IDEs.
Second, audit permissions. Review what files your AI assistant can access. Restrict sensitive directories like .env, credentials/, and keys/. Disable auto-approve features. Make AI actions require manual review.
Third, change your behavior. Don’t blindly trust AI suggestions. Review code before accepting it. Be suspicious of repositories from unknown sources. Check README files for hidden characters that might contain malicious prompts.
For organizations, treat AI-generated code as untrusted by default. Run static analysis tools like CodeQL, Bandit, or Semgrep on everything. Implement security-focused prompting: instead of “generate user login,” ask for “user login with input validation, bcrypt password hashing, and rate limiting.” Train developers to include security requirements in their prompts.
Finally, maintain audit trails. Track AI usage, prompts, generated code, and security reviews. 95% of organizations use AI for development, but only 24% conduct comprehensive security evaluations. Don’t be part of the 76% exposing themselves unnecessarily.
AI coding assistants aren’t going away. But the era of trusting them blindly is over. IDEsaster proved that every tool is vulnerable until proven otherwise. Update your software, audit your permissions, and treat AI output with the skepticism it deserves.










