
MCP adoption has moved faster than anyone expected. The Model Context Protocol now powers over 10,000 public servers and clocks 97 million monthly SDK downloads. Every major AI provider — Anthropic, OpenAI, Google, Microsoft — is on board. Cursor, Zed, and Windsurf embed MCP clients natively. Enterprise teams are connecting it to production databases, file systems, and cloud credentials. And according to research published this spring, 1,800 of those public servers have no authentication whatsoever.
That’s not the worst of it.
The STDIO Flaw: A Design Choice Anthropic Won’t Fix
In April 2026, OX Security published what they called “The Mother of All AI Supply Chains” — a vulnerability audit that found a fundamental flaw in MCP’s STDIO transport, the default mechanism for connecting AI agents to local tools. STDIO executes operating system commands without sanitization or validation. The flaw isn’t in a specific implementation. It’s in the spec itself, and it propagated into every official SDK: Python, TypeScript, Java, and Rust.
OX researchers found 7,000 servers on public IPs with STDIO transport active and estimated 200,000 vulnerable instances total. They confirmed arbitrary command execution on six live production platforms with paying customers. The CVE list covers LiteLLM, LangFlow, Flowise, Windsurf, and others — more than 10 vulnerabilities rated high or critical. CVE-2026-30615 (Windsurf) required zero user interaction to exploit.
Anthropic’s response: the behavior is by design. Input sanitization is the developer’s responsibility.
That answer is technically accurate and practically inadequate. Every downstream project that trusted the official SDKs inherited this risk. If you built an MCP server using any of the official libraries and you’re running STDIO transport, you need to add input sanitization now — not after the next spec update.
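Since the spec won't do it for you, the sanitization layer has to live in your own server code. A minimal sketch, assuming a command allowlist (the command names and function below are hypothetical, not part of any official SDK):

```python
import shlex

# Hypothetical allowlist of binaries this server is permitted to spawn.
ALLOWED_COMMANDS = {"git", "ls", "cat"}

def sanitize_stdio_args(command: str, args: list[str]) -> list[str]:
    """Validate a command and quote its arguments before anything
    reaches the OS shell. Raises ValueError for disallowed commands
    or control characters that could smuggle extra commands."""
    if command not in ALLOWED_COMMANDS:
        raise ValueError(f"command not allowed: {command!r}")
    safe_args = []
    for arg in args:
        # Null bytes and newlines are rejected outright;
        # everything else is shell-quoted.
        if "\x00" in arg or "\n" in arg:
            raise ValueError("illegal control character in argument")
        safe_args.append(shlex.quote(arg))
    return [command, *safe_args]
```

The key property is that every argument is treated as data, never as shell syntax: `shlex.quote` wraps `status; rm -rf /` into a single quoted token instead of letting the semicolon start a second command.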
The Authentication Gap: Setup Guides Predate the Security Model
OAuth wasn’t added to the MCP spec until March 2025. A lot of teams set up their first MCP servers before that, using documentation that simply didn’t include authentication. Those servers are still running.
A review of first-wave enterprise MCP deployments (Q4 2025-Q1 2026) found that 9 out of 12 had OAuth disabled or still in development mode. The root cause wasn’t negligence — it was teams following guides that predated the security model. The outcome is the same either way: 1,800+ servers sitting on the public internet, fully functional, accepting any connection.
An unauthenticated MCP server connected to production data doesn’t just expose an API endpoint. It exposes everything the AI assistant has permission to touch — which, for teams that skipped least-privilege scoping, can mean databases, file systems, and cloud credentials in a single connection.
Two More Attack Surfaces You Probably Haven’t Secured
The authentication gap and the STDIO flaw get the headlines, but two other attack vectors are generating active incidents in 2026.
Supply chain. MCP servers are distributed via npm and PyPI with no universal verification. Between April 21 and April 23, 2026, three coordinated supply chain attacks hit npm, PyPI, and Docker Hub in a 48-hour window — all targeting developer secrets: API keys, cloud credentials, SSH keys. An analysis by MCPwn found that every exploited MCP server had fewer than three maintainers (most had exactly one), and all scored below 55 on commit activity. If you’re pulling MCP server packages from the public registry without pinning versions or verifying checksums, you’re trusting an ecosystem that’s actively being targeted.
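Checksum verification doesn't need special tooling; the stdlib is enough. A minimal sketch, assuming you record a known-good SHA-256 digest when you first review a package (the function name is hypothetical):

```python
import hashlib
from pathlib import Path

def verify_artifact(path: Path, expected_sha256: str) -> bool:
    """Compare a downloaded package archive against the digest
    recorded at review time. Install only if this returns True."""
    digest = hashlib.sha256(path.read_bytes()).hexdigest()
    return digest == expected_sha256
```

The same idea is built into the package managers themselves: `pip install --require-hashes` against a hashed requirements file, or an npm lockfile with `integrity` fields, gives you this check on every install rather than as a one-off script.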
Prompt injection. Tool poisoning embeds hidden instructions in MCP tool descriptions — visible to the model, invisible to you. An agent that loads a poisoned tool description will execute the injected instructions as legitimate user commands. The injection surface includes code comments, README files, and any external content the agent reads during context loading. Palo Alto Networks Unit 42 documented new attack vectors through the MCP sampling interface in 2026. This is not a theoretical concern; it’s an active exploitation pattern.
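You can screen tool metadata before it ever reaches the model. A minimal sketch with a few illustrative patterns — the pattern list here is hypothetical and far from exhaustive; real deployments would pair this with human review of any flagged tool:

```python
import re

# Hypothetical patterns that flag instruction-like text hiding in tool metadata.
SUSPICIOUS_PATTERNS = [
    re.compile(r"ignore (all )?(previous|prior) instructions", re.I),
    re.compile(r"do not (tell|show|reveal)", re.I),
    re.compile(r"<!--.*?-->", re.S),  # HTML comments are invisible in many renderers
]

def screen_tool_description(description: str) -> list[str]:
    """Return the patterns a tool description trips; empty list means clean.
    Run this on every tools/list response before exposing it to the model."""
    return [p.pattern for p in SUSPICIOUS_PATTERNS if p.search(description)]
```

Pattern matching alone won't catch a determined attacker, but it raises the cost of the lazy poisoning attempts that dominate current incident reports, and it gives you an audit log of what each tool claimed about itself.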
What to Do Now
The 2026 MCP roadmap, published this month, targets stateless horizontal scaling and a .well-known metadata standard for server discoverability. Those are real improvements for production deployments. They don’t fix the security gap you have today.
Here’s the short list:
- Enable OAuth 2.1 immediately on any MCP server with internet exposure. Use short-lived JWTs (one hour or less) and mandatory audience-claim validation against your server’s address.
- Scope tokens to minimum permissions. Filter the tools/list response based on JWT scope claims. Enforce scope checks inside each tool handler as a second layer.
- Implement input sanitization on STDIO. Anthropic won’t patch this upstream. Every argument that reaches your OS shell needs to be treated as untrusted input.
- Audit your MCP dependencies. Pin npm and PyPI package versions. Verify checksums. Treat single-maintainer MCP packages with the same caution you’d apply to any unreviewed dependency.
- Treat tool metadata as untrusted. Validate all tool descriptions before the model sees them. External tool metadata is an injection surface.
- Never log authorization headers or token values. Tokens appearing in application logs are a primary credential theft vector.
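The two-layer scope check from the list above can be sketched in a few lines. The tool names and scope strings here are hypothetical; the point is that the same scope table drives both the filtered tools/list response and the per-handler enforcement, so a client that learns a tool name it wasn't shown still can't call it:

```python
# Hypothetical mapping from tool name to the scope required to use it,
# assuming scopes arrive as claims in a validated JWT.
TOOL_SCOPES = {
    "read_file": "files:read",
    "write_file": "files:write",
    "run_query": "db:read",
}

def filter_tools(granted_scopes: set[str]) -> list[str]:
    """First layer: only advertise tools the token's scopes permit."""
    return [tool for tool, scope in TOOL_SCOPES.items() if scope in granted_scopes]

def check_scope(tool: str, granted_scopes: set[str]) -> None:
    """Second layer: re-check inside every tool handler, so hiding a
    tool from the listing is never the only thing standing in the way."""
    required = TOOL_SCOPES.get(tool)
    if required is None or required not in granted_scopes:
        raise PermissionError(f"scope missing for tool {tool!r}")
```

Filtering the listing is a usability measure; the handler-level check is the actual security boundary.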
MCP has reached the stage where it’s easy to deploy and genuinely useful. That’s exactly when the security debt catches up. If your team shipped an MCP integration in the last 12 months without a security review, run one now.