On December 9, 2025, the Linux Foundation launched the Agentic AI Foundation with three founding projects: Anthropic’s Model Context Protocol, Block’s goose, and OpenAI’s AGENTS.md. Eight platinum members—AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI—are backing the initiative with $350,000 memberships each. The mission: bring open governance and standardization to agentic AI development. But there’s a problem the foundation inherited along with its flagship project: approximately 1,000 MCP servers are exposed on the public internet with no authorization controls.
Why the Agentic AI Foundation Exists
The foundation exists because agentic AI tooling risks fragmenting into incompatible proprietary systems. Jim Zemlin, Executive Director of the Linux Foundation, framed it optimistically: “Within just one year, MCP, AGENTS.md and goose have become essential tools for developers building this new class of agentic technologies.” That’s true. Model Context Protocol has 10,000+ published servers. AGENTS.md has been adopted by 60,000+ open-source projects. The growth is undeniable. But “within just one year” is also the problem—this ecosystem moved too fast for security to keep pace.
Three Projects Powering the AI Agent Boom
AAIF brings together three distinct projects, each solving a different piece of the agentic AI puzzle. Model Context Protocol, introduced by Anthropic in November 2024, acts as “USB-C for AI applications”: a universal standard for connecting AI models to tools and data. It rests on three core primitives: Resources for data retrieval, Tools for actions with side effects, and Prompts for reusable templates. OpenAI officially adopted MCP across ChatGPT’s desktop app in March 2025.

Block’s goose is a local-first AI agent framework that autonomously reads and writes files, runs code, and installs dependencies. Developers have used it for real migrations, such as Ember to React and Ruby to Kotlin.

AGENTS.md, released by OpenAI in August 2025, is simpler: a markdown file that lives alongside README.md and gives AI coding agents project-specific guidance. The project’s own pitch: “Think of it as a README for agents.” It’s supported by Cursor, GitHub Copilot, VS Code, and most major agent frameworks.
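To make the MCP side concrete, here’s a minimal server sketch using the official TypeScript SDK (@modelcontextprotocol/sdk). The server name, tool name, and logic are illustrative, and the call shapes reflect the SDK’s documented 1.x surface; treat this as a sketch, not canonical usage.

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// An illustrative MCP server exposing a single Tool over stdio.
const server = new McpServer({ name: "demo-server", version: "1.0.0" });

// Tools are MCP's primitive for actions; this one is a harmless stand-in.
server.tool("greet", { name: z.string() }, async ({ name }) => ({
  content: [{ type: "text", text: `Hello, ${name}!` }],
}));

// stdio transport: the server runs locally and talks to its client process.
const transport = new StdioServerTransport();
await server.connect(transport);
```

And because AGENTS.md is freeform markdown with no required schema, a hypothetical file might be as short as this (the section names are conventions, not requirements):

```markdown
# AGENTS.md

## Setup
- Run `npm install`, then `npm run build`, before anything else.

## Testing
- Run `npm test`; all tests must pass before committing.

## Code style
- TypeScript strict mode; avoid `any`.
```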
These aren’t side projects. They’re production infrastructure powering the AI agent boom. The problem is that infrastructure was deployed before the security fundamentals were solved.
MCP Security Crisis: 1,000 Exposed Servers
The headline number bears repeating: roughly 1,000 MCP servers sit on the public internet with no authorization controls. Authorization in the MCP specification is technically optional—OAuth 2.1 is “recommended” but not required. A security researcher put it bluntly: “It’s surprising to see a new core protocol introduced in 2025 where security isn’t ‘secure by default.’”

The vulnerabilities are specific and serious. Confused deputy attacks allow agents with legitimate privileges to be manipulated into misusing their authority. Token passthrough issues let servers forward unvalidated tokens to downstream services. CVE-2025-6514, a vulnerability in the popular “mcp-remote” project, enabled full remote code execution on client machines. And in mid-2025, an agent running in Cursor with elevated Supabase access processed support tickets containing malicious SQL instructions and leaked tokens into a public thread.
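Token passthrough is the easiest of these to picture. In a hypothetical handler like the sketch below (the endpoint and function are invented for illustration), the server forwards whatever bearer token the client supplied straight to a downstream API without ever validating it:

```typescript
// Anti-pattern: token passthrough in a hypothetical MCP tool handler.
async function fetchUserData(clientToken: string): Promise<unknown> {
  // BAD: the token is never validated. The server has no idea who issued it,
  // for which audience, or with what scopes, yet it reuses it downstream.
  const res = await fetch("https://api.example.com/user", {
    headers: { Authorization: `Bearer ${clientToken}` },
  });
  return res.json();
}
```

The specification update described next exists largely to rule this pattern out.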
The scope of the problem is quantifiable. Red Hat and Pillar Security found that 7.2 percent of MCP servers contain general vulnerabilities and 5.5 percent exhibit MCP-specific “tool poisoning.” A new MCP specification released on June 18, 2025, attempted to fix these issues by classifying all MCP servers as OAuth 2.0 Resource Servers and requiring token validation before processing requests. But deployment of the updated spec lags far behind adoption of the original.
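What the updated specification demands, roughly, is that a server validate every inbound token before touching a request. Here’s a minimal sketch of that posture for an HTTP-transport MCP server, assuming Express and the jose library; the issuer and audience URLs are hypothetical placeholders:

```typescript
import express from "express";
import { createRemoteJWKSet, jwtVerify } from "jose";

// Hypothetical identity provider; swap in your real issuer's JWKS endpoint.
const JWKS = createRemoteJWKSet(
  new URL("https://auth.example.com/.well-known/jwks.json")
);

const app = express();

// Reject any request whose bearer token wasn't issued *for this server*.
// Audience validation is the heart of the June 2025 change: a token minted
// for some downstream API must not be accepted (or forwarded) here.
app.use(async (req, res, next) => {
  const header = req.headers.authorization ?? "";
  const token = header.startsWith("Bearer ") ? header.slice(7) : null;
  if (!token) {
    return res.status(401).json({ error: "missing bearer token" });
  }
  try {
    await jwtVerify(token, JWKS, {
      issuer: "https://auth.example.com",
      audience: "https://mcp.example.com", // this server's identity
    });
    next();
  } catch {
    res.status(401).json({ error: "invalid or mis-audienced token" });
  }
});

// ...mount the MCP request handler behind the middleware here...
```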
Trust and Accountability Challenges
Beyond technical vulnerabilities, agentic AI faces trust and accountability challenges that open governance alone can’t solve. Business leaders are most comfortable delegating data analysis to AI agents, with 38 percent trusting them for that task, but only 20 percent trust agents with financial transactions and only 22 percent with autonomous employee interactions. ISACA captured the core issue: “Agentic AI presents a growing challenge for audit and governance functions, primarily because its decision-making processes often lack clear traceability.” When a privacy breach or error occurs, pinpointing responsibility across developers, deployers, and users is difficult. LLM-driven agents often lack human-readable reasoning trails unless explicitly programmed to log them. Palo Alto Networks identified the root cause: “The core issue isn’t bad code, but a lack of board-level oversight and clear governance.”
Can Open Governance Fix What Speed Broke?
AAIF’s open governance model is designed to address these problems. Platinum members get board seats and voting power, but technical decisions are made by project maintainers and steering committees, not corporate fiat. That structure should provide stability—projects won’t vanish or pivot based on a single company’s whims. Standardization reduces friction: learn MCP once, use it everywhere. But here’s the tension: eight companies paid $350,000 each for platinum seats while 1,000 servers remain exposed. The foundation’s success will be measured not by adoption metrics, which are already impressive, but by whether it can slow down long enough to get security right.
Developers using these tools need to take security seriously regardless of what the foundation does. Adopt a Zero Trust model for MCP deployments. Use Just-in-Time access with time-limited credentials. Implement comprehensive audit logging. Validate all tool inputs, as the MCP specification requires; a sketch of those last two practices follows below.

The agentic AI boom is real: 65 percent of developers use AI coding tools weekly, and 70 percent of agent users report productivity gains. But positive sentiment has dropped to 60 percent in 2025, down from over 70 percent in 2023 and 2024. That gap between adoption and trust is what AAIF needs to close.
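As promised, here’s what the input-validation and audit-logging advice can look like at the tool boundary. The ticket-lookup handler, schema, and log format below are all hypothetical; the point is that raw agent input never reaches a backend unchecked, and every decision leaves a trace:

```typescript
import { z } from "zod";

// Hypothetical input schema: a ticket ID must match a strict pattern,
// so free-form text (including SQL) never reaches the backend.
const TicketQuery = z.object({
  ticketId: z.string().regex(/^[A-Z]{2}-\d{1,8}$/),
});

// Minimal structured audit log; in production, ship these somewhere durable.
function auditLog(event: string, detail: unknown): void {
  console.log(JSON.stringify({ ts: new Date().toISOString(), event, detail }));
}

export function handleLookupTicket(rawArgs: unknown): string {
  const parsed = TicketQuery.safeParse(rawArgs);
  if (!parsed.success) {
    auditLog("tool.rejected", { reason: parsed.error.issues });
    throw new Error("invalid tool input");
  }
  auditLog("tool.invoked", { tool: "lookup_ticket", args: parsed.data });
  // ...fetch the ticket with a parameterized query, never string concatenation...
  return `ticket ${parsed.data.ticketId}`;
}
```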
The Agentic AI Foundation inherits both the promise and the mess of rapid innovation. Standardization is essential. Open governance beats corporate control. But the industry is once again building infrastructure first and asking security questions later. Whether AAIF can fix that pattern or just formalize it remains to be seen.