On December 9, OpenAI and Anthropic co-founded the Agentic AI Foundation under the Linux Foundation, alongside Block and backed by Google, Microsoft, AWS, Bloomberg, and Cloudflare. Two fierce competitors uniting to build open standards is unprecedented; many are calling it the "Linux moment" for AI agents. The developer community, however, isn't buying the collaboration narrative yet. Many see this as "open-washing": companies donating already-popular tools while keeping their models, training data, and safety infrastructure locked behind closed doors.
Three Standards, Already At Scale
The foundation didn't launch with promises; it launched with production deployments. Anthropic contributed the Model Context Protocol (MCP), a universal standard connecting AI models to tools and data that already powers 10,000+ active servers and sees 97 million monthly SDK downloads across Python and TypeScript. ChatGPT uses it, as do Cursor, Gemini, Microsoft Copilot, and VS Code.
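To make the protocol concrete, here is a minimal sketch of what an MCP tool invocation looks like on the wire. MCP messages follow JSON-RPC 2.0; the `tools/call` method is part of the spec, but the tool name and arguments below are invented for illustration.

```python
import json

# MCP is built on JSON-RPC 2.0: a client asks a server to invoke a tool
# by sending a "tools/call" request. The tool name ("search_docs") and
# its arguments here are hypothetical, not from any real server.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_docs",                 # a tool the server advertised
        "arguments": {"query": "rate limits"},
    },
}

# Messages are serialized as JSON and carried over stdio or HTTP.
wire_message = json.dumps(request)
print(wire_message)
```

Because the envelope is plain JSON-RPC, any language with a JSON library can speak the protocol; the official Python and TypeScript SDKs handle this framing for you.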
OpenAI donated AGENTS.md, a simple markdown format that tells AI coding agents how to navigate repositories, run tests, and follow project standards. It spread virally, reaching 60,000+ open-source repositories in the four months after its August 2025 release. Block contributed goose, an AI agent framework that runs locally and works with any LLM (DeepSeek, OpenAI, Google, Anthropic), giving developers control without vendor lock-in.
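To illustrate the format, here is a hypothetical AGENTS.md for a typical Node.js project. AGENTS.md prescribes no fixed schema, so the section names and commands below are invented; it is just plain markdown that agents read from the repository root.

```markdown
# AGENTS.md

## Setup
- Install dependencies with `npm install`.

## Testing
- Run the full suite with `npm test` before committing.
- Lint with `npm run lint`; fix warnings rather than suppressing them.

## Conventions
- Source lives in `src/`; tests mirror it under `tests/`.
- Keep the existing TypeScript strict-mode settings.
```

The point of the format is exactly this simplicity: a human-readable file that doubles as machine-readable guidance for any coding agent.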
These aren’t experimental protocols awaiting adoption. They’re already deployed at Fortune 500 companies and embedded in the tools developers use daily. The Agentic AI Foundation provides vendor-neutral governance, ensuring no single company controls their evolution.
The Open-Washing Question
Hacker News developers didn't celebrate the announcement; they questioned it. One comment cut to the core: "This foundation looks like a governance shell designed to calm developers who are increasingly tired of black-box AI running the show. It's a pre-emptive shield against criticism rather than genuine collaboration." Another noted that OpenAI's AGENTS.md donation "costs them nothing since it already spread across 60K+ repos."
The skepticism isn’t cynicism—it’s pattern recognition. The contributed projects were already widely adopted, meaning low cost to the companies involved. Meanwhile, the assets that define competitive advantage remain proprietary: GPT-4’s architecture, Claude’s training data, safety tooling, and decision-making processes. None of that is being opened.
Critics argue this looks like soft self-regulation as governments pressure AI companies about transparency and interoperability. The European Union, U.S. Congress, and other regulators are pushing for open standards. Launching the AAIF now could be strategic positioning: demonstrate collaboration before mandates arrive.
Why Agentic AI Standards Matter: Fragmentation vs. Interoperability
Without standards, the AI agent ecosystem fragments. Every company builds incompatible systems, preventing developers from composing agents across platforms or switching providers. This isn’t hypothetical—multiple competing protocols already exist: A2A, ACP, ANP—all trying to solve the same interoperability problem.
Research shows "early agent projects developed in silos, each with their own APIs, task formats, and frameworks. This fragmentation made it nearly impossible to compose agents into broader systems." The cost isn't just technical; it's strategic. Developers building AI agents today face a choice: bet on one vendor and risk lock-in, or wait for standards to emerge and fall behind competitors.
The Linux Foundation and W3C transformed their domains by establishing neutral governance where competitors collaborated on shared infrastructure while maintaining their competitive products. The AAIF follows this model. Eight platinum members—AWS, Anthropic, Block, Bloomberg, Cloudflare, Google, Microsoft, and OpenAI—back the foundation with funding for community programs and protocol development.
What Developers Should Do Now
Don't wait for standards to mature; MCP, goose, and AGENTS.md are production-ready today. MCP has official SDKs for Python and TypeScript with 97M+ monthly downloads. AGENTS.md is just markdown: add it to your repository root, and agents read it automatically. goose runs locally under the Apache 2.0 license, supporting any LLM you choose.
The AAIF governance is the safety net, not the starting point. Using these standards now protects you from vendor lock-in as the AI agent ecosystem evolves. The foundation ensures no single company can capture these protocols and turn them proprietary.
However, stay skeptical. Watch whether the AAIF drives genuine openness or merely manages perceptions. The real test isn’t what gets contributed—it’s what stays closed. If models, training data, and safety infrastructure remain locked away, the “open” in open standards rings hollow.
Key Takeaways
The good: Production-ready standards with broad adoption (10K+ MCP servers, 60K+ AGENTS.md repos) now have vendor-neutral governance protecting developers from lock-in.
The skepticism: Companies contributed already-popular tools while keeping competitive assets (models, training data) proprietary. This could be strategic positioning ahead of regulatory pressure.
The action: Start using MCP, goose, or AGENTS.md today. They work now, and AAIF governance ensures they won’t become proprietary.
The verdict: This is either the Linux moment for AI agents—or elaborate open-washing. Time will tell which. Meanwhile, use the standards, but don’t confuse open protocols with open AI.