Before mid-2025, AI coding assistants spoke different languages. Claude Code demanded CLAUDE.md. Google's Gemini wanted GEMINI.md. GitHub Copilot looked for .github/copilot-instructions.md. This fragmentation frustrated developers who switched between tools or collaborated across teams using different AI assistants. Then came AGENTS.md, a vendor-neutral standard that emerged this year from collaboration between OpenAI, Google, Cursor, and Sourcegraph. By August 2025, over 20,000 projects had adopted it; by December, that number passed 60,000. GitHub Copilot's official support announcement in August cemented AGENTS.md as the de facto industry standard for guiding AI coding agents.
This is a rare case where open standards beat vendor lock-in—and it happened fast. More than 60% of developers now use AI coding assistants, and AGENTS.md solves the portability problem with one file that works across 20+ different agents.
What AGENTS.md Is: A README for AI Agents
AGENTS.md functions as a “README for AI agents”—a single Markdown file that provides detailed context to help coding agents work effectively. Think build steps, testing workflows, and code conventions. It uses plain Markdown with no required fields, supports hierarchical structures in monorepos where subdirectory files override root configurations, and works across 20+ agents including Cursor, Copilot, OpenAI Codex, Google Jules, Devin, and more.
Real implementations show the simplicity. The official AGENTS.md repository shows developers including dev environment tips like `pnpm dlx turbo run where <project_name>`, testing instructions such as `pnpm turbo run test --filter <project_name>`, and code style guidance that prefers functional components (like `Projects.tsx`) over class-based patterns (like `Admin.tsx`). OpenAI's main repository alone contains 88 AGENTS.md files, showing enterprise-scale adoption in action.
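The hierarchical behavior is easy to picture. Below is a hypothetical monorepo layout (the `packages/web` and `packages/api` names are illustrative, not from any real repo); per the spec, the AGENTS.md closest to the file being edited takes precedence:

```text
repo-root/
├── AGENTS.md              # repo-wide defaults: build, test, style
└── packages/
    ├── web/
    │   └── AGENTS.md      # overrides the root file for the web package
    └── api/
        └── AGENTS.md      # overrides the root file for the api package
```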
The key differentiation from README.md is audience focus. READMEs serve humans with quick starts and contribution guidelines. AGENTS.md serves agents with technical precision. This separation keeps READMEs concise while giving AI assistants the detailed operational guidance they need to avoid mistakes.
How AGENTS.md Won: The Standardization Story
The fragmentation problem came to a head in May 2025 when developers voiced frustration online about maintaining separate instruction files for each AI tool. By June and July, collaborative standardization efforts began across OpenAI Codex, Sourcegraph’s Amp, Google’s Jules, Cursor, and Factory. The tipping point came in August when 20,000 open-source projects adopted the format, and GitHub Copilot officially announced support. By late 2025, adoption reached 60,000+ projects. The format is now stewarded by the Agentic AI Foundation under the Linux Foundation.
GitHub’s analysis of 2,500+ repositories reveals evolving best practices. Their team found the most effective AGENTS.md files “put executable commands early with full flags, give agents a clear persona with detailed operating manual, and start simple then iterate when agents make mistakes.” The Phoenix Framework now auto-generates AGENTS.md in new projects, signaling it’s becoming infrastructure rather than optional configuration.
This standardization story stands out. Unlike typical tech standards wars (USB-C took years, HDMI involved lawsuits), this one was resolved through collaboration in months. Developers refused vendor lock-in, and vendors listened. That rarely happens.
How to Create Effective AGENTS.md Files
Creating an effective AGENTS.md starts simple: commands, testing, code style. GitHub’s analysis of 2,500+ repositories shows key patterns. First, put executable commands early with full flags and options, not just tool names. Second, include concrete code examples showing good versus bad patterns. Third, explicitly state boundaries like “NEVER modify /legacy files” or “NEVER commit API keys.” Fourth, start minimal and add detail when agents make mistakes.
A minimal starter for new projects looks like this:
```markdown
# Commands
- `npm run dev` - Start dev server
- `npm test` - Run tests
- `npm run lint` - Check code style

# Testing
Run full test suite before committing: `npm test && npm run lint`
```
For complex production projects, scale up with explicit guidance:
```markdown
# Development Environment
- Node 18+ required, use pnpm not npm
- Run `pnpm install` in root, then `pnpm turbo run build`

# Code Style
- Prefer functional components with hooks (see `Projects.tsx`)
- Avoid class-based components (avoid `Admin.tsx` pattern)
- Forms: copy `app/components/DashForm.tsx`

# Security Rules
- NEVER commit API keys or secrets
- NEVER modify files in `/legacy` directory
```
The beauty is simplicity. No special syntax, no schema validation, just Markdown. Developers actually maintain files they can read and write easily. Starting minimal matches how teams actually work—add complexity only when needed, based on real agent behavior.
Why This Matters for Future AI Tooling
AGENTS.md’s success reveals lessons for future AI tool standards. Simplicity wins—Markdown beats rigid schemas. Open governance builds trust—Linux Foundation stewardship signals neutrality. Backwards compatibility eases adoption—agents still read legacy CLAUDE.md and GEMINI.md files during transition. Developer pain drives adoption—with 60%+ using AI assistants, fragmentation hurt enough to motivate change.
Community debate on Hacker News surfaces an interesting perspective. Some developers argue “fixing your README is more important than adopting any new standard” and that AGENTS.md is “less about helping AI agents and more about tricking developers into writing better documentation.” This critique actually reveals AGENTS.md’s hidden benefit. It’s a documentation standard that convinces even documentation-averse developers to document their code. If better docs happen as a side effect of supporting AI tools, everyone wins.
As AI tools proliferate beyond coding—design agents, documentation agents, testing agents—AGENTS.md provides a blueprint: developer-driven, vendor-neutral, stupidly simple. Future standards should follow this pattern. The alternative is repeating the fragmentation cycle with DESIGNERS.md, TESTING.md, and a dozen other vendor-specific variants.
One notable holdout remains: Anthropic’s Claude Code hasn’t officially adopted AGENTS.md yet, despite community pressure. The community expects adoption eventually. When 60,000+ projects use a standard and every major competitor supports it, holding out becomes expensive.
Key Takeaways
- AGENTS.md solves AI coding agent fragmentation with a vendor-neutral standard—one file that works across 20+ agents including Cursor, Copilot, OpenAI Codex, and Google Jules
- 60,000+ projects adopted it by late 2025, growing from zero to 20,000 in just three months (June-August), showing rapid community acceptance
- Start simple with commands, testing, and code style—iterate when agents make mistakes, following GitHub’s analysis of 2,500+ real repositories
- Migration is trivial: rename CLAUDE.md or GEMINI.md to AGENTS.md, then create symlinks for backwards compatibility with tools that haven't updated yet (see the sketch after this list)
- Open standards won this round: developer-driven collaboration beat vendor lock-in through simplicity (Markdown), open governance (Linux Foundation), and backwards compatibility
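To make that migration step concrete, here is a minimal shell sketch, assuming a Unix-like environment and a repo that currently uses CLAUDE.md (the same steps apply to GEMINI.md):

```bash
# Rename the existing instruction file to the standard name
git mv CLAUDE.md AGENTS.md

# Keep a symlink so tools that still look for CLAUDE.md find the same content
ln -s AGENTS.md CLAUDE.md
git add CLAUDE.md
git commit -m "Adopt AGENTS.md; keep CLAUDE.md symlink for compatibility"
```

Git tracks the symlink itself, so collaborators get the compatibility shim automatically on checkout.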
As AI coding assistants become standard tooling for most developers, AGENTS.md demonstrates how interoperability standards should work. The lesson extends beyond coding agents. Any AI tool category facing fragmentation should study this case: listen to developer pain, collaborate on simple solutions, avoid vendor lock-in, and let open standards win.