“Writing a good CLAUDE.md” trended on Hacker News this week as developers create special CLAUDE.md files that give AI coding assistants project-specific context. This marks a fundamental shift from documentation written for human onboarding to documentation optimized for machine consumption, part of the broader movement from “vibe coding” to context engineering that reshaped software development in 2025. The question isn’t whether this is happening. It’s whether optimizing codebases for AI rather than humans produces better engineering or just deeper dependency.
From Vibe Coding to Context Engineering
In February 2025, Andrej Karpathy coined “vibe coding,” and it took the industry by storm. By November 2025, that same industry had shifted from loose experimentation to systematic “context engineering”: managing what data, knowledge, tools, and structure AI models receive. Where prompt engineering focuses on how to ask questions, context engineering focuses on what environment the model is given.
The shift delivered concrete results. Organizations implementing context engineering see development cycles shorten by 55% and bug rates drop 40%, according to MIT Technology Review analysis. The transition reveals that AI-assisted development matured from experimental to production-critical this year. Developers who don’t understand context engineering are falling behind teams that optimize codebases for AI consumption.
Modern approaches favor “just in time” context strategies. Rather than pre-processing all data upfront, systems maintain lightweight identifiers—file paths, stored queries, web links—and dynamically load context at runtime. Anthropic’s Claude Code uses this for complex database analysis, writing targeted queries and leveraging Bash commands without loading full data objects into context. This is systematic optimization, not vibes.
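A minimal sketch of that pattern in Python (the helper and reference names here are illustrative, not Anthropic’s implementation): the context carries only lightweight identifiers, and a loader resolves them to content on demand.

```python
from pathlib import Path

# Sketch of just-in-time context loading. The context window holds only
# lightweight identifiers; content is resolved at runtime, when needed.
CONTEXT_REFS = {
    "schema": Path("db/schema.sql"),
    "api_routes": Path("app/routes.py"),
    "style_guide": Path("docs/style.md"),
}

def load_ref(name: str, max_bytes: int = 4_000) -> str:
    """Resolve an identifier to file content, capped to respect a budget."""
    path = CONTEXT_REFS[name]
    if not path.exists():
        return f"[missing: {path}]"
    # Crude byte cap; a real system would count tokens instead.
    return path.read_text(encoding="utf-8")[:max_bytes]

# The prompt carries only the identifiers...
prompt_context = "Available references: " + ", ".join(CONTEXT_REFS)

# ...and a reference is expanded only when the task actually calls for it.
schema_excerpt = load_ref("schema")
```

The design choice is the point: a stale or irrelevant file costs nothing until it is explicitly pulled in, so the token budget goes to what the current task needs.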
Inside CLAUDE.md: Project Context for AI
CLAUDE.md is a Markdown file Claude Code automatically ingests before starting work, providing persistent project context across sessions. It functions as version-controlled configuration defining code style, architectural decisions, common commands, and non-obvious patterns. The hierarchy lets developers layer context: a file in the home directory (~/.claude/CLAUDE.md) sets global preferences, one at the project root sets team standards, and files in subdirectories add module-specific context.
Best practices emphasize conciseness. Keep files to roughly 100-200 lines, since every line spends token budget. Use bullet points rather than narrative paragraphs so the structure stays machine-parseable. The /init command auto-generates a starter CLAUDE.md from codebase analysis, and Markdown files in .claude/commands/ become custom slash commands. The community wisdom: “If you’re repeating instructions across sessions, it belongs in CLAUDE.md.”
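That command mechanism is worth a concrete look. A hypothetical example (the file name and workflow steps are invented): a Markdown file dropped into .claude/commands/ becomes a slash command, with $ARGUMENTS standing in for whatever the developer types after it.

```markdown
<!-- .claude/commands/fix-issue.md -->
Find and fix issue #$ARGUMENTS.

1. Reproduce the bug locally from the issue description.
2. Write a failing test before touching the implementation.
3. Implement the fix, following the style rules in CLAUDE.md.
4. Run the full test suite and summarize what changed.
```

Invoked as something like /fix-issue 123, the file’s contents become the prompt, with 123 substituted for $ARGUMENTS.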
Here’s what a minimal CLAUDE.md looks like:
```markdown
# Project: API Gateway

## Tech Stack
- Python 3.11+, FastAPI, PostgreSQL
- Docker for local development

## Code Style
- Use f-strings for formatting
- Max line length: 100 chars
- Prefer explicit over implicit

## Common Pitfalls
- Database connections need connection pooling
- Rate limiting applies per endpoint, not globally
```
This amounts to developers admitting that the traditional README.md wasn’t serving its purpose. If you need a separate file for AI to understand your project, either the AI is incompetent or the documentation was always ambiguous and poorly structured. The evidence suggests the latter. CLAUDE.md forces clarity through conciseness and structure that benefits both AI and humans: it isn’t replacing documentation, it’s revealing documentation’s flaws.
The Model Collapse Threat
Research published in Nature in July 2024 demonstrates “model collapse”: AI models trained on AI-generated content develop irreversible defects and forget the true data distribution. The process has two stages. Early collapse loses minority data from the distribution’s tails; late collapse brings significant performance loss and concept confusion. If AI writes the CLAUDE.md files that train next-generation models, documentation quality enters a death spiral.
The mechanism is straightforward: errors in one model’s output get included in training its successor, compounding with each generation. As the study warns, “As more AI-generated content proliferates across the web, it might be used to train future models instead of human-generated data, potentially precipitating widespread issues.” Prevention requires tracking data provenance, preserving original sources, and combining AI data with real data. Training remains stable if the initial model is well-trained and a sizeable chunk of data is real, not synthetic.
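The tail-loss dynamic is easy to demonstrate with a toy simulation (my construction, not the Nature paper’s setup): fit a Gaussian to a sample, then refit each new “generation” on samples drawn from the previous fit. The estimated spread tends to drift toward zero, which is exactly the early-collapse loss of tails.

```python
import random
import statistics

# Toy model-collapse sketch: each "generation" is a Gaussian fit trained
# only on samples from the previous generation's model. Refitting on
# purely synthetic data tends to shrink the estimated spread over time,
# so the distribution's tails are the first thing to disappear.
random.seed(0)
mu, sigma = 0.0, 1.0   # generation 0: the real data distribution
n = 25                 # small samples per generation exaggerate the effect

for gen in range(1, 13):
    samples = [random.gauss(mu, sigma) for _ in range(n)]
    mu, sigma = statistics.fmean(samples), statistics.stdev(samples)
    print(f"gen {gen:2d}: mu={mu:+.3f}  sigma={sigma:.3f}")
```

Mixing a sizeable share of generation-0 samples into each refit damps the drift, which is the code-level intuition behind keeping real data in the training mix.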
Related: Pre-AI Search Tools Filter Out ChatGPT Era Content
This connects to a parallel trend: developers already filter out post-ChatGPT web content because its quality degraded. If CLAUDE.md files become AI-generated (ironically, by Claude itself), we recreate the same problem in code documentation. The solution isn’t rejecting AI assistance; it’s maintaining human oversight and data-provenance tracking. Otherwise documentation quality collapses the same way search results did.
Machine-First, Human-Second?
The trend extends beyond CLAUDE.md to broader “AI-native development,” where codebases are optimized for machine readability. Mintlify’s platform bills itself as “AI-native, built for collaboration,” integrating AI into how knowledge is “written, maintained, and understood by both users and LLMs.” The formats that work best for AI are Markdown, HTML, and XML for text, plus CSV and JSON for structured data; the common characteristics are clear structure, semantic richness, frequent headings, bulleted lists, defined terms, and metadata.
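As a hypothetical fragment showing those characteristics in practice (the service name and numbers are invented): metadata up front, a heading per topic, a defined term, and bullets that a parser or a person can scan equally well.

```markdown
---
service: payments-api   # metadata a machine can key on
owner: platform-team
status: production
---

## Rate Limits

- **Definition:** requests allowed per client per minute
- Default: 60 req/min per API key
- Burst: up to 120 req/min for 30 seconds
- Exceeding the limit returns HTTP 429 with a `Retry-After` header
```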
ReadmeAI tools auto-generate documentation from codebase analysis, “freeing valuable developer time,” and tailor docs to user roles: junior and senior engineers get different onboarding guides. Proponents argue that improving README.md for AI inherently supports both human developers and AI agents. The counterargument: this creates a two-tier system where knowledge is AI-accessible but not human-accessible.
Related: Vibe Coding Hangover: 10.3% of Apps Vulnerable in 2025
This raises an uncomfortable question: Are we optimizing for our tools or our teams? If documentation only makes sense when fed through Claude Code, what happens when the tool changes, the company switches AI assistants, or junior developers without AI access join? The philosophical shift from “explain to humans” to “feed to machines” might improve short-term productivity while creating long-term knowledge accessibility problems.
Documentation Was Always Broken
Here’s the controversial take: CLAUDE.md isn’t a problem; it’s proof that traditional documentation failed. If AI can’t parse ambiguous, verbose README files, neither can humans. They just don’t admit it. Context engineering’s emphasis on conciseness (100-200 lines), structure (bullet points, clear headings), and specificity (project-specific patterns, not generic advice) produces better docs regardless of AI. The need for machine-readable documentation exposes that human-readable documentation was often human-incomprehensible.
Consider the best practice: “Short, declarative bullet points rather than long narrative paragraphs.” That improves clarity for everyone, not just AI. The community wisdom, “If you find yourself repeating the same instructions across sessions, it belongs in CLAUDE.md,” forces documentation to become living, version-controlled, testable infrastructure rather than an afterthought. CLAUDE.md is like a .gitignore for AI: a natural evolution of tooling, not a concerning dependency.
Those same requirements create documentation humans can actually use; CLAUDE.md might be the first time many projects have had clear, maintained, useful docs. The controversy isn’t AI ruining documentation; it’s AI exposing that documentation was already ruined.
Key Takeaways
- Context engineering matured from Karpathy’s “vibe coding” to systematic practice in 2025, with organizations seeing 55% faster development cycles and 40% fewer bugs through proper AI context management
- CLAUDE.md files force documentation clarity through conciseness (100-200 lines), structure (bullet points), and project-specific patterns—improvements that benefit humans as much as AI
- Model collapse threatens documentation quality if AI trains on AI-written content; solution requires tracking data provenance and maintaining human oversight, not rejecting AI assistance
- AI-native development optimizes for machine readability, raising questions about whether we’re serving our tools or our teams—documentation must remain accessible to both AI and humans
- The real lesson: If AI can’t parse your documentation, it probably wasn’t clear for humans either. Context engineering exposes that traditional README files were often ambiguous and poorly maintained.
Context engineering is here to stay, and documentation is infrastructure, not an afterthought. The balance to strike is AI optimization that also serves humans: track data provenance to avoid model collapse, and fix documentation properly instead of blaming the tools that expose its flaws.