A viral Hacker News post this week delivered a message tech companies need to hear: you fired your technical writers because AI could do it cheaper. Now your AI tools produce slop, developers waste hours debugging “almost right” code, and you’re paying engineers $92/hour to write Claude Skills, which is technical writing under a different name. You didn’t eliminate the work. You made it worse and more expensive.
Fabrizio Ferri Benedetti’s January 12 article “To Those Who Fired Tech Writers Because of AI” hit the Hacker News front page with 206 points and 128 comments. Companies like Canva eliminated technical writing roles throughout 2025 as “internal teams started using generative AI for documentation.” The consequences emerged in 2026. This isn’t sentimental job defense; it’s about understanding what technical writers actually did (context curation) and why eliminating that function broke your AI tools.
What Tech Writers Really Do
Technical writers aren’t “people who write docs”—they’re context curators creating the structured, semantic foundation that makes AI tools work.
When you fired tech writers, you eliminated the people who decide what to document and what to exclude, structure information hierarchically, capture edge cases and nuance, and maintain consistency. The work didn’t disappear.
Claude Skills require structured markdown, naming conventions, and semantic organization; each Skill costs just 30-50 tokens of context until invoked, thanks to progressive disclosure. That demands information architecture and clear technical communication, in other words, technical writing skills. Cursor rules need MDC format with YAML frontmatter, documented coding standards, and consistent structure. This is documentation work.
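For illustration, a minimal Cursor rule in MDC format might look like the sketch below. The frontmatter keys follow Cursor’s documented format; the glob path and the rule text itself are hypothetical examples, not anyone’s real conventions:

```markdown
---
description: Error-handling conventions for the API layer
globs: src/api/**/*.ts
alwaysApply: false
---

- Wrap external calls in the shared `Result` type; never throw across module boundaries.
- Log errors with a correlation ID before returning them to the caller.
- Document any new error code in docs/errors.md in the same PR.
```

Notice what this actually is: a scoped document with frontmatter metadata, a consistent structure, and carefully chosen rules. Deciding what goes in it, and what stays out, is the curation work the article is describing.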
RAG systems expose the dependency. Research shows “+40.5 EM points improvement from graph curation”: the bottleneck isn’t semantic understanding, it’s “the lack of structured reasoning frameworks.” AI needs humans to organize information logically. RAG requires semantic tagging, summaries for similarity search, and graph operations that create “curated context.” All of it is technical writing work.
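What “curated context” means in practice can be sketched in a few lines. In this toy example (all names hypothetical, and a trivial word-overlap score standing in for real embeddings), each chunk carries a human-written summary and human-assigned semantic tags, and retrieval matches the query against that curated layer rather than the raw text:

```python
# Toy sketch of curated context for RAG. A real system would use embeddings;
# the point is that retrieval quality rides on the human-written metadata.
from dataclasses import dataclass, field

@dataclass
class Chunk:
    text: str                # raw source content
    summary: str             # human-written, one sentence
    tags: set[str] = field(default_factory=set)  # human-assigned tags

def score(chunk: Chunk, query_terms: set[str]) -> float:
    # Relevance = overlap between the query and the curated summary/tags.
    summary_terms = set(chunk.summary.lower().split())
    overlap = query_terms & (summary_terms | chunk.tags)
    return len(overlap) / max(len(query_terms), 1)

def retrieve(chunks: list[Chunk], query: str, k: int = 1) -> list[Chunk]:
    terms = set(query.lower().split())
    return sorted(chunks, key=lambda c: score(c, terms), reverse=True)[:k]

corpus = [
    Chunk("...long auth setup guide...", "How to configure OAuth login",
          {"auth", "oauth", "setup"}),
    Chunk("...long billing reference...", "Billing API error codes",
          {"billing", "errors", "api"}),
]

best = retrieve(corpus, "oauth setup")[0]
# best.summary == "How to configure OAuth login"
```

Swap in sloppy summaries and inconsistent tags and the same pipeline returns the wrong chunk. The model didn’t change; the curation did.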
Benedetti states plainly: “Work like Claude Skills, Cursor rules, all the semantic tagging that makes RAG work, is technical writing under a new name: context curation.” You fired the specialists, then wondered why quality collapsed.
The Developer Pain
Developers spend more time debugging “almost right” AI code than writing from scratch.
Stack Overflow’s 2025 survey: 45% cite “almost right” AI solutions as their #1 frustration. 66% spend more time fixing AI code than they save. Experienced developers took 19% more time on tasks using AI, despite expecting 24% gains. Over half said they’d be faster writing from scratch for complex work.
Debugging AI output consumes 8-12 hours per developer monthly. At $92/hour: 10 hours per month across 50 developers = $46,000 monthly, $552,000 annually. You saved $1M eliminating 10 writers at $100K each. You lost $500K+ a year in debugging time alone, plus the costs in code quality, shipping speed, and security.
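The arithmetic behind those figures, as a quick check (all inputs are the article’s own numbers, using the 10-hour midpoint of the 8-12 hour range):

```python
# Back-of-the-envelope check of the figures above.
HOURLY_RATE = 92      # fully loaded engineer cost, $/hour
DEBUG_HOURS = 10      # hours/month per developer debugging AI code
DEVELOPERS = 50

monthly_cost = HOURLY_RATE * DEBUG_HOURS * DEVELOPERS  # $46,000/month
annual_cost = monthly_cost * 12                        # $552,000/year

writers_saved = 10 * 100_000               # $1M/year saved cutting 10 writers
net_savings = writers_saved - annual_cost  # $448,000, before quality costs
```

Even on paper, the debugging bill eats more than half the salary savings, and that’s before counting the quality, speed, and security costs that don’t show up on a timesheet.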
The “70% problem”: AI produces the first 70% of working code fast, then struggles with the final 30%, the demo-to-production gap. That last 30% takes longer to debug than writing the code correctly from the start would have.
Trust collapses despite rising adoption. Only 29% trust AI accuracy (down from 40%). 46% actively distrust AI (up from 31%). Yet 84% use it daily. Everyone uses AI. Nobody believes it.
AI Needs Human Context
The irony: companies adopted AI to replace writers, but AI needs high-quality human-curated context. You fired the context curators and broke your tools.
ChatGPT gets 52% of Stack Overflow answers wrong. Over 50 ICLR 2026 papers included hallucinated citations, each reviewed by 3-5 experts who missed the fakes. Google hired “AI Answers Quality” engineers in January 2026 because AI Overviews hallucinate and self-contradict.
GPT-4 hallucinates at roughly a 3% rate when given quality context. In specialized domains with poor context, the rate climbs to 60-80%. The difference isn’t the model; it’s the quality of the human-curated input.
“LLMs generate more hallucinations when trained on incomplete, biased, or low-quality datasets.” The bottleneck isn’t AI understanding—it’s absence of structured frameworks humans create.
Tom Johnson (Amazon technical writer): AI augments output by “approximately 1.5×” with skilled writers. Not 10×. Not replacement. 1.5× when humans validate and maintain oversight. Without skilled curators, AI amplifies garbage.
The Hidden Reassignment
You didn’t eliminate technical writing. You reassigned it to developers at higher cost who are worse at it.
Developers now write Claude Skills (structured markdown, semantic hierarchy, progressive disclosure), Cursor rules (documentation standards, YAML frontmatter), and RAG semantic tagging. Technical writing work under different names.
Compare the cost: a technical writer earns $100K and is trained in information architecture and documentation strategy. A developer earns $100-200K plus $92/hour in opportunity cost, is untrained in technical communication, and prefers coding over documenting.
You eliminated specialized roles, then paid generalists to do specialized work badly. Classic false economy. You shifted costs to more expensive, less effective functions.
Benedetti: “You fired the people who create high-quality context and wondered why your AI tools produce slop.” The work didn’t disappear. It just stopped being done well.
What You Should Have Done
The answer: augmentation, not elimination. Writers using AI tools deliver 1.5× output with quality oversight. That’s the actual ROI.
The model that works: Human-in-the-Loop. Writers manage process, AI generates drafts and handles routine work, humans validate and ensure coherence. “AI is a standard tool requiring technical writers to provide oversight, validation, and structure.”
Writers maintain strategy (what to document, what to exclude, level of detail). AI handles first drafts and routine updates. Human oversight catches hallucinations and maintains semantic structure.
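That division of labor can be sketched as a pipeline. Everything here is illustrative: `generate_draft` stands in for any LLM call, and the stub functions exist only to show the flow. The structural point is that the human review step is mandatory, not optional:

```python
# Hypothetical sketch of the human-in-the-loop model described above.
from typing import Callable

def hitl_update(source_change: str,
                generate_draft: Callable[[str], str],
                human_review: Callable[[str], tuple[bool, str]]) -> str:
    """AI drafts; a writer validates and revises; nothing ships unreviewed."""
    draft = generate_draft(source_change)
    approved, revised = human_review(draft)  # writer catches hallucinations,
    if not approved:                         # enforces structure and strategy
        raise ValueError("Draft rejected; route back to the writer")
    return revised

# Stub AI and stub reviewer, just to show the flow:
def fake_llm(change: str) -> str:
    return f"Docs for: {change}"

def fake_review(draft: str) -> tuple[bool, str]:
    return True, draft + " (verified)"

result = hitl_update("new /billing endpoint", fake_llm, fake_review)
# result == "Docs for: new /billing endpoint (verified)"
```

Remove the `human_review` gate and you have the failure mode the article describes: generated docs with no verification, shipped straight to users.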
Users expect AI features—conversational interfaces, intelligent search. Someone builds and curates that context. Writers with AI tools deliver features while preventing downstream productivity loss.
What fails: firing writers and assuming AI replaces them, expecting developers to document in spare time, generating docs with no verification.
You could have had 1.5× output with AI-augmented writers. Instead you 0.5×’d it by eliminating the quality curators. Augment, don’t eliminate. If you’ve already cut, hire the specialized roles back or train developers in documentation.
The Lesson
AI amplifies the need for quality human input. “Documentation requires empathy and care—qualities exclusive to humans. Products need signal, not noise.”
Context curation, semantic structure, information architecture, documentation strategy—not optional overhead, but foundations making AI productive. If you don’t pay technical writers professionally, you pay developers more to do it badly.
The viral post resonated because it named what developers experienced: eliminating writers didn’t eliminate work. It fragmented work across teams, degraded quality, increased costs, and broke the AI tools justifying the layoffs.
This isn’t sentimentality. It’s understanding dependencies, calculating true costs, making sound decisions. Companies recognizing this maintain advantage through quality documentation, effective AI, productive developers. Those ignoring it pay hidden costs of “savings” never realized.










