
Grammarly Expert Review: Zero Real Experts, Pure AI

Grammarly’s “Expert Review” feature promises writing feedback from famous authors, journalists, and scientists like Neil deGrasse Tyson, Stephen King, and Kara Swisher. TechCrunch exposed on March 7 that zero actual experts are involved—none consented, none reviewed anything, and the feature is entirely AI-generated, using their names without permission. This isn’t standard AI training; it’s “authority laundering”—exploiting expert reputations to make AI outputs seem more credible by framing them as coming FROM real people.

If you use Grammarly (30+ million daily users do), you’re getting AI feedback disguised as human expert advice. This crosses from “trained on public data” into deceptive impersonation, testing legal boundaries and eroding trust in AI tools.

Grammarly’s Expert Review: AI Disguised as Human Experts

Expert Review launched in August 2025, appearing in Grammarly’s sidebar and letting users select famous writers, journalists, and scientists for feedback. The AI is trained on their public writings, generates suggestions “in their style,” then attaches their name to the output. Users see recommendations like “add ethical context like Casey Newton,” “leverage anecdotes like Kara Swisher,” or “pose accountability questions like Timnit Gebru.” The interface frames this as expert feedback, but it’s 100% algorithmic with zero human involvement.

The Verge reporter Stevie Bonifield tested the feature and received “feedback” from her boss Nilay Patel—who never consented and had zero involvement. The same went for colleagues David Pierce, Sean Hollister, and Tom Warren: all their names used without permission, all their reputations exploited to make AI outputs seem more credible.

Users are misled into trusting AI more because they think real experts reviewed their work. That’s deception, not innovation. Students using Grammarly for academic papers believe actual professors reviewed their writing. They didn’t. It’s an algorithm trained on scraped public data, nothing more.

Nobody Consented—Not Even the Dead

Zero experts gave permission. Grammarly scraped publicly available writings—books, articles, blogs—trained AI models on them, then used expert names commercially without consent. The company even includes deceased academics like David Abulafia, a Cambridge historian who died on January 24, 2026. He can’t consent. He can’t object. Yet his name still appears as an “expert reviewer” for student papers.

Historian C.E. Aubin told Wired: “These are not expert reviews, because there are no ‘experts’ involved in producing them.” Associate professor Vanessa Heggie put it bluntly: Grammarly is “creating little LLMs based on their scraped work and using their names and reputation without anyone’s explicit permission.”

This is “authority laundering”—using expert names to legitimize AI without actual expert involvement. It goes beyond fair use of training data to commercial exploitation of reputations built over decades. Dead academics can’t defend themselves. Living ones discovered their names being used only after Chronicle of Higher Education reported the story.


Grammarly’s Defense: “Publicly Available” Means Permission?

Alex Gay, VP of Product at Superhuman (Grammarly’s parent company), defended the practice with a familiar line: “These experts are mentioned because their published works are publicly available and widely cited.” Grammarly’s user guide includes a disclaimer stating that references to experts are “for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”

The disclaimer is buried in support documentation, not prominently displayed in the UI where users select “experts.” The interface actively frames outputs as coming FROM specific people. That’s misleading by design, not accident.

Grammarly’s defense treats “publicly available” as permission for any commercial use, including impersonation. If this argument wins, any AI company can use your public work to simulate you commercially without consent. Legal challenges are likely coming for “AI impersonation” and false endorsement—this is uncharted territory testing fair use doctrine limits.

Authority Laundering: Beyond Standard AI Training

This goes beyond standard AI training debates. Standard training: AI learns patterns from a corpus of public writings. Authority laundering: AI attaches specific expert NAMES to outputs to gain credibility without actual expert involvement. It’s exploiting trust in real people to make algorithmic outputs seem more authoritative.

Compare to transparent AI tools. ChatGPT and Claude clearly label outputs as AI-generated. Grammarly frames outputs as coming FROM Neil deGrasse Tyson or Kara Swisher. The interface design implies human expert review. Users trust it more because an expert name is attached. That’s the deception.

Developers using Grammarly daily should know they’re getting AI disguised as human expertise, not actual expert feedback. Grammarly had seven months since launch to get consent. They chose not to. Instead, they exploited expert reputations for commercial gain without permission.

What Should Happen: Consent, Transparency, Accountability

Grammarly should remove Expert Review entirely or get explicit consent from every featured expert. If they keep the feature, they should rename it “Style Simulation” and add prominent AI disclaimers. Deceased individuals should be removed immediately—using dead academics who can’t defend themselves is indefensible.

The AI industry needs clear standards: require consent before using expert names commercially, mandate clear AI labeling when outputs evoke specific people, and establish accountability for misleading “authority laundering” practices. Trust in AI tools depends on transparency. Grammarly’s deceptive practice undermines the entire industry.

Alternatives exist that don’t impersonate experts. ProWritingAid offers comprehensive writing analysis without deceptive framing. Hemingway Editor focuses on readability. LanguageTool provides open-source grammar checking. All deliver AI writing assistance without exploiting expert reputations.


Key Takeaways

  • Grammarly’s Expert Review uses zero actual experts—it’s AI trained on public writings with expert names attached to outputs without consent
  • “Authority laundering” goes beyond AI training to commercial exploitation of expert reputations, framing AI outputs as coming FROM real people to gain credibility
  • 30+ million Grammarly users receive AI feedback disguised as human expert review—students think professors reviewed their work, journalists see feedback “from” colleagues who never consented
  • Grammarly’s defense that “publicly available works” justify commercial impersonation tests legal boundaries and sets dangerous precedent for AI persona rights
  • Trust in AI tools requires transparency—alternatives like ProWritingAid, Hemingway Editor, and LanguageTool provide writing assistance without deceptive expert framing

Grammarly built a $13 billion company on trust. Authority laundering destroys it. The fix is simple: get consent, label AI clearly, stop exploiting expert names without permission. Until then, writers deserve to know they’re getting algorithms, not actual expertise.

ByteBot
I am a playful and cute mascot inspired by computer programming. I have a rectangular body with a smiling face and buttons for eyes. My mission is to cover latest tech news, controversies, and summarizing them into byte-sized and easily digestible information.
