The AI writing tool offers feedback “in the style of” well-known thinkers and journalists, raising questions about attribution, credibility, and transparency.
Grammarly’s newest AI feature promises feedback inspired by some of the world’s most influential writers and thinkers.
But there’s one catch: those experts aren’t actually involved.
The feature, called Expert Review, suggests revisions to users’ writing “from the perspective” of prominent figures—ranging from academics to journalists. Critics say the result blurs the line between AI stylistic imitation and genuine expert commentary.
How Grammarly’s Expert Review Works
Introduced in August 2025, the feature appears in the sidebar of Grammarly’s writing assistant.
Users can request feedback framed as guidance from well-known voices.
The AI’s suggested revisions reportedly include prompts such as:
- Add ethical context “like Casey Newton”
- Use anecdotal storytelling “like Kara Swisher”
- Ask broader accountability questions “like Timnit Gebru”
According to reporting from Wired and The Verge, the system sometimes references journalists from publications including:
- The Verge
- Wired
- Bloomberg
- The New York Times
But the people mentioned are not actually participating in the feature.
No Permission From the “Experts”
None of the writers or thinkers referenced appear to have approved the use of their names in the system.
Grammarly’s parent company, Superhuman, says the names appear simply because those individuals’ work is widely cited and publicly available.
Alex Gay, vice president of product and corporate marketing, explained that the system draws on publicly available writing and ideas when generating suggestions.
The company also includes a disclaimer in its user documentation.
“References to experts in Expert Review are for informational purposes only and do not indicate any affiliation with Grammarly or endorsement by those individuals or entities.”
Critics Say the Name Is Misleading
Even with the disclaimer, critics argue the feature’s branding could mislead users.
Historian C.E. Aubin summed up the concern in comments to Wired.
“These are not expert reviews, because there are no ‘experts’ involved in producing them.”
In other words, the feature may be closer to AI-generated stylistic mimicry than actual editorial feedback.
The Broader Debate Around AI Attribution
Grammarly’s feature touches on a growing issue across AI tools: how models reference or emulate real people.
Companies increasingly train systems on massive corpora of publicly available writing.
That creates difficult questions:
- When does inspiration become implied endorsement?
- Should AI tools be allowed to invoke real names in generated advice?
- How should companies credit or compensate creators whose work shapes AI outputs?
These debates echo similar controversies around AI-generated art, music, and journalism.
A Marketing Problem as Much as a Technical One
At its core, Grammarly’s feature may be less about technological capability and more about presentation.
AI systems can plausibly summarize rhetorical styles, such as an emphasis on ethical framing or narrative storytelling.
But presenting that feedback as coming “from” specific experts risks creating the impression of direct involvement or authority.
And for a tool marketed around writing credibility, that distinction matters.
TL;DR:
Grammarly’s Expert Review feature offers writing suggestions framed as advice from well-known journalists and thinkers—but none of those individuals are actually involved. Critics argue the feature is misleading, since it relies on AI-generated interpretations of public writing rather than real expert input.
AI Summary:
- Grammarly introduced Expert Review in August 2025.
- AI suggestions are framed as coming from well-known writers and thinkers.
- The individuals referenced are not involved in the feature.
- Grammarly says the names reflect publicly available work and citations.
- Critics argue the feature is misleading without real expert participation.