Journalist Julia Angwin has filed a class-action lawsuit against Grammarly over a now-disabled feature that generated editorial feedback while impersonating well-known writers and experts.
The feature, called “Expert Review,” promised users feedback similar to what they might hear from respected figures.
These included novelist Stephen King, scientist Carl Sagan, and technology journalist Kara Swisher.
However, the individuals whose names appeared in the system say they never granted permission.
The feature has triggered both legal action and a broader debate about identity rights.
Expert Review
Grammarly introduced the “Expert Review” feature as part of its premium subscription offering.
Users uploaded a document and received AI-generated feedback written as if it came from specific experts or public figures.
At first, the concept seemed appealing. Many writers would welcome detailed feedback from respected authors or journalists.
In reality, the tool simulated those voices without their approval, and criticism followed quickly.
Critics argued that Grammarly had used the names and reputations of real writers and experts for commercial benefit, without their consent.

Class-Action Lawsuit
One of the individuals included in the system was investigative reporter Julia Angwin. She is widely known for reporting on technology companies and digital privacy issues.
After learning that the tool used her name, Angwin filed a class-action lawsuit against Superhuman, the parent company behind Grammarly.
A class-action lawsuit allows other affected writers to join the case, meaning additional journalists and experts may participate if they believe their identities were used in the same way.
Angwin said she has spent decades refining her work as a writer and editor.
Discovering that a technology company had created an artificial version of her expertise, she said, felt deeply troubling.
According to her statement, the feature effectively sold a simulated version of her professional judgment.
For many years, Angwin has investigated how technology companies handle personal data and privacy. Her reporting often examines the impact of digital systems on individual rights.
Now she has become directly involved in a dispute over those same issues. Angwin was not the only AI critic included in the feature.
One example is Timnit Gebru, a leading voice in discussions about responsible AI development.
Generic Feedback
Beyond the legal concerns, some tests suggested that the tool did not provide particularly insightful feedback.
Technology journalist Casey Newton decided to test the feature himself. Newton runs the technology newsletter Platformer and also appeared among the experts listed in the tool.
He uploaded one of his articles to the system. The AI then produced feedback written as if it came from Kara Swisher.
However, the response sounded generic. The system suggested adding clearer comparisons between AI users and AI skeptics so readers could better follow the argument.
The advice was reasonable, yet it lacked the distinctive style or strong viewpoint that Swisher is known for.
Newton questioned the purpose of using real experts’ identities if the comments remained so general.
Kara Swisher
Newton later shared the AI-generated feedback with the real Kara Swisher. In a message to Newton, Swisher criticized the company.
She accused Grammarly of stealing both information and identity. Her remarks were quickly quoted and widely circulated online.
A writer’s voice and reputation represent years of professional effort; simulating that voice without permission is therefore a serious concern.
Grammarly
Soon after, Grammarly disabled the “Expert Review” feature. CEO Shishir Mehrotra confirmed the decision in a LinkedIn post and apologized for the problems the tool created.
However, Mehrotra continued to defend the concept behind the feature. He suggested that AI could eventually provide expert-style guidance to millions of users.
The Rationale
According to Mehrotra, the idea behind the feature was to replicate the kind of advice people receive from mentors or experienced professionals.
In theory, such tools could give high-quality feedback, but using real identities without consent still crosses an ethical line.