
Anthropic Reaches Settlement With Authors In AI Training Dispute

Updated: August 27, 2025

Reading Time: 2 minutes

Anthropic has settled a class action lawsuit with a group of authors who accused the company of using their books to train large language models without permission. 

The case, known as Bartz v. Anthropic, had drawn wide attention in both the publishing and technology industries. 

Although the settlement was disclosed in a Tuesday filing with the Ninth Circuit Court of Appeals, its terms remain confidential.

Anthropic has not issued a public statement.

Image Credit: Anthropic

Fair Use

Central to the lawsuit was the scope of fair use. Anthropic argued that relying on copyrighted works to train its models fell within legal bounds.

However, the authors alleged that Anthropic used pirated copies of fiction and nonfiction books. They claimed this was both unlawful and harmful to their livelihoods.

The lower court ruling gave a split outcome. The court held that training AI models with books qualified as fair use.

However, because many of the works had been obtained from unauthorized sources, Anthropic faced potential financial penalties. Both sides had prepared to appeal. 

Authors’ Concerns

Many writers fear that their creative work, often produced over years, is being reduced to training material for systems that generate text in seconds.

Some authors describe it as seeing “entire novels copied and recycled by machines.” These anxieties have fueled similar lawsuits against other AI developers in recent months.

AI Training

Anthropic welcomed the earlier ruling, calling it a validation of its methods. In June, the company told NPR that its only purpose in acquiring books was to train language models. 

The court had backed that claim, agreeing that the training itself was fair use. Even so, the case highlights the risks of relying on pirated or unauthorized sources.

While the principle of fair use may shield certain practices, it does not protect against claims tied to the origins of the data. Other AI companies are likely to take note. 

Lolade

Contributor & AI Expert