Judge rules Anthropic’s book scanning for AI training is fair use

Anthropic has scored a significant legal victory in an AI copyright case, with a federal judge ruling that training AI models on legally purchased books constitutes fair use. However, the company still faces a separate trial for allegedly pirating millions of books from the internet, creating a mixed outcome that could shape future AI copyright litigation.

The big picture: Judge William Alsup of the Northern District of California delivered a first-of-its-kind ruling favoring the AI industry, but with important limitations that distinguish between legitimate and illegitimate training practices.

What you should know: The ruling specifically covers Anthropic’s practice of purchasing physical books, digitizing them, and using those copies for AI training.

  • Anthropic physically purchased books, removed their bindings, cut the pages, and scanned them into a centralized digital library for training Claude AI models.
  • The judge determined that digitizing legally purchased books was fair use, and using those digital copies for LLM training was “sufficiently transformative.”
  • The decision does not address whether AI model outputs themselves infringe copyrights, which remains at issue in other related cases.

Why this matters: This ruling establishes a legal precedent that could influence how courts handle the growing number of AI copyright cases, while drawing clear boundaries around acceptable training practices.

The legal reasoning: Judge Alsup compared AI training to traditional education, arguing that copyright law should encourage competition rather than protect authors from it.

  • “Authors’ complaint is no different than it would be if they complained that training schoolchildren to write well would result in an explosion of competing works,” Judge Alsup writes.
  • He added that the Copyright Act “seeks to advance original works of authorship, not to protect authors against competition.”

Where Anthropic still faces trouble: The judge ruled that storing millions of pirated book copies—even if unused for training—does not qualify for fair use protection.

  • “This order doubts that any accused infringer could ever meet its burden of explaining why downloading source copies from pirate sites that it could have purchased or otherwise accessed lawfully was itself reasonably necessary to any subsequent fair use,” Alsup writes.
  • A separate trial will determine damages related to the pirated content allegations.

The case background: Writers Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic last year, claiming the company trained its Claude AI models on pirated material without permission.

