AI and Copyright: The Anthropic and Meta Cases Compared

In August 2024, authors Andrea Bartz, Charles Graeber, and Kirk Wallace Johnson sued Anthropic PBC, claiming the company used millions of copyrighted books to train Claude without permission. The case became one of the most closely watched AI copyright disputes in federal court. In June 2025, Judge Alsup held that using copyrighted books to train Claude was fair use, but found that downloading millions of books from shadow libraries such as LibGen violated copyright law.
On August 27, 2025, the parties reached a settlement agreement under which Anthropic agreed to pay $1.5 billion, covering approximately 500,000 works at roughly $3,000 per book, and to destroy all unauthorized copies obtained from shadow libraries. However, at the September 8 hearing, Judge Alsup delayed preliminary approval, questioning the settlement's transparency and indicating he might proceed to trial.
Different Approaches to Fair Use
The Anthropic case closely mirrors Kadrey v. Meta, in which thirteen authors sued Meta Platforms Inc. for downloading their copyrighted books from the same shadow libraries and using them to train Meta's Llama models without permission. Discovery revealed that Meta used datasets containing 7.5 million pirated books and 81 million research papers.
The judges applied different approaches to fair use doctrine. Judge Chhabria in Kadrey v. Meta treated downloading and training as parts of a single, transformative use. He concluded that Meta's acquisition of copyrighted works could not be separated from their ultimate purpose in developing Llama's capabilities.
Judge Alsup in Bartz v. Anthropic applied a more granular analysis. He distinguished between Anthropic's creation and storage of a permanent library of pirated works, and the later use of specific materials for AI training. Judge Alsup found that while training was transformative, building and maintaining a comprehensive collection of unlawfully obtained works was a separate, non-transformative use that could not be justified by the eventual training purpose.
Judge Chhabria noted that the authors failed to provide sufficient evidence of actual market damage, focusing instead on weaker arguments about potential reproduction and hypothetical licensing market disruption. In contrast, the Bartz v. Anthropic authors succeeded by identifying Anthropic's clear infringement in building its pirated library.
The Meta Decision's Narrow Scope
Judge Chhabria's decision was not a broad approval of unauthorized scraping. He stated that in most cases involving conduct like Meta's, such activities would likely be illegal. The court noted that copying protected works to train AI models without permission would usually be unlawful, because such copying creates products that can seriously harm the market for original works and reduce incentives for human creativity.
Judge Chhabria explained that his decision focused on the plaintiffs' weak arguments rather than approving the fair use defense. He had "no choice but to" grant summary judgment for Meta due to these deficiencies. Since the case was not a class action, the ruling only affected the thirteen named plaintiffs, not other authors whose works Meta used.
Most importantly, Judge Chhabria made clear that his ruling did not mean Meta's use of copyrighted materials to train its models was legal. Rather, it showed that the named plaintiffs presented the wrong arguments and failed to build a proper record supporting their claims. As this case illustrates, the rise of artificial intelligence, especially in the context of copyright law, presents complex challenges for the courts.