Comment by pavon

6 months ago

The judge did use some language that analogized the training with human learning. I don't read it as basing the legal judgement on anthropomorphizing the LLM, though, but rather as reasoning that if it would be legal for a human to do the same thing, then it is legal for a human to use a computer to do so.

  First, Authors argue that using works to train Claude’s underlying LLMs was like using
  works to train any person to read and write, so Authors should be able to exclude Anthropic
  from this use (Opp. 16). But Authors cannot rightly exclude anyone from using their works for
  training or learning as such. Everyone reads texts, too, then writes new texts. They may need
  to pay for getting their hands on a text in the first instance. But to make anyone pay
  specifically for the use of a book each time they read it, each time they recall it from memory,
  each time they later draw upon it when writing new things in new ways would be unthinkable.
  For centuries, we have read and re-read books. We have admired, memorized, and internalized
  their sweeping themes, their substantive points, and their stylistic solutions to recurring writing
  problems.

  ...

  In short, the purpose and character of using copyrighted works to train LLMs to generate
  new text was quintessentially transformative. Like any reader aspiring to be a writer,
  Anthropic’s LLMs trained upon works not to race ahead and replicate or supplant them — but
  to turn a hard corner and create something different. If this training process reasonably
  required making copies within the LLM or otherwise, those copies were engaged in a
  transformative use.

[1] https://authorsguild.org/app/uploads/2025/06/gov.uscourts.ca...

Yeah, I see the point, but I still think there is a difference between human learning and machine learning when it comes to creativity; see my post above connected to the parent.