
Comment by omnimus

6 days ago

I am not sure why you would think so. AFAIK we will see more of what courts think later in 2025, but judging from what was ruled in Delaware in Feb... it is actually very likely that LLMs' use of material is not "fair use", because besides how "transformative" the work is, one important part of "fair use" is that the output does not compete with the initial work. LLMs not only compete... they are specifically sold as a replacement for the work they have been trained on.

This is why the lobby is now pushing governments not to allow any regulation of AI, even if courts disagree.

IMHO what will happen anyway is that at some point the companies will "solve" the licensing by training models purely on older synthetic LLM output released as "public research" (which of course will still carry the "human" weights, but they will claim that doesn't matter).

What you are describing is the output of the LLM, not the model itself. Can you link to a case where a model itself was determined to be infringing?

It’s important to note that copyright applies to copying/publishing/distributing - you can do whatever you want with copyrighted works by yourself.

  • I don’t follow. The artists are obviously complaining about the output that LLMs create. If you create an LLM and don’t use it, then yeah, nobody would have a problem with it, because nobody would know about it…

    • In that case, public services can continue trying to fine-tune outputs so they don’t generate anything infringing. They can train on any material they want.

      Of course, that still won’t make artists happy, because they think things like styles can be copyrighted, which isn’t true.
