Comment by alphan0n
6 months ago
Simply put, if the model isn’t producing an actual copy, they aren’t violating copyright (in the US) under any current definition.
As much as people bandy the term around, copyright has never applied to input, and the output of a tool is the responsibility of the end user.
If I use a copy machine to reproduce your copyrighted work, I am responsible for that infringement, not Xerox.
If I coax your copyrighted work out of my phone's keyboard suggestion engine letter by letter, and publish it, it's still me infringing on your copyright, not Apple.
If I make a copy of your clip art in Illustrator, is Adobe responsible? Etc.
Even if (as I've seen argued ad nauseam) a model was trained on copyrighted works from a piracy website, the copyright holder's claim would be against the source of the infringing distribution, not the people who read the material.
Not to mention, I can walk into any public library and learn something from any book there. Would I then owe the authors of the books I learned from a fee to apply that knowledge?
> the copyright holder’s tort would be with the source of the infringing distribution, not the people who read the material.
Someone who just reads the material doesn't infringe. But someone who copies it, or prepares works that are derivative of it (which can happen even if they don't copy a single word or phrase literally), does.
> would I then owe the authors of the books I learned from a fee to apply that knowledge?
Facts can't be copyrighted, so applying the facts you learned is free, but creative works are generally copyrighted. If you write your own book inspired by a book you read, that can be copyright infringement (see The Wind Done Gone). If you use even a tiny fragment of someone else's work in your own, even if not consciously, that can be copyright infringement (see My Sweet Lord).
Right, but the onus of responsibility is on the end user who publishes the song or creative work in violation of copyright, not on the text editor, word processor, musical notation software, etc., correct?
A text prediction tool isn't a person; the data it is trained on is irrelevant to the copyright infringement perpetrated by the end user. The end user should perform due diligence to prevent liability.
> A text prediction tool isn’t a person, the data it is trained on is irrelevant to the copyright infringement perpetrated by the end user. They should perform due diligence to prevent liability.
Huh what? If a program "predicts" some data that is a derivative work of some copyrighted work (that the end user did not input), then ipso facto the tool itself is a derivative work of that copyrighted work, and illegal to distribute without permission. (Does that mean it's also illegal to publish and redistribute the brain of a human who's memorised a copyrighted work? Probably. I don't have a problem with that). How can it possibly be the user's responsibility when the user has never seen the copyrighted work being infringed on, only the software maker has?
And if you say that OpenAI isn't distributing their program but just offering it as a service, then we're back to the original situation: in that case OpenAI is illegally distributing derivative works of copyrighted works without permission. It's not even a YouTube like situation where some user uploaded the copyrighted work and they're just distributing it; OpenAI added the pirated books themselves.
Those software tools don't generate content the way an LLM does so they aren't particularly relevant.
It's more like if I hire a firm to write a book for me and they produce a derivative work. Both of us have a responsibility to guard against that.
Unfortunately there is no definitive way to tell if something is sufficiently transformative or not. It's going to come down to the subjective opinion of a court.
How is the end user the one doing the infringement, though? If I chat with ChatGPT and tell it "give me the first chapter of book XYZ" and it gives me the text of the first chapter, OpenAI is distributing a copyrighted work without permission.
> As much as people bandy the term around, copyright has never applied to input, and the output of a tool is the responsibility of the end user.
Where this breaks down, though, is that contributory infringement is still a thing if you offer a service that aids in copyright infringement and you don't do "enough" to stop it.
I.e., it would all be on the end user for folks who self-host or rent hardware and run an LLM or generative art model themselves. But folks who offer a consumer-level end-to-end service like ChatGPT or MidJourney could be on the hook.
Right. Strictly speaking, copyright infringement is a strict liability tort in the vast majority of cases.
There are cases where infringement by negligence could be argued, but as long as there is a clear effort to prevent copying in the output of the tool, there is no tort.
If the models are creating copies inadvertently, separate from the deliberate efforts of the end users, then yes, the creators of the tool would likely be the responsible party for the infringement.
If I ask an LLM for a story about vampires and the model spits out The Twilight Saga, that would be problematic. Nor should the model reproduce the story word for word on demand by the end user. But it seems like neither of these examples are likely outcomes with current models.
The Pirate Bay crew were convicted of aiding copyright infringement, and in that case you could not even download derivative works from their service. Now you can get verbatim text from the models that any traditional publisher would have to license even to print a reworded copy of.
With that said, Creative Commons showed that copyright cannot be fixed; it is broken.