I wouldn't call it that. Goldsmith took a photograph of Prince, which Warhol used as a reference to create an illustration. Vanity Fair then chose to license Warhol's print instead of Goldsmith's photograph.
So, despite the artwork being visually transformative (silkscreen vs. photograph), the actual use was not transformative.
The nature of how they store data makes it not okay in my book. Massage the data enough and you can generate something that looks infringement-worthy.
For closed models the storage problem isn't really a problem: since you have no access to the underlying data, they can only be judged by what they produce, not how they store it. Open-weight LLMs, though, are probably screwed. If enough of a work remains in the weights that it can be extracted (even without ever talking to the LLM), then the weight file itself is a copy of the work being distributed. So enjoy these competent run-at-home models while you can; they're on track for extinction.
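For what it's worth, the weaker, prompt-based version of the "can it be extracted" claim is testable against any open-weight model: feed it the opening of a protected passage and see whether greedy decoding reproduces the real continuation verbatim. A minimal sketch using the Hugging Face transformers API (the model name and prompt here are stand-ins, not a claim about any particular model):

    from transformers import AutoModelForCausalLM, AutoTokenizer

    MODEL = "gpt2"  # placeholder for whatever open-weight model is under test
    tok = AutoTokenizer.from_pretrained(MODEL)
    model = AutoModelForCausalLM.from_pretrained(MODEL)

    def continuation(prompt, n_tokens=50):
        # Greedy decoding (do_sample=False) makes the probe deterministic,
        # so a verbatim match is reproducible rather than a sampling fluke.
        ids = tok(prompt, return_tensors="pt").input_ids
        out = model.generate(ids, max_new_tokens=n_tokens, do_sample=False)
        return tok.decode(out[0, ids.shape[1]:], skip_special_tokens=True)

    # Prompt with the opening words of a protected passage, then compare
    # the model's continuation against the actual next words of the text.
    print(continuation("Mr. and Mrs. Dursley, of number four, Privet Drive,"))

A high verbatim match rate across many such probes is the kind of evidence the "copy in the weights" argument would rest on.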
For text, it seems fairly transformative.
If you train an LLM on Harry Potter and ask it to generate a story that isn't Harry Potter, then it's not a replacement.
However, if you train a model on stock imagery and use it to generate stock imagery, then I think you'll run into the issue raised by the Warhol case.
Wasn't that just over an arrangement of someone else's photographs?
https://en.wikipedia.org/wiki/Andy_Warhol_Foundation_for_the...
I wonder if https://en.wikipedia.org/wiki/Illegal_number comes into play here.
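The intuition there is that any byte string, a weight file included, is literally just one large integer, so "the number itself is a copy" isn't as exotic as it sounds. A two-way sketch in Python (the sample string is just an illustration):

    # Any bytes -> one big integer, and back again losslessly.
    text = "Any copyrighted work is ultimately just a number."
    n = int.from_bytes(text.encode("utf-8"), "big")
    print(n)  # the "work", expressed as a single integer

    raw = n.to_bytes((n.bit_length() + 7) // 8, "big")
    print(raw.decode("utf-8"))  # reconstructed exactly from the integer alone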
What's the steelman case that it is transformative? Because prima facie, it seems to only produce original, "intelligent" output.