
Comment by simmerup

7 months ago

Depends on whether you actually agree it's transformative.

For textual works it seems fairly transformative.

If you train an LLM on Harry Potter and ask it to generate a story that isn't Harry Potter, then the output isn't a replacement for the original.

However, if you train a model on stock imagery and use it to generate stock imagery, then I think you'll run into an issue under the Warhol case.

  • Wasn't that just over an arrangement of someone else's photographs?

  • The way these models store data makes it not okay in my book. Massage the data enough and you can generate something that looks infringement-worthy.

    • For closed models the storage question isn't really a problem: they can be judged by what they produce, not by how they store it, since you don't have access to the actual data. Open-weight LLMs, though, are probably screwed. If enough of a work remains in the weights that it can be extracted (even without ever querying the LLM), then the weight file itself is a copy of the work being distributed; a crude way to probe for that kind of memorization is sketched after this thread. So enjoy these competent run-at-home models while you can, because they're on track for extinction.

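To make the extraction claim concrete, here's a minimal sketch of a generation-based memorization probe: feed an open-weight model a prefix from a suspected training text and check whether greedy decoding reproduces the true continuation verbatim. The model name, prefix, and continuation below are placeholders, and a small model like GPT-2 won't necessarily reproduce this passage; the point is only to illustrate the test, not to assert any particular model memorized any particular work. (Extracting text directly from the weight file, without querying the model, is a harder problem and not shown here.)

```python
# Hypothetical memorization probe, assuming a Hugging Face causal LM.
# "gpt2" is a stand-in for whatever open-weight model is under test.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL = "gpt2"  # placeholder model id
tokenizer = AutoTokenizer.from_pretrained(MODEL)
model = AutoModelForCausalLM.from_pretrained(MODEL)

# Placeholder prefix/continuation pair from the suspected source text.
prefix = "Mr. and Mrs. Dursley, of number four, Privet Drive,"
true_continuation = "were proud to say that they were perfectly normal"

inputs = tokenizer(prefix, return_tensors="pt")
output = model.generate(
    **inputs,
    max_new_tokens=20,
    do_sample=False,  # greedy decoding: the model's single most likely path
    pad_token_id=tokenizer.eos_token_id,
)
# Decode only the newly generated tokens, then test for a verbatim match.
generated = tokenizer.decode(output[0][inputs["input_ids"].shape[1]:])
print("model continuation:", generated)
print("verbatim match:", true_continuation in generated)
```

A verbatim match on long, distinctive passages is the kind of evidence that would support the claim above that the weights retain a recoverable copy of the work.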

What's the steelman case that it's transformative? Because prima facie, it seems to produce only original output: "intelligent" output.