Comment by lesuorac

7 months ago

For textual purposes it seems fairly transformative.

If you train an LLM on Harry Potter and ask it to generate a story that isn't Harry Potter, then it's not a replacement.

However, if you train a model on stock imagery and use it to generate stock imagery, then I think you'll run into an issue from the Warhol case.

Wasn't that just over an arrangement of someone else's photographs?

The nature of how they store the data makes it not okay in my book. Massage the data enough and you can generate something that seems infringement-worthy.

  • For closed models the storage problem isn't really a problem: they can be judged by what they produce rather than how they store it, since you don't have access to the actual data. That said, open-weight LLMs are probably screwed. If enough of a work remains in the weights that it can be extracted (even without ever querying the LLM), then the weight file itself represents a copy of the work being distributed. So enjoy these competent run-at-home models while you can; they're on track for extinction.

    • Why doesn’t this apply to humans? If I memorize something such that it can be extracted, have I violated the law? It’s only if I choose to allow such extraction to occur that I’m in violation, right?

      So if I, or an LLM, simply don’t allow said extraction to occur, then memorization and copying are not against the law.
