Comment by pyrale

9 months ago

Since there is no such thing as training rights, they would have a reasonable claim.

I think it is more reasonable for content owners to say what can and cannot be done with their data. After all, content is what makes AI possible, and content owners could easily start their own LLM if they wanted to, since a lot of it is open source now.

  • You're taking an "everything not permitted is forbidden" approach, which contradicts the common law principle of residual freedom.

    This would automatically outlaw any new use of information (e.g., music sampling) by default.

    If all novel uses were banned from the outset, cultural progress would suffer immeasurably.

    • I don't think cultural progress will suffer from copyright holders preventing AI from using their content.

      What I think will suffer more is the bank accounts of AI corporations.


  • they are not content "owners" though. they have a copyright that regulates who can copy and distribute that data. they don't have a say in how that content is used when acquired legally, as long as your activity doesn't constitute distribution.

  • >I think it is more reasonable for content owners to say what can and cannot be done with their data.

    They lose that right as soon as they sell it to other people.

    No, you can't sell a book to someone and then sue anyone who reads the book upside down.

    That would be ridiculous. If you don't want someone reading your book upside down, or training on it, then don't sell books.

    • You assume that "training" and human learning are similar things.

      This is a bit like saying that taking a holiday picture of someone and putting a surveillance camera on the street are the same thing.

      I think many books actually prohibit storage in an information retrieval system, and AI can be considered a form of that.
