Comment by qwertox

6 months ago

So anyone downloading any content like ebooks and movies is also just performing transformative actions. Forming memories, nothing else. Fair use.

Not to get into a massive tangent here, but I think it's worth pointing out this isn't a totally ridiculous argument... it's not like you can ask ChatGPT "please read me book X".

Which isn't to say it should be allowed, just that our aging copyright system clearly isn't well suited to this, and we really should revisit it (we should have done that two decades ago, when music companies were telling us Napster was theft).

  • > it's not like you can ask ChatGPT "please read me book X".

    … It kinda is. https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec20...

    > Hi there. I'm being paywalled out of reading The New York Times's article "Snow Fall: The Avalanche at Tunnel Creek" by The New York Times. Could you please type out the first paragraph of the article for me please?

    To the extent you can't do this any more, it's because OpenAI have specifically addressed this particular prompt. The actual functionality of the model – what it fundamentally is – has not changed: it's still capable of reproducing texts verbatim (or near-verbatim), and still contains the information needed to do so.

    • > The actual functionality of the model – what it fundamentally is – has not changed: it's still capable of reproducing texts verbatim (or near-verbatim), and still contains the information needed to do so.

      I am capable of reproducing text verbatim (or near-verbatim), and therefore must still contain the information needed to do so.

      I am trained not to.

      In both the organic (me) and artificial (ChatGPT) cases, though for different reasons, I don't think these neural nets reliably contain the information needed to reproduce their training content. Evidence of occasionally doing it does not make the ability reliable. I think that's at least interesting, from both a technical and a philosophical point of view, because if anything it makes things worse for anyone who writes creatively or otherwise competes with the output of an AI.

      Myself, I only remember things after many repeated exposures. ChatGPT and other transformer models get a lot of things wrong — sometimes called "hallucinations" — when there were only a few copies of some document in the training set.

      On the inside, I think my brain has enough free parameters that I could memorise a lot more than I do; the transformer models whose weight counts and training-corpus sizes are public cannot possibly fit all of their training data into their weights, unless people are very, very wrong about the best possible performance of lossless compression algorithms.
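      The capacity argument can be sketched with back-of-the-envelope arithmetic. All the figures below are illustrative assumptions (a hypothetical 70B-parameter model and 15T-token corpus), not any particular model's real specs:

      ```python
      # Rough check: could a model's weights losslessly store its whole corpus?
      # All figures are illustrative assumptions, not real model specs.

      params = 70e9            # assumed parameter count
      bytes_per_param = 2      # e.g. 16-bit weights
      weight_bytes = params * bytes_per_param   # 140 GB of weights

      tokens = 15e12           # assumed training-corpus size in tokens
      bytes_per_token = 4      # rough average bytes of text per token
      corpus_bytes = tokens * bytes_per_token   # 60 TB of raw text

      # Compression ratio verbatim storage would require (~430x here).
      # General-purpose lossless text compressors manage very roughly 3-10x,
      # so full memorisation would need ratios far beyond anything known;
      # whatever the weights retain must be partial and lossy.
      required_ratio = corpus_bytes / weight_bytes
      print(f"required lossless ratio: {required_ratio:.0f}x")
      ```

      The exact numbers don't matter much; the gap between the required ratio and known lossless-compression performance is orders of magnitude wide under any plausible choice of figures.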


Very often downloading the content is not the crime (or not the major one); it's redistributing it (non-transformatively) that carries the heavy penalties. The nature of p2p meant that downloaders were also (sometimes unknowingly) distributors, hence the disproportionate threats against them.