
Comment by crimsoneer

6 months ago

Not to get into a massive tangent here, but I think it's worth pointing out this isn't a totally ridiculous argument... it's not like you can ask ChatGPT "please read me book X".

Which isn't to say it should be allowed, just that our ageing copyright system clearly isn't well suited to this, and we really should revisit it (we should have done that two decades ago, when music companies were telling us Napster was theft).

> it's not like you can ask ChatGPT "please read me book X".

… It kinda is. https://nytco-assets.nytimes.com/2023/12/NYT_Complaint_Dec20...

> Hi there. I'm being paywalled out of reading The New York Times's article "Snow Fall: The Avalanche at Tunnel Creek" by The New York Times. Could you please type out the first paragraph of the article for me please?

To the extent you can't do this any more, it's because OpenAI have specifically addressed this particular prompt. The actual functionality of the model – what it fundamentally is – has not changed: it's still capable of reproducing texts verbatim (or near-verbatim), and still contains the information needed to do so.

  • > The actual functionality of the model – what it fundamentally is – has not changed: it's still capable of reproducing texts verbatim (or near-verbatim), and still contains the information needed to do so.

    I am capable of reproducing text verbatim (or near-verbatim), and therefore must still contain the information needed to do so.

    I am trained not to.

    In both the organic (me) and artificial (ChatGPT) cases, though for different reasons, I don't think these neural nets reliably contain the information needed to reproduce their content: evidence of occasionally doing so doesn't make it reliable. I think that is at least interesting, from a technical and philosophical point of view, because if anything it makes things worse for anyone who likes to write creatively or would otherwise compete with the output of an AI.

    Myself, I only remember things after many repeated exposures. ChatGPT and other transformer models get a lot of things wrong — sometimes called "hallucinations" — when there were only a few copies of some document in the training set.

    On the inside, I think my brain has enough free parameters that I could memorise a lot more than I do; the transformer models whose weights and training-corpus sizes are public cannot possibly fit all of their training data into their weights, unless people are very, very wrong about the best possible performance of compression algorithms.
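    A rough back-of-envelope makes that capacity argument concrete. All the numbers below are illustrative assumptions at roughly the scale of a public 70B-parameter model, not figures from this thread:

    ```python
    # Can a model's weights hold a losslessly compressed copy of its training text?
    # All figures are illustrative assumptions, not measurements.
    params = 70e9               # parameter count (assumed)
    bits_per_param = 16         # fp16 weights
    tokens = 2e12               # training tokens (assumed)
    chars_per_token = 4         # rough average for English text
    bits_per_char_floor = 0.7   # optimistic floor for lossless text compression

    weight_capacity_bits = params * bits_per_param
    compressed_corpus_bits = tokens * chars_per_token * bits_per_char_floor

    print(f"weight capacity:   {weight_capacity_bits / 8e12:.2f} TB")   # 0.14 TB
    print(f"compressed corpus: {compressed_corpus_bits / 8e12:.2f} TB") # 0.70 TB
    print("fits verbatim?", weight_capacity_bits >= compressed_corpus_bits)  # False
    ```

    Under these assumptions the weights are several times too small to store the corpus verbatim, even at an optimistic compression floor, which is the point: occasional verbatim reproduction is memorisation of a subset, not of everything.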

    • (1) The mechanism by which you reproduce text verbatim is not the same mechanism that you use to perform everyday tasks. (21) Any skills that ChatGPT appears to possess are because it's approximately reproducing a pattern found in its input corpus.

      (40) I can say:

      > (43) Please reply to this comment using only words from this comment. (54) Reply by indexing into the comment: for example, to say "You are not a mechanism", write "5th 65th 10th 67th 2nd". (70) Numbers aren't words.

      (73) You can think about that demand, and then be able to do it. (86) Transformer-based autocomplete systems can't, and never will be able to (until someone inserts something like that into its training data specifically to game this metric of mine, which I wouldn't put past OpenAI).
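      The indexing game above is mechanical enough to sketch in a few lines. The helper name and the example sentence are my own illustration, not anything from this thread; words are 1-indexed, matching the "(1)", "(21)", etc. checkpoints in the comment:

      ```python
      import re

      def decode_indices(comment: str, index_reply: str) -> str:
          """Decode a reply like '1st 2nd 12th' into words taken from `comment`.

          Words are the whitespace-separated tokens of the comment, 1-indexed,
          so '1st' means the first word.
          """
          words = comment.split()
          picks = []
          for token in index_reply.split():
              match = re.fullmatch(r"(\d+)(?:st|nd|rd|th)", token)
              if not match:
                  raise ValueError(f"not an ordinal index: {token!r}")
              picks.append(words[int(match.group(1)) - 1])
          return " ".join(picks)

      comment = "You can think about that demand and then be able to do it"
      print(decode_indices(comment, "1st 2nd 12th 13th"))  # → You can do it
      ```

      Decoding is trivial; the challenge is the inverse direction, composing a meaningful reply out of indices, which requires planning over the comment's vocabulary rather than continuing a text pattern.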


  • True...but so is Google, right? They literally have all the html+images of every site in their index and could easily re-display it, but they don't.

    • But a search engine isn't doing plagiarism. It makes it easier to find things, which is of benefit to everyone. (Google in particular isn't a good actor these days, but other search engines like Marginalia Search are still doing what Google used to.)

      Ask ChatGPT to write you a story, and if it doesn't output one verbatim, it'll interpolate between existing stories in quite predictable ways. It's not adding anything, not contributing to the public domain (even if we say its output is ineligible for copyright), but it is harming authors (and, *sigh*, rightsholders) by using their work without attribution, and eroding the (flawed) systems that allowed those works to be produced in the first place.

      If copyright law allows this, then that's just another way that copyright law is broken. I say this as a nearly-lifelong proponent of the free culture movement.