Comment by Davidzheng

2 days ago

I mean, suppose a human without consciousness (if that is even possible) produces text with a statistically different distribution than a human with consciousness. Since the text the machine is trained on comes from the latter category, the machine will eventually land in that distribution rather than the former. So whatever consciousness contributes to the text serves a "function" in the LLM: minimizing loss means approximating that distribution.
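
To make the loss-minimization point concrete, here is a minimal sketch of the standard next-token pretraining objective (nothing specific to any particular model): the expected cross-entropy loss decomposes as

$$\mathbb{E}_{x \sim p_{\text{text}}}\!\left[-\log q_\theta(x)\right] \;=\; H(p_{\text{text}}) \;+\; D_{\mathrm{KL}}\!\left(p_{\text{text}} \,\|\, q_\theta\right),$$

and since the entropy term is fixed by the corpus, driving the loss down is the same as driving $q_\theta$ toward whatever distribution the training text actually came from.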

Also, I find it a somewhat emotional distinction to write "predict sequences of text that resemble human writing" instead of "predict human writing". They are designed to predict (at least in pretraining) human writing, for the most part. They may fail at the task, and what they produce is text that resembles human writing. But their task is not to resemble human writing; their task is to "predict human writing". Probably a meaningless distinction, but I find that letting an emotional reaction against similarities between machines and humans creep in somewhat detracts from the logical argument.

> I mean, suppose a human without consciousness (if that is even possible) produces text with a statistically different distribution than a human with consciousness. Since the text the machine is trained on comes from the latter category, the machine will eventually land in that distribution rather than the former. So whatever consciousness contributes to the text serves a "function" in the LLM: minimizing loss means approximating that distribution.

Sorry, I'm not following exactly what you're getting at here, do you mind rephrasing it?

> Also, I find it a somewhat emotional distinction to write "predict sequences of text that resemble human writing" instead of "predict human writing"

I don't know what you mean by an emotional distinction. Either way, my point is that LLMs aren't models of humans, they're models of text, and that becomes obvious when the model's statistical power inevitably gives out at some sequence length set by its size. For GPT-1 that length is only a few words; for GPT-5 it's a few dozen pages. But fundamentally we're talking about systems that bear almost no resemblance to actual human minds.

  • I basically agree with you. By the first point I mean that if it is possible to tell whether a being is conscious or not from the text it produces, then eventually the machine will, by imitating the distribution, emulate the characteristics of the text of conscious beings. So if consciousness (assuming it's reflected in behavior at all) is essential to completing some text task, it must eventually be present in your machine once the machine is similar enough to a human.

    Basically, if consciousness is useful for any text task, I think machine learning will create it. I guess this argument assumes some efficiency of evolution.

    Wrt length generalization: I think on the order of, say, 1M tokens it mostly stops mattering for the purposes of this question. One could still ask about the model's consciousness during the period it stays coherent.

    • I guess logically one needs to assume something like: if you simulate the brain completely accurately, the simulation is conscious too. Which I assume, because if it's false the concept seems to be outside of science anyway.
