Comment by throwanem

14 days ago

Language and speech comprehension and production are relatively well understood to be heavily localized in the left hemisphere; if you care to know something whereof you speak (and indeed with what, in a meat sense), then you'll do well to begin your reading with Broca's and Wernicke's areas. Consciousness is in no sense required for these regions to function; an anesthetized and unconscious human may be made to speak or sing, and some have been, through direct electrical stimulation of brain tissue in these regions.

I am quite confident in pronouncing first that the internal functioning of large language models is broadly and radically unlike that of humans, and second that, minimally, no behavior produced by current large language models is strongly indicative of consciousness.

In practice, I would go considerably further in saying that, in my estimation, many behaviors point precisely in the direction of LLMs being without qualia or internal experience of a sort recognizable or comparable with human consciousness or self-experience. Interestingly, I've also discussed this in terms of recursion, more specifically of the reflexive self-examination which I consider consciousness probably exists fundamentally to allow, and which LLMs do not reliably simulate. I doubt it means anything that LLMs which get into these spirals with their users tend to bring up themes of "signal" and "recursion" and so on, like how an earlier generation of models really seemed to like the word "delve." But I am curious to see how this tendency of the machine to drive its user into florid psychosis will play out.

(I don't think Hoel's "integrated information theory" is really all that supportable, but the surprise minimization stuff doesn't appear novel to him and does intuitively make sense to me, so I don't mind using it.)

Again, knowing that consciousness isn't required for language is not the same thing as knowing what consciousness is. We don't know what consciousness is in humans. We don't know what causes it. We don't even know how human brains do the things they do (knowing which region is mostly responsible for language is not at all the same as knowing how that region does it).

But also, the claim that an anesthetized human is therefore not conscious is one I think we don't understand consciousness well enough to make confidently. They don't remember it afterwards, but does that mean they weren't conscious? That seems like a claim that would require a more mechanistic understanding of consciousness than we actually have, and it is in part assuming the conclusion and/or mixing up different definitions of the word "conscious". (The fact that there are various definitions, meaning things like "is awake and aware" versus "has an internal state/qualia", is part of the problem in these discussions.)

  • You said:

    > I will find these types of arguments a lot more convincing once the person making them is able to explain, in detail and with mechanisms, what it is the human brain does that allows it to [produce behavior comparable to that of LLMs], and in what ways those detailed mechanisms are different from what LLMs do.

    I addressed myself to those concerns, to which consciousness is broadly not relevant. Oh, conscious control of speech production exists when consciousness is present, of course; the inhibitory effect of consciousness, like the science behind where and how speech and language arise in the brain, is by now very well documented. But you keep talking about consciousness as though it and speech production had some essential association, and you are confusing the issue and yourself thereby.

    As I have noted, there exists much research in neuroscience, a good deal of it now decades old, which addresses the concerns you treat as unanswerable. Rather than address yourself further to me directly, I would suggest spending the same time following the references I already gave.

    • I'm talking about consciousness because that's what the parent comment was making claims about. The original claim was that LLMs are definitely not conscious. I responded that we don't understand consciousness well enough to make that claim. You responded that consciousness is not necessary for language. I do not dispute that claim, but it's irrelevant to both the original comment and my reply. In fact, I agree: I said that I think LLMs are likely not conscious, and they have obvious language ability, so I obviously don't think that language ability necessarily implies consciousness. I just don't think that, alone, is enough to disprove their consciousness.

      You, and the research you advise I look into, are answering a totally different question (unless you are suggesting that research has in fact solved the question of what human consciousness is, how it works, etc., in which case I would love for you to point me in that direction so I can read more).
