Comment by qnleigh
3 hours ago
Well I'm not saying that LLMs are conscious; I'm just saying that I'm not super-confident either way.
To flesh this out a bit more, I agree that the ability to communicate is not enough (ELIZA probably didn't pass the bar, even if it did kinda pass a Turing test). But that's also not what gives me pause with LLMs. It's how much information processing they seem to be doing under the hood.
It's really hard to imagine how next-word prediction could lead to consciousness, but I find it almost as hard to see why evolution did. If we can't even detect whether something has subjective experiences, then how could it have been selected for by evolution? The only possibility I see is that consciousness is a byproduct of certain kinds of information-processing tasks.* And if it's something that emerges naturally, then the line starts to get very blurry.
*This sounds reductive, but I don't at all mean it that way.
> but I find it almost as hard to see why evolution did.
Ignoring the concept of consciousness, self-awareness seems like an attribute strongly tied to survival. It seems like it would help drive or amplify survival-critical emotional states (e.g. concern for one's own survival, competition/success, love of self and relatives, etc.).
I can't see anything in the LLM machinery that would support the notion of self-awareness in advance of the token-selection process.
Possibly it could be argued that internal state is included during token selection, and that the result functionally looks as though self-awareness were part of the process, but that seems unconvincing.
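For what it's worth, here's a minimal sketch of what "internal state is included during token selection" amounts to mechanically (Python, GPT-2 via Hugging Face transformers; the model choice is mine, purely for illustration): the final hidden state determines the logits within a single forward pass, and nothing persists between calls.

    # Minimal sketch of greedy next-token selection.
    import torch
    from transformers import AutoModelForCausalLM, AutoTokenizer

    tok = AutoTokenizer.from_pretrained("gpt2")
    model = AutoModelForCausalLM.from_pretrained("gpt2")

    ids = tok("The cat sat on the", return_tensors="pt").input_ids
    with torch.no_grad():
        out = model(ids, output_hidden_states=True)

    # The "internal state": the final-layer hidden vector at the last position.
    hidden = out.hidden_states[-1][0, -1]   # shape: (hidden_size,)
    logits = out.logits[0, -1]              # scores over the vocabulary
    next_id = int(torch.argmax(logits))     # greedy token selection
    print(tok.decode([next_id]))

    # The hidden state is recomputed from the tokens alone on every call;
    # no standing self-model carries over between forward passes.

That's the whole loop: any self-awareness-like behavior would have to be reconstructed from the context window on every pass.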