Comment by root_axis
2 days ago
> Isn't consciousness an emergent property of brains
We don't know, but I don't think that matters. Language models are so fundamentally different from brains that it's not worth considering their similarities for the sake of a discussion about consciousness.
> how do we know that it doesn't serve a functional purpose
It probably does; otherwise we'd need an explanation for why something with no purpose evolved.
> necessary for an AI system to have consciousness
This logic doesn't follow. The fact that it is present in humans doesn't then imply it is present in LLMs. This type of reasoning is like saying that planes must have feathers because plane flight was modeled after bird flight.
> there's no reason to expect those aspects would emerge organically. But I don't think you can extend that to the entire concept of consciousness.
Why not? You haven't presented any distinction between the "certain aspects" of consciousness that you say wouldn't emerge and the other, unspecified qualities whose emergence you're open to. Why?
>This logic doesn't follow. The fact that it is present in humans doesn't then imply it is present in LLMs. This type of reasoning is like saying that planes must have feathers because plane flight was modeled after bird flight.
I think the fact that it's present in humans suggests that it might be necessary in an artificial system that reproduces human behavior. It's funny that you mention birds because I actually also had birds in mind when I made my comment. While it's true that animal and powered human flight are very different, both bird wings and plane wings have converged on airfoil shapes, as these forms are necessary for generating lift.
>Why not? You haven't presented any distinction between the "certain aspects" of consciousness that you say wouldn't emerge and the other, unspecified qualities whose emergence you're open to. Why?
I personally subscribe to the Global Workspace Theory of human consciousness, which basically holds that attention acts as a spotlight, bringing mental processes that are otherwise unconscious, or in shadow, into the awareness of the entire system. If the systems that would normally produce e.g. fear or pain (such as responses to negative physical stimuli, developed from interacting with the physical world and selected for by evolution) aren't in the workspace, then they won't be present in consciousness, because attention can't be focused on them.
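To make the spotlight metaphor concrete, here's a toy sketch (my own illustration, not anyone's actual GWT model; the module names and salience numbers are invented): specialist processes compete for the workspace, and a module that isn't wired into it can never be broadcast, no matter how salient its signal.

```python
class Module:
    def __init__(self, name, in_workspace=True):
        self.name = name
        self.in_workspace = in_workspace  # wired into the workspace at all?
        self.received = []

    def receive(self, broadcast):
        self.received.append(broadcast)


def workspace_cycle(modules, signals):
    """One attention cycle: the most salient signal from a module that is
    wired into the workspace wins the spotlight and is broadcast to all."""
    candidates = {m.name: signals[m.name]
                  for m in modules if m.in_workspace and m.name in signals}
    if not candidates:
        return None
    winner, (content, _salience) = max(candidates.items(),
                                       key=lambda kv: kv[1][1])
    for m in modules:  # global broadcast: the whole system "becomes aware"
        m.receive((winner, content))
    return winner


modules = [
    Module("vision"),
    Module("language"),
    Module("pain", in_workspace=False),  # not wired in: can never surface
]
signals = {
    "vision": ("red light ahead", 0.7),
    "language": ("someone said my name", 0.9),
    "pain": ("tissue damage!", 1.0),  # most salient, yet never broadcast
}
print(workspace_cycle(modules, signals))  # -> language
```

The pain module has the most salient signal, but because it's outside the workspace it can never win the spotlight, which is the point about LLMs: whatever isn't in the workspace can't reach awareness.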
> I think the fact that it's present in humans suggests that it might be necessary in an artificial system that reproduces human behavior
But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious. Your problem then becomes defining "human behavior" in a way that grants LLMs consciousness but not every other complex non-living system.
> While it's true that animal and powered human flight are very different, both bird wings and plane wings have converged on airfoil shapes, as these forms are necessary for generating lift.
Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for what best achieves lift; thus a plane and a bird share similarities in wing shape in order to fly. Planes didn't develop feathers, however, because a plane is not an animal: it was simply optimized for lift, without needing all the other biological and homeostatic functions that feathers facilitate. LLM inference is a process, not an entity. LLMs have no bodies and no temporal identity, so the concept of consciousness is totally meaningless and out of place in such a system.
>But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious.
That could certainly be the case, yes. You don't understand consciousness, nor how the brain works. You don't understand how LLMs predict text either, so what's the point in asserting otherwise?
>Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for what best achieves lift; thus a plane and a bird share similarities in wing shape in order to fly. Planes didn't develop feathers, however, because a plane is not an animal: it was simply optimized for lift, without needing all the other biological and homeostatic functions that feathers facilitate. LLM inference is a process, not an entity. LLMs have no bodies and no temporal identity, so the concept of consciousness is totally meaningless and out of place in such a system.
It's not a fallacy, because no one is saying LLMs are humans. They're saying that we give machines the goal of predicting human text, and for any half-decent accuracy, modelling human behaviour is a necessity. God knows what else is.
>LLMs have no bodies and no temporal identity
I wouldn't be so sure about the latter, but so what? You can feel tired even after a full night's sleep, feel hungry soon after a large meal, or feel a great deal of pain even when there's absolutely nothing wrong with you. And you know what? Even the reverse happens: no pain when things are wrong with your body, wide awake when you badly need sleep, full when you badly need to eat.
Consciousness without a body, or hunger in a machine that does not need to eat, is very possible. You just need to replicate enough of the internal mechanisms that cause such feelings.
Go to the API and select GPT-5 with medium thinking. Ask it to do any random 15-digit multiplication you can think of, then watch it get it right.
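If you want to reproduce that yourself, here's a rough sketch assuming the OpenAI Python SDK's Responses API and its reasoning-effort parameter; the exact model name and parameter shape may differ from what's current, so treat it as illustrative. It checks the model's answer against Python's exact big-integer arithmetic:

```python
import random
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

# Two random 15-digit numbers.
a = random.randrange(10**14, 10**15)
b = random.randrange(10**14, 10**15)

resp = client.responses.create(
    model="gpt-5",
    reasoning={"effort": "medium"},
    input=f"Compute {a} * {b}. Reply with only the final integer.",
)

model_answer = int(resp.output_text.strip().replace(",", ""))
print(model_answer == a * b)  # verify against exact arithmetic
```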
Do you people seriously not understand what it is that LLMs do? What the training process incentivizes?
GPT-5 thinking figured out the algorithm for multiplication just so it could predict that kind of text correctly. Don't you understand the significance of that?
These models try to figure out and replicate the internal processes that produce the text they are tasked with predicting.
Do you have any idea what that might mean when 'that kind of text' is all the things humans have written?