Comment by root_axis
2 days ago
> I think the fact that it's present in humans suggests that it might be necessary in an artificial system that reproduces human behavior
But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious. Your problem then becomes defining "human behavior" in a way that grants LLMs consciousness but not every other complex non-living system.
> While it's true that animal and powered human flight are very different, both bird wings and plane wings have converged on airfoil shapes, as these forms are necessary for generating lift.
Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for what best achieves lift, so a plane and a bird share similar wing shapes in order to fly. However, planes didn't develop feathers, because a plane is not an animal; it was simply optimized for lift without needing all the other biological and homeostatic functions that feathers facilitate. LLM inference is a process, not an entity; LLMs have no bodies nor any temporal identity, and the concept of consciousness is totally meaningless and out of place in such a system.
>But that's obviously not true, unless you're implying that any system that reproduces human behavior is necessarily conscious.
That could certainly be the case, yes. You don't understand consciousness or how the brain works. You don't understand how LLMs predict text either, so what's the point in asserting otherwise?
>Yes, but your bird analogy fails to capture the logical fallacy that mine is highlighting. Plane wing design was an iterative process optimized for what best achieves lift, so a plane and a bird share similar wing shapes in order to fly. However, planes didn't develop feathers, because a plane is not an animal; it was simply optimized for lift without needing all the other biological and homeostatic functions that feathers facilitate. LLM inference is a process, not an entity; LLMs have no bodies nor any temporal identity, and the concept of consciousness is totally meaningless and out of place in such a system.
It's not a fallacy, because no one is saying LLMs are humans. They are saying that we give machines the goal of predicting human text. For any half-decent accuracy, modelling human behaviour is a necessity. God knows what else.
>LLMs have no bodies nor any temporal identity
I wouldn't be so sure about the latter, but so what? You can feel tired even after a full night's sleep, feel hungry soon after a large meal, or feel a great deal of pain even when there's absolutely nothing wrong with you. And you know what? The reverse happens too: no pain when things are wrong with your body, wide awake when you badly need sleep, full when you badly need to eat.
Consciousness without a body or hunger in a machine that does not need to eat is very possible. You just need to replicate enough of the sort of internal mechanisms that cause such feelings.
Go to the API and select GPT-5 with medium thinking. Now ask it to do any random 15-digit multiplication you can think of. Now watch it get it right.
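If you'd rather script it than poke at the playground, here's a rough sketch of that check in Python, comparing the model's answer against exact integer arithmetic. The `gpt-5` model name and the `reasoning_effort` parameter are assumptions about what the OpenAI Python SDK exposes; adjust them to whatever the API actually offers.

    import random
    import re
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # two random 15-digit operands
    a = random.randint(10**14, 10**15 - 1)
    b = random.randint(10**14, 10**15 - 1)

    resp = client.chat.completions.create(
        model="gpt-5",              # assumed model identifier
        reasoning_effort="medium",  # the "medium thinking" setting mentioned above
        messages=[{
            "role": "user",
            "content": f"Compute {a} * {b}. Reply with the final integer only.",
        }],
    )

    # strip everything but digits from the reply and compare to the exact product
    answer = re.sub(r"[^\d]", "", resp.choices[0].message.content or "")
    print("model :", answer)
    print("exact :", a * b)
    print("match :", answer == str(a * b))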
Do you people seriously not understand what it is that LLMs do? What the training process incentivizes?
GPT-5 thinking figured out the algorithm for multiplication just so it could predict that kind of text correctly. Don't you understand the significance of that?
These models try to figure out and replicate the internal processes that produce the text they are tasked with predicting.
Do you have any idea what that might mean when 'that kind of text' is all the things humans have written?
> That could certainly be the case, yes. You don't understand consciousness or how the brain works. You don't understand how LLMs predict text either, so what's the point in asserting otherwise
I don't need to assert otherwise; the default assumption is that they aren't conscious, since they weren't designed to be and have no functional reason to be. Matrix multiplication can explain how LLMs produce text; the observation that the text they generate sometimes resembles human writing is not evidence of consciousness.
> God knows what else
Appealing to the unknown doesn't prove anything, so we can totally dismiss this reasoning.
> Consciousness without a body or hunger in a machine that does not need to eat is very possible. You just need to replicate enough of the sort of internal mechanisms that cause such feelings.
This makes no sense. LLMs don't have feelings; they are processes, not entities, and they have no bodies or temporal identities. Again, there is no reason they need to be conscious; everything they do can be explained through matrix multiplication.
> Now ask it to do any random 15-digit multiplication you can think of. Now watch it get it right.
The same is true of a calculator and mundane computer programs; that's not evidence that they're conscious.
> Do you have any idea what that might mean when 'that kind of text' is all the things humans have written
It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.
>I don't need to assert otherwise; the default assumption is that they aren't conscious, since they weren't designed to be and have no functional reason to be.
Unless you are religious, nothing that is conscious was explicitly designed to be conscious. Sorry, but evolution is just a dumb, blind optimizer, not unlike the training processes that produce LLMs. Even if you are religious but believe in evolution, the mechanism is still the same: a dumb optimizer.
>Matrix multiplication can explain how LLMs produce text; the observation that the text they generate sometimes resembles human writing is not evidence of consciousness.
It cannot, any more than 'electrical and chemical signals' can explain how humans produce text.
>The same is true of a calculator and mundane computer programs; that's not evidence that they're conscious.
The point is not that it is conscious because it figured out how to multiply. The point is to demonstrate what the training process really is and what it actually incentivizes. Training pushes the model to figure out the internal processes that produced the text, in order to predict it better. The implications of that are pretty big when the text isn't just arithmetic. You say there's no functional reason, but that's not true. In this context, 'better prediction of human text' is as functional a reason as any.
>It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.
Whether it's literally all the text or not is irrelevant.