Comment by root_axis
1 day ago
> That could certainly be the case, yes. You don't understand consciousness nor how the brain works. You don't understand how LLMs predict text, so what's the point in asserting otherwise?
I don't need to assert otherwise; the default assumption is that they aren't conscious, since they weren't designed to be and have no functional reason to be. Matrix multiplication can explain how LLMs produce text, and the observation that the text they generate sometimes resembles human writing is not evidence of consciousness.
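To make that concrete, here's a toy sketch (all sizes and weights made up, attention replaced by a crude average, one feed-forward block standing in for many layers) of how a decoder-style model maps tokens to next-token probabilities. It is not any real model's architecture, but it shows the shape of the computation: matrix multiplications plus a couple of pointwise operations.

```python
import numpy as np

rng = np.random.default_rng(0)

VOCAB, D = 50, 16                   # toy vocabulary and hidden size
E  = rng.normal(size=(VOCAB, D))    # token embedding matrix
W1 = rng.normal(size=(D, 4 * D))    # feed-forward weights (one toy block)
W2 = rng.normal(size=(4 * D, D))
U  = rng.normal(size=(D, VOCAB))    # unembedding: hidden state -> logits

def next_token_probs(token_ids):
    """Distribution over the next token: nothing but matmuls plus ReLU/softmax."""
    h = E[token_ids].mean(axis=0)       # crude context summary (stand-in for attention)
    h = np.maximum(0, h @ W1) @ W2      # feed-forward block: two matrix multiplications
    logits = h @ U                      # project back to the vocabulary
    exp = np.exp(logits - logits.max()) # numerically stable softmax
    return exp / exp.sum()

probs = next_token_probs(np.array([3, 17, 42]))
print(probs.argmax(), probs.max())      # most likely "next token" under random weights
```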
> God knows what else
Appealing to the unknown doesn't prove anything, so we can totally dismiss this reasoning.
> Consciousness without a body or hunger in a machine that does not need to eat is very possible. You just need to replicate enough of the sort of internal mechanisms that cause such feelings.
This makes no sense. LLMs don't have feelings; they are processes, not entities, with no bodies or temporal identities. Again, there is no reason for them to be conscious, since everything they do can be explained through matrix multiplication.
> Now ask it to do any random 15 digit multiplication you can think of. Now watch it get it right.
The same is true of a calculator or any mundane computer program, and that's not evidence that they're conscious.
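For instance, any ordinary interpreter multiplies two 15-digit numbers exactly, with no "understanding" involved; the operands here are made up:

```python
a = 482_913_570_264_815   # two arbitrary 15-digit numbers
b = 730_159_246_801_357
print(a * b)              # exact product from plain integer arithmetic
```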
> Do you have any idea what that might mean when 'that kind of text' is all the things humans have written
It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.
>I don't need to assert otherwise, the default assumption is that they aren't conscious since they weren't designed to be and have no functional reason to be.
Unless you are religious, nothing that is conscious was explicitly designed to be conscious. Sorry, but evolution is just a dumb, blind optimizer, not unlike the training processes that produce LLMs. And even if you are religious but believe in evolution, the mechanism is still the same: a dumb optimizer.
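If it helps, here's a minimal sketch of what "dumb, blind optimizer" means: random-mutation hill climbing that improves a fitness score with no model of why. Everything here is made up for illustration, but it's the rough shape shared by natural selection and gradient-based training.

```python
import numpy as np

rng = np.random.default_rng(0)
target = rng.normal(size=8)                 # stand-in for "whatever happens to score well"

def fitness(genome):
    return -np.sum((genome - target) ** 2)  # higher is better; the optimizer never knows why

genome = rng.normal(size=8)
for _ in range(5000):
    mutant = genome + rng.normal(scale=0.05, size=8)  # blind random mutation
    if fitness(mutant) > fitness(genome):             # keep it only if it scores better
        genome = mutant

print(fitness(genome))   # approaches 0: near-optimal, with no design or foresight anywhere
```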
>Matrix multiplication can explain how LLMs produce text, the observation that the text it generates sometimes resembles human writing is not evidence of consciousness.
It cannot, any more than "electrical and chemical signals" can explain how humans produce text.
>The same is true for a calculator and mundane computer programs, that's not evidence that they're conscious.
The point is not that it is conscious because it figured out how to multiply. The point is to demonstrate what the training process really is and what it actually incentivizes: training will try to recover the internal processes that produced the text in order to predict it better. The implications of that are pretty big when the text isn't just arithmetic. You say there's no functional reason, but that's not true; in this context, "better prediction of human text" is as functional a reason as any.
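A toy version of the objective in question (the standard next-token cross-entropy loss; the numbers are made up): the optimizer is rewarded for whatever internal computation makes the true next token more probable, nothing more and nothing less.

```python
import numpy as np

def cross_entropy(probs, target_id):
    """Next-token training loss: -log P(actual next token).
    Gradient descent lowers this by reshaping the model's internals,
    whatever they need to be, so the true continuation gets more probability."""
    return -np.log(probs[target_id])

# toy model: probabilities over a 5-token vocabulary before/after an update
before = np.array([0.20, 0.20, 0.20, 0.20, 0.20])
after  = np.array([0.05, 0.05, 0.70, 0.10, 0.10])
print(cross_entropy(before, target_id=2))  # ~1.61
print(cross_entropy(after,  target_id=2))  # ~0.36 -> lower loss, better prediction
```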
>It's not "all the things humans have written", not even remotely close, and even if that were the case, it doesn't have any implications for consciousness.
Whether it's literally all the text humans have written or not is irrelevant; what matters is that the training distribution is broad enough that predicting it well pushes the model toward the processes that generated it.