Comment by threethirtytwo
3 hours ago
Replace the word chimpanzee with human in your own argument and realize that the same logic applies to other humans.
When another human smiles, you assume he is happy and not just baring his teeth at you, because that's what you do when you smile. You are "anthropomorphizing" other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.
> In the case of LLMs though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?
First, we don't know whether LLMs are conscious. People here are talking about the realistic possibility that they are.
Second, the algorithm is much more than a next-word predictor. The intelligence that goes into choosing the next word such that it constructs arguments and answers that are correct involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require deep understanding of the topic at hand. It cannot token-predict working code in my company's code base without understanding the code.
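To make the contrast in the quoted question concrete, here is a minimal toy sketch (everything in it, including the corpus and toy_model, is invented for illustration; toy_model merely stands in for a real transformer's forward pass): a nearest-neighbour lookup can only return answers it already stores, while a next-token loop composes its output one token at a time, conditioned on everything it has produced so far.

```python
import math
import random

# --- Nearest neighbour: pure retrieval over a fixed corpus ---

corpus = {
    "how do I reverse a list": "use reversed() or slicing",
    "how do I sort a dict by value": "use sorted() with a key function",
}

def nearest_neighbour_answer(query):
    """Return the stored answer whose key shares the most words with the
    query. Nothing new is ever composed; output is limited to the corpus."""
    def overlap(a, b):
        return len(set(a.split()) & set(b.split()))
    best = max(corpus, key=lambda k: overlap(k, query))
    return corpus[best]

# --- Next-token prediction: output is composed, not retrieved ---

VOCAB = ["use", "reversed()", "or", "slicing", "."]

def toy_model(tokens):
    """Stand-in for a real transformer: emits arbitrary logits. A real model
    derives these from billions of learned weights applied to the context."""
    random.seed(sum(tokens))  # deterministic fake logits per context
    return [random.random() for _ in VOCAB]

def softmax(logits):
    m = max(logits)
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def generate(prompt_tokens, steps=5):
    """Autoregressive loop: each emitted token feeds back into the next
    forward pass, so the result need not exist verbatim anywhere."""
    tokens = list(prompt_tokens)
    for _ in range(steps):
        probs = softmax(toy_model(tokens))
        tokens.append(random.choices(range(len(VOCAB)), weights=probs, k=1)[0])
    return " ".join(VOCAB[t] for t in tokens)

print(nearest_neighbour_answer("how do I reverse a list"))
print(generate([0]))  # token 0 = "use"
```

The retrieval function can never say anything outside its corpus; the generation loop, trivial as this toy version is, has an output space that grows exponentially with length. Whether that difference bears on consciousness is the open question, but mechanically the two are not the same kind of algorithm.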
Third, we do not know what drives human consciousness, but we do know it is model-able as a very complex mathematical algorithm. We know this because we have pretty complete mathematical models for lower levels of reality. For example, we can model atoms mathematically. Brains are made of atoms, and because atoms are mathematically model-able, human brains, and thus consciousness, must be mathematically model-able too.
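To make "atoms are mathematically model-able" concrete: the standard non-relativistic model of matter at the atomic scale is the Schrödinger equation,

```latex
i\hbar \frac{\partial}{\partial t}\Psi(\mathbf{r}, t)
  = \left[ -\frac{\hbar^2}{2m} \nabla^2 + V(\mathbf{r}, t) \right] \Psi(\mathbf{r}, t)
```

Writing the equation down is easy; solving it exactly for more than a handful of interacting particles is already intractable, which is precisely the complexity wall the rest of this argument runs into.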
The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because its behavior cannot be simplified down to a few concepts.
*To understand the LLM requires simultaneously grasping likely billions of concepts and how all the weights in the LLM interact.*
What you are missing in your analysis is that this is the same reason we don't understand the human brain. The foundational math already exists: we can model atoms mathematically, and since the brain is made of atoms we should be able to model the brain… but we can't. We can't because it is too complex.
*To understand the human brain requires simultaneously grasping likely billions of concepts and how all the weights in the human brain interact.*
I italicized two sentences here to help you understand the logic. Our thinking is more foundational than anthropomorphization. The argument has moved far beyond that. You need to think deeper.
The key here is that we don't understand human brains and we don't understand LLMs. But since the output LLMs produce is very similar to the output produced by the human brain… and since we assume, without any rigorous logical proof, that other human brains are conscious… what is stopping us from assuming the LLM is conscious?