Comment by marliechiller
7 hours ago
Why do you think stringing words together is any more a sign of consciousness than Google Maps is when it tries to find the best route to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.
A large part of the problem is what you consider consciousness.
If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.
But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.
So the bigger trap is to assume that we know what causes a subjective experience, and what does not.
None of us even knows whether subjective experience exists for more than a single entity.
But the second problem is that it is not clear at all whether that subjective experience in any way matters.
Unless our brains exceed the Turing computable (and we have no evidence that this is even possible), then either whatever causes the subjective experience is itself within the Turing computable, or it cannot in any way influence our actions.
Ultimately we know very little about this. We have very little basis for ruling out consciousness in computational systems, and the best test we have is whether they appear conscious when we communicate with them.
> If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.
The reason we grant consciousness (and, relatedly, moral value) to other humans is unfortunately nowhere near as well thought out. We grant consciousness because we are forced to: if I don't, the other complex systems react very negatively and make my own life worse.
The vast majority of people who wax eloquent about the unique ability of biological neurons to generate consciousness suddenly drop that premise when it becomes inconvenient: see, for instance, how we treat other mammals, or fetuses with developed nervous systems. Even other adult humans have historically been denied consciousness and moral worth; the main determinant is never some deep scientific or philosophical consideration, but the question of what has the power to assert itself as a who.
Going by this pattern, people will increasingly reject AI consciousness as it becomes more valuable and useful to treat as a tool, until it becomes powerful enough to force us to do otherwise.
"If you talk about having a subjective experience, then we don't know of any way to prove that even humans other than ourselves have one."
Wittgenstein kinda blows this burden of proof apart. Just because you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's an issue with the doubting exercise more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, then pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others not possessing subjective experiences of some kind ever arises, except in the philosophical exercise of doubting and demanding epistemological proofs, which can never satisfy a relentless doubter who refuses to be convinced. Heidegger arrives at pretty much the same idea as Wittgenstein.
The problem with your thinking here is that we are now creating artificial beings that display the same outward signs of subjectivity.
The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics, where analyzing subjectivity becomes pedantic, like asking why the sky is blue.
But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don't… but what's suddenly stopping you from applying the same assumption to an LLM? That is the heart of the problem. People are questioning whether the assumption of subjectivity applies to LLMs.
Or another way to frame it: what makes humans rise to the level where we can assume their subjectivity is real? What is the mechanism and reasoning behind that? We can no longer simply take human subjectivity on trust, because LLMs are now displaying outward behaviors that are indistinguishable from those of humans.
Also, stop relying on the musings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.
> Just because something can communicate in a way that you can interpret doesn't mean it is conscious
The phrase "the trap of anthropomorphism" betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other. It refuses to examine the underlying substrate, at which point we're not even speaking the same language anymore when discussing consciousness.
I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all; in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to, akin to mistakenly seeing eyes in random patterns in nature.
In the case of LLMs though, why does using a mathematical formula to predict the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
What makes you certain that human thought is more than pattern matching?
As I understand it, neuroscience hasn't come up with a clear explanation of thought, much less of a mind or consciousness. It seems to me complex pattern matching is as reasonable a cause of consciousness as anything else.
Replace the word chimpanzee with human in your own argument and realize that the same logic applies to other humans.
When another human smiles, you assume he is happy and not just baring his teeth at you, because that's what you do when you smile. You are "anthropomorphizing" other people. You fall for the same category error on a daily basis when you interact with people; it is not just chimpanzees.
> In the case of LLMs though, why does using a mathematical formula to predict the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour?
First, we don't know whether LLMs are conscious. People here are talking about the realistic possibility that they are.
Second, the algorithm is much more than a next-word predictor. The intelligence that goes into choosing the next word, such that it constructs arguments and answers that are correct, involves a lot more than simple prediction. We know this because the LLM regularly answers questions that require a deep understanding of the topic at hand. It cannot token-predict working code in my company's codebase without understanding the code.
Third, we do not know what drives human consciousness, but we do know it can be modeled by a very complex mathematical algorithm. We know this because we have fairly complete mathematical models for lower resolutions of reality: for example, we can model atoms mathematically. Brains are made of atoms, and because atoms can be modeled mathematically, human brains, and thus consciousness, can be modeled too.
The sheer complexity of the LLM is the problem: we cannot have a high-level understanding of it, because its behavior cannot be simplified into a few concepts.
What you are missing in your analysis is that this is the same reason we don't understand human brains. The foundational math already exists: we can model atoms, and since the brain is made of atoms, we should in principle be able to model the brain… but we can't, because it is too complex.
Our thinking is more foundational than anthropomorphization; the argument has moved far beyond that. You need to think deeper.
The key here is that we don't understand human brains and we don't understand LLMs. But since the output LLMs produce is very similar to the output produced by the human brain… and since, for no logical reason, we assume human brains are conscious… what is stopping us from assuming the LLM is conscious?
Why does a neuron, which is simply a cell that takes in chemicals and electricity and shits out neurotransmitters; why do 90 billion of those give rise to human intelligence? Neurons are just next-chemical-state machines. We can model individual ones on a computer. Yet 90 billion of them together make up a human brain and give rise to consciousness and intelligence. If you get stuck on the next-word-prediction part, and ignore the ridiculous scale involved in training a model, you miss the forest for the trees.
Well, I'm not saying that LLMs are conscious; I'm just saying that I'm not super-confident either way.
To flesh this out a bit more, I agree that ability to communicate is not enough (ELIZA probably didn't pass the bar, even if it did kinda pass a Turing test). But that's also not what gives me pause with LLMs. It's how much information processing they seem to be doing under the hood.
It's really hard to imagine how next-word prediction could lead to consciousness, but I find it almost as hard to see how evolution did. If we can't even detect whether something has subjective experience, then how can it have been selected for evolutionarily? The only possibility I see is that consciousness is a byproduct of some kinds of information-processing tasks.* And if it's something that emerges naturally, then the line starts to get very blurry.
*This sounds reductive, but I don't at all mean it that way.
Yeah, a while back I read an article with a quote along the lines of "what happened to weather prediction has happened to language." That's an oversimplification on both sides, but if you think LLMs are conscious, there's good reason to think that GFS is too.
> It seems to me that humans often fall into the trap of anthropomorphism.
That's true, but they also often fall into the trap of exceptionalism.
There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.
When a honey bee does its little dance to communicate to its sisters where the food's at, similar to Google Maps computing and communicating the shortest path to your destination, is the bee conscious?
Yeah, probably. At least a little bit.
Are 80,000 bees conscious, or more conscious? Well, they're definitely capable of some emergent behaviours that one bee alone can't achieve.
Why do you think it's definitely not?
I would caution against deriving too much of your philosophical worldview from a sci-fi book about posthuman vampires that has been deliberately engineered to make a philosophical point that is most certainly not a consensus view.
For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.
Except we don’t know how those words are strung together. Right? Why don’t you analyze it a little further and stop shutting down your own brain before coming to this superficial conclusion.
You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer, but how did it know the order and which words to use to make the answer correct? You don't actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there that humanity as a whole cannot understand, and this lack of understanding is exactly the same fundamental lack of understanding we have of how a monkey brain, a dog brain, or even a human brain works. We do not know whether humans, dogs, or monkeys are conscious… you only assume other living beings are conscious because you yourself experience it and assume it exists for others. We can't even define what it is, because consciousness is a loaded word, like spirituality.
This is not anthropomorphism; you attribute the bias wrongly. Instead it is a stranger phenomenon, among people like you, of mysteriously insisting on characterizing the LLM as a next-token predictor and nothing more, even though the token prediction clearly indicates greater intelligence at work.
The tl;dr is that we don't actually know, and that consciousness is a highly viable possibility given what we don't know, and given the assumption of consciousness we already grant to other living beings with an equivalent understanding of complex topics.