Comment by qnleigh
6 hours ago
It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.
I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with them a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.
I think that was the moment I stopped being sure about anything related to this question.
Why do you think stringing words together is any more a sign of consciousness than Google Maps finding the best available route to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean it is conscious.
A large part of the problem is what you consider consciousness.
If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.
But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.
So the bigger trap is to assume that we know what causes a subjective experience, and what does not.
None of us even know if a subjective experience exists for more than a single entity.
But the second problem is that it is not clear at all whether that subjective experience in any way matters.
Unless our brains exceed the Turing computable (and we have no evidence that is even possible), then whatever causes the subjective experience is either itself within the Turing computable, or it cannot in any way influence our actions.
Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems; the best and closest measure we have is whether or not they appear conscious when we communicate with them.
> If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one.
Wittgenstein kinda blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's an issue with the doubting experiment more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, then pretty well everything else about reality would be in doubt and need constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others not possessing subjective experiences of some kind really arises, except for the philosophical exercise of doubting and requiring epistemological proofs, which can't ever exist in the face of a relentless and unconvincable doubter. Heidegger talks about pretty well the same idea as Wittgenstein.
> Just because something can communicate in a way that you can interpret, doesnt mean something is conscious
The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.
I think these ideas are orthogonal. I do not think that consciousness is defined by human experience at all - in fact, I think humans do a profound disservice to animals in our current lack of appreciation for their clear displays of consciousness.
That said, if a chimpanzee bares its teeth at me, I could interpret that as a smile when in fact it's a threatening gesture. It's this misinterpretation that I am trying to get at: the overlaying of my human experiences onto something which is not human. We fall for this over and over again, likely because we are hard-wired to - akin to mistakenly seeing eyes in random patterns in nature.
In the case of LLMs, though, why does using a mathematical formula for predicting the next word give any more credence to consciousness than an algorithm which finds a nearest neighbour? To me, it's humans falling foul of false pattern matching in the pursuit of understanding.
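For what it's worth, the nearest-neighbour comparison is easy to make concrete: such a search is nothing but deterministic arithmetic over stored numbers, with no interpretation involved. A toy sketch (the points and query are made up for illustration):

```python
# Toy nearest-neighbour search: pure deterministic arithmetic.
# The stored points and the query are made-up numbers.

def nearest(points, query):
    # squared Euclidean distance; return the closest stored point
    return min(points, key=lambda p: sum((a - b) ** 2 for a, b in zip(p, query)))

points = [(0.0, 0.0), (3.0, 4.0), (1.0, 1.0)]
print(nearest(points, (2.0, 2.0)))  # -> (1.0, 1.0)
```

Whether a next-token predictor is "more than" this kind of computation is exactly the question under dispute; the code only shows what the baseline looks like.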
Yeah, a while back I read an article with a quote something like "what happened to weather prediction has happened to language." That's an oversimplification on both sides, but if you think LLMs are conscious, there's good reason to think the GFS is too.
> It seems to me that humans often fall into the trap of anthropomorphism.
That's true, but they also often fall into the trap of exceptionalism.
There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.
When a honey bee does its little dance to communicate to its sisters where the food's at, similarly to Google Maps computing and communicating the shortest path to your destination, is the bee conscious?
Yeah, probably. At least a little bit.
Are 80,000 bees conscious, or more conscious? Well, they're definitely capable of some emergent behaviours that one bee alone can't achieve.
Why do you think it's definitely not?
I would caution against deriving too much of your philosophical worldview from a scifi book about posthuman vampires that has been deliberately engineered to make a philosophical point that is most certainly not a consensus.
For alternative viewpoints: Daniel Dennett considered philosophical zombies to be logically incoherent. Douglas Hofstadter similarly holds that "meaning" is just another word for isomorphism, and that a thing is a duck exactly to the extent that it walks and quacks like one. Alan Turing advocated empiricism when evaluating unknown intelligence. These are smart cookies.
Except we don’t know how those words are strung together. Right? Why don’t you analyze it a little further instead of shutting down your own brain before coming to this superficial conclusion.
You ask the LLM a complex question and it gives you a correct answer. Yes, it has to string words together to answer your question, but how did it know the order and which words to use in order to make the answer correct? You don’t actually know. No one does, and it is in that unknown space that we suspect consciousness may lie. Something is there that humanity as a whole cannot understand, and this lack of understanding is exactly the same fundamental lack of understanding we have of how a monkey brain, a dog brain, or even a human brain works.

We do not know whether humans, dogs, or monkeys are conscious; you only assume other living beings are conscious because you yourself experience it and simply assume it exists for others. We can’t even define what it is, because consciousness is a loaded word, like spirituality.
This is not anthropomorphism. You attribute the bias wrongly. Instead it is a stranger phenomenon among people like you, who can mysteriously characterize the LLM only as a next-token predictor and nothing beyond that, even though the token prediction clearly indicates greater intelligence at work.
The tl;dr is that we don’t actually know, and that consciousness is a highly viable possibility given what we don’t know and given the assumptions of consciousness we make about other living beings with an equivalent understanding of complex topics.
The mechanistic view gets weirder if you imagine all the states of the system being written down on a giant tape. Not just the "current" state but all the past and future states. What makes this tape not alive or conscious?
You could push the analogy even further and run the thought experiment where every forward pass through an LLM is in principle done on pen and paper, distributed throughout all humanity. Sure, it would take a long time, but the output would be exactly the same; we’ve just shifted the implementation from a GPU to scribbling things down on paper. If you want to assert that LLMs are “conscious”, then you would have to likewise say this pen-and-paper implementation is conscious, unless you want to say a certain clock speed is a necessary condition for consciousness.
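The pen-and-paper point is easy to make concrete: a forward pass is nothing but multiply-adds and simple nonlinearities, each of which a person could carry out by hand. A toy sketch (the weights and inputs are made-up numbers, not a real model):

```python
# Toy two-layer "forward pass": every step is hand-doable arithmetic.
# Weights and inputs are made up for illustration.

def matvec(W, x):
    # plain multiply-and-add, the kind of arithmetic a person can do on paper
    return [sum(w * xi for w, xi in zip(row, x)) for row in W]

def relu(v):
    return [max(0.0, a) for a in v]

W1 = [[0.5, -1.0], [1.5, 2.0]]   # made-up first-layer weights
W2 = [[1.0, 1.0]]                # made-up second-layer weights

x = [2.0, 1.0]
h = relu(matvec(W1, x))  # hidden activations: [0.0, 5.0]
y = matvec(W2, h)        # output: [5.0]
print(y)
```

A real LLM forward pass differs only in scale, which is exactly why the thought experiment works: distributing billions of these multiply-adds across people with pencils changes the speed, not the result.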
When we get complete neuronal connection maps (which we are getting close to for mice; humans will be done within a decade or two), we could in principle simulate a brain on a computer, or on paper too. Unless you assert something magical like a "soul", these connections are what determine human consciousness. It is one thing to argue that LLMs don't resemble brains, and that if they could be "conscious" they wouldn't be conscious in the sense we are; but asserting that anything understandable can't be conscious won't age well.
the problem with this is I'd strongly argue that you could do this pen and paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case
the notion of consciousness being an experience that other animals/humans share is entirely faith-based.
the only person with evidence of one's consciousness is the person claiming they're conscious.
> the problem with this is I'd strongly argue that you could do this pen and paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case.
You're basing your premise on a lack of understanding[1], the GP's premise is based on an exact understanding[2].
You don't see the difference between your premise and the GP's premise?
-----------------
[1] "We don't know how brains actually come up with the things they come up with, like consciousness"; IOW, we don't know what the secret ingredient is, or even if there is one.
[2] "We can mechanically do the following steps using 18th-century tech and come up with the same result as the LLM"; IOW, every ingredient in here is known to us.
We know the brain can be modeled by math (and therefore thought can be written down on paper).
We know because we have mathematical models for atoms, and we know the brain is made of atoms; therefore the brain is, in principle, a mathematically describable system of interconnected atoms forming a specific structure.
Thus every facet of macro (keyword) reality should be able to be written on paper and calculated. That goes for everything: from the emotions you feel to the internal forward pass of an LLM.
Can computers simulate all the laws, even theoretically? We don't have a final theory unifying all the physics frameworks, so I'm not sure that claim can be made. Ex: the Standard Model and gravity.
I think it is perhaps too easy to dismiss the possibility that Dawkins is way less scientific than he pretends to be, and has possibly acquired a minor form of AI psychosis.
Likely. I'm convinced 'AI psychosis' is a developmental phase that everyone is subject to; it just gets manifested in ways unique to each person's character. I think part of it is the result of an internal struggle that AI evokes, which leads to a new form of humbling no one is exempt from.
Consciousness itself has always seemed to me a silly concept. My whole life I have not come across a simple definition, yet many sophists pin their existence on it.
HN is full of experts who know despite lack of evidence. It’s the strangest thing because their confidence on this topic is completely authoritative despite total ignorance.
But that’s not science, right? Dawkins and his ilk cling to science as a cure for religion, yet if we are to believe that our absence of understanding of consciousness means computers can be conscious, then our absence of understanding of the universe means god may exist.
“Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”