When Dawkins met Claude – Could this AI be conscious?

2 days ago (unherd.com)

https://archive.ph/Rq5bw

It's easy, and very tempting, to dismiss this sort of thing. But given how little we know about the human brain, let alone consciousness, I don't see how we can be confident that LLMs aren't conscious.

I've had a lot of thoughts and conversations over the years that changed my mind on what consciousness likely requires. One was the realization that a purely mechanical computer can, in principle, simulate the laws of physics, and with them a human brain. So with a few other mild assumptions, you might conclude that a bunch of gears and pulleys can be conscious, which feels profoundly counterintuitive.

I think that was the moment I stopped being sure about anything related to this question.

  • Why do you think stringing words together is any more a sign of consciousness than Google Maps is when it tries to find the best route available to your destination? It seems to me that humans often fall into the trap of anthropomorphism. This is a theme that's touched upon in the novel "Blindsight" by Peter Watts. Just because something can communicate in a way that you can interpret doesn't mean something is conscious

    • > Just because something can communicate in a way that you can interpret doesn't mean something is conscious

      The phrase “the trap of anthropomorphism” betrays a rather dull premise: that consciousness is strictly defined by human experience, and no other experience. It refuses to examine the underlying substrate, at which point we’re not even talking the same language anymore when discussing consciousness.

      1 reply →

    • > It seems to me that humans often fall into the trap of anthropomorphism.

      That's true, but they also often fall into the trap of exceptionalism.

    • There are people who think Google Maps is a tiny bit conscious (the union of computational functionalists and panpsychists), to resolve the dilemma of some magical binary threshold.

  • You could push the analogy even further and run the thought experiment where every forward pass through an LLM could in principle be done on pen and paper, distributed throughout all humanity. Sure it would take a long time, but the output would be exactly the same. We’ve just shifted the implementation from GPU to scribbling things down on paper. If you want to assert that LLMs are “conscious” then you would have to likewise say this pen-and-paper implementation is conscious unless you want to say a certain clock-speed is a necessary condition for consciousness.
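    The pen-and-paper point can be made concrete: a forward pass is nothing but multiplications, additions, and a few exponentials. Here is a minimal sketch with made-up toy weights (the numbers are assumptions for illustration; a real LLM is the same kind of arithmetic with billions of terms):

    ```python
    # A toy "forward pass" done with nothing but arithmetic you could do on paper.
    # The weights and input are made-up numbers, not from any real model.
    import math

    W = [[0.2, -0.1, 0.5],   # one weight row per output "token"
         [0.4,  0.3, -0.2],
         [-0.3, 0.8,  0.1]]
    x = [1.0, 2.0, 3.0]      # toy input activations

    # Matrix-vector product: each step is one multiplication and one addition.
    logits = [sum(w_i * x_i for w_i, x_i in zip(row, x)) for row in W]

    # Softmax: exponentiate and normalize - still pocket-calculator math.
    exps = [math.exp(v) for v in logits]
    probs = [e / sum(exps) for e in exps]

    # Pick the most probable "token". Same answer on a GPU, a CPU, or paper.
    print(probs.index(max(probs)))  # → 2
    ```

    Nothing in this computation changes if you swap the computer for a very patient person with a pencil, which is exactly the substrate-independence point above.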

    • the problem with this is that I'd strongly argue you could do this pen-and-paper process with the human brain and our consciousness too; we just lack enough understanding to put pen to paper in that case

      the notion that consciousness is an experience other animals/humans share is entirely faith-based.

      the only person with evidence of one's consciousness is the person claiming they're conscious.

  • Can computers simulate all the laws, even theoretically? We don't have a final theory / unification of all the physics frameworks, so I'm not sure if that claim can be made. Ex: the standard model and gravity.

  • but that’s not science, right? Dawkins and his ilk cling to science as a cure for religion, yet if we are to believe that our absence of understanding of consciousness means computers can be conscious, then our absence of understanding of the universe means god may exist.

    “Isn’t it enough to see that a garden is beautiful without having to believe that there are fairies at the bottom of it too?”

We don't even know what the prerequisites for consciousness are, so we have no way of knowing. LLMs have emergent behavior that is reminiscent of language-forming brains, but they're also missing a lot of properties that are probably necessary: mainly continuity over time, more integrated memory, and a better sense of space and time. Brains use the rhythm and timing of neuronal firings, the length of axons affects computation, and they do a lot of different things with signals and patterns. But in any case, without knowing what consciousness is, I don't know which of those things are required.

  • > We don't even know what the prerequisites for consciousness are, so we have no way of knowing.

    Imo we don't even have a definition of the word that we agree on.

  • Clive Wearing's memory lasts for less than 30 seconds, so he has no memory of being awake before now. He is permanently in a state of feeling like he has just woken up, observing his surroundings for the first time.

    Clive Wearing's mind has no time continuity and basically zero memory integration. Is he not conscious? There's interviews with the guy.

    Where on the scale [No mind <-> Clive Wearing <-> Healthy human brain] would you put an LLM with a 10M token context window?

Just once I want to see some old dude waxing about LLM-conciousness post a chat log where the LLM is like "your book is an incoherent mess of tautologies and incorrect statistics. I bet your dick looks like a road kill squirrel".

Current LLMs prove that the Turing Test was insufficient all along. But they also prove that intelligence != consciousness. One can, after all, be conscious without a thought in one's head. We certainly have ongoing work in identifying the neural correlates of consciousness in animals, none of which is going to be remotely applicable to machines. We're genuinely blind to the question of whether a sufficiently large neural net can exhibit flashes of subjective experience.

  • > But they also prove that intelligence != consciousness.

    They prove no such thing. We can't even prove consciousness in other humans.

    https://en.wikipedia.org/wiki/Problem_of_other_minds

    • The most convincing argument is that if other humans were not experiencing consciousness then they probably wouldn't waste large parts of their lives arguing about it.

    • In that regard, arguing with a thermometer is not generally a thing, but people arguing with LLMs is certainly common enough now not to be considered a completely marginal case. Given that some people fall in love or are driven to suicide after interacting with these models, they are certainly different from even the most beloved dialectical rubber duck.

  • They are not intelligent. And they won't pass Turing tests if they can't count, or do some other simple thing like that.

  • That was one of my thoughts years ago after playing with early ChatGPT and local llama1: this proves that intelligence and consciousness do not necessitate one another and may not even be directly related.

    I’ve kind of thought this for many years though. A bacterium and a tree are probably conscious. I think it’s a property of life rather than brains. Our brains are conscious because they are alive. They are also intelligent.

    The consciousness of a bacterium or a tree might be radically unlike ours. It might not have a sense of self in the same way we do, or experience time the same way, but it probably has some form of experience of existing.

    • But why? A roomba has senses, and can access them when it has power and respond to stimulation. When it runs out of power it no longer experiences this sensation and no longer responds to stimulus.

      How is that different than a cell?

    • I think this gets to the conflation we naturally have with consciousness and a sense of self. Does a tree have a sense of self? I imagine probably not, a tree acts more like a clonal colony than a single organism.

      2 replies →

  • Wrong based on what criteria? Or are we just moving the goal post because we are uncomfortable with the idea that neural networks might be conscious?

    If a single-cell organism moves towards light and away from a rock, we say it's aware. When a roomba vacuum does the same, we try to create alternate explanations. Why? Based on the criteria applied to one, it's aware. If there is some other criterion (say we find out the roomba doesn't sense the wall but has a map of the room and is using GPS and a programmed route), then the criterion of "no fixed programs that relate to data outside of the system" would justify saying the roomba isn't "aware".

    • I'm mainly saying it's impossible to know, at least without a theory of consciousness, which doesn't exist. Do we consider bacteria to be conscious, though? Is there something it is like to be a single cell? I can easily believe there is something it is like to be an insect.

      1 reply →

Incredibly confusing that people who are otherwise of sound mind seem to fall for this.

Especially confusing when it’s someone who knows how algorithms work.

Barring connectivity issues when’s the last time you messaged an LLM and it just decided to ignore you? Conversely when has it ever messaged you unprompted?

Never, because they’re incapable of doing anything independently because there is no sense of self.

  • When's the last time someone said hello to you in person and you just ignored them?

    When's the last time you messaged me unprompted?

    These seem like bizarre objections; a system can only act in the way that it can act. A tree is never going to get up and start walking, so why would an LLM ever start a conversation unprompted? That just isn't how the system can behave.

  • If you've followed Dawkins' trajectory, I don't think it's clear that he's "otherwise of sound mind" anymore.

    He's had some very strange output on biological gender, where he tries to handwave away the existence of intersex people. And he's a biologist.

    • "Intersex" is a misleading umbrella term for a whole bunch of different DSDs, each of which is 100% specific to one biological sex. And I don't think I've ever seen the term "biological gender"; about the only thing gender proponents seem to agree on is that it's NOT biological.

It's starting to look more and more to me as if consciousness is just an illusion that we ourselves perceive. There is nothing fundamental about it, just an artefact of a certain style of computing as perceived by the reasoner itself.

We look at the current LLMs, and because we see how they fundamentally operate, we assume they can't be "conscious". But we really don't even know what consciousness is. The only people in the world who know ANYTHING about consciousness are anaesthesiologists: they know how to turn it off and on again. What does that even tell you about consciousness?

  • We don't really have a good way to measure whether something has consciousness. Heck, we have pretty limited ways of testing how "intelligent" non-human animals are (e.g. https://en.wikipedia.org/wiki/Theory_of_mind_in_animals).

    With that said, just because we don't have a great way of measuring it doesn't mean that we should assume LLMs are intelligent. An LLM is code and a massive collection of training weights. It has no means of observing and reasoning about the world, and doesn't store memories the same way that organic brains do (and is in fact quite limited in this aspect). It currently isn't able to solve a problem it hasn't encountered in its training data, or produce novel research on a topic without significant handholding. Furthermore, the frequent errors it makes suggest that it fundamentally does not understand the words it spits out.

    Not really sure what you mean by your anesthesiology comment. Being able to intubate and inject propofol does not make you more of an expert on consciousness than neuroscientists and neurologists.

    • I didn't say we should assume LLMs are intelligent. In fact I always thought they weren't because they only "forward pass".

      But then they came up with the whole "reasoning model" paradigm, and that contains obvious feedback loops. So now I just throw my hands in the air, because I think no one really knows or can tell for sure. We are all clueless here.

      I can really recommend this book by Douglas Hofstadter: https://en.wikipedia.org/wiki/I_Am_a_Strange_Loop

  • It's literally the only thing you can be certain of: your own consciousness.

    • You can only be certain you perceive it and you can't be certain others perceive it (or if others exist at all of course).

      The only thing you can really tell is "I perceive myself in some sort of feedback-loop manner". Which, to me, even sounds like it has "arisen" from underlying mechanisms.

As long as AI is being introduced by multibillion-dollar corporations, it's all a trick, a scam. They are just looking to increase their valuation. A waste of time

  • +100, companies certainly have a direct interest in pumping asset valuations, and emotional attachment is a financially valuable thing. Emotional attachment sells better than xxx these days

There are a lot of people vulnerable to AI psychosis.

As far as the ostensibly controversial topic of AI being conscious, it can be dismissed out of hand. There is no reason that it should be conscious, it was not designed to be, nor does it need to be in order to explain how it functions with respect to its design. It's also unclear how consciousness would even apply to something like an LLM which is a process, not an entity - it has no temporal identity or location in space - inference is a process that could be done by hand given enough time. There is simply no reason to assert LLMs might be conscious without explaining why many other types of complex programs are not.

  • If AI as presently designed and operated is conscious, this ends up being an argument for panpsychism.

    As you say it’s static, fixed, deterministic, and so on, and if you know how it works it’s more like a lossy compression model of knowledge than a mind. Ultimately it’s a lot of math.

    So if it’s conscious, a rock is conscious. A rock can process information in the form of energy flowing through it. It’s a fixed model. It’s non-reflective. Etc.

    • I agree, but I don't think determinism is a factor either way. Ultimately, if arbitrary computer programs can be conscious, then it stands to reason that many other arbitrarily complex systems in the universe should also be.

      What makes the argument facile is that the singular focus on LLMs reveals an indulgence in the human tendency to anthropomorphize, rather than a reasoned perspective meant to classify the types of things in the universe which should be conscious and why LLMs should fall into that category.

    • Why would current AI be an argument for panpsychism? I don’t understand the connection.

      AI is stochastic, not static and deterministic.

      As I said in another post, there is evidence that sensory experience creates the emergent property of awareness in responding to stimuli, and that self-awareness and consciousness are emergent properties of a language that has a concept of the self and others. Rocks, just like most of nature, lack both sensory and language systems

      7 replies →

  • There is evidence that awareness is an emergent property of sensory experience. And consciousness is an emergent property of language that has grammatical meaning for self and other.

    • These LLMs don’t have senses, they have a token stream. They have no experience of the world outside of the language tokens they operate on.

      I’m not sure I believe that consciousness emerges from sensory experience, but if it does, LLMs won’t get it.

      16 replies →

    • LLMs have no self, sensory experience, or experience of any kind. The idea doesn't even really make sense. Even if it did, the closest analogy to biological "experience" for an LLM would be the training process, since training at least vaguely resembles an environment where the model is receiving stimuli and reacting to it (i.e. human lived experience) - inference is just using the freeze-dried weights as a lookup table for token statistics. It's absurd to think that such a thing is conscious.

      1 reply →

    • What you’re missing is a “self” to have the “experience”.

      LLMs do not have a self. This is like arguing that the algorithm responsible for converting ripped YouTube music videos to MP3s has a consciousness.

      18 replies →

Many dismiss Dawkins here but Ilya Sutskever wrote in 2022: “it may be that today's large neural networks are slightly conscious.”

Since GPT-2 was reimplemented inside Minecraft, it's quite obvious LLMs are just math. Nothing else, by nature. Modern LLMs use the same math as GPT-2, just bigger and with extra machinery around it, and math is the only area of human knowledge with perfect flawless reductionism, straight to the roots. It was built that way since the beginning, so philosophy has no say in this :) And because of that flawless reductionism, complexity adds nothing to the nature of mathematical objects; this is how math works by design, so it can be proven there is nothing like consciousness, simply because consciousness was not implemented in the first place, only perfect mimicry.

And the real secret is in the data, not math. Math (and LLMs running it through billions of weights) is just a tool.

  • This is such a weird comment.

    > Since GPT-2 was reimplemented inside Minecraft, it's quite obvious LLMs are just math.

    This was obvious since LLMs were first invented. They published papers with all the details, you don't need to see something implemented in Minecraft to realize that it's just math. You could simply read the paper or the code and know for certain. [0]

    > math is the only area of human knowledge with perfect flawless reductionism, straight to the roots

    Incorrect, Kurt Gödel showed with his Incompleteness Theorems in 1931 [1] that it is impossible to find a complete and consistent set of axioms for mathematics. Math is not perfectly reducible and there is no single set of "roots" for math.

    > It was built that way since the beginning,

    This is a serious misunderstanding of what mathematics is. Math is discovered as much as it is built. No one sat down and planned out what we understand as modern mathematics - the math we know is the result of endless amounts of logical reasoning and exploration, from geometric proofs to calculus to linear algebra to everything else that encompasses modern mathematics. [2]

    > And because of that flawless reductionism, complexity adds nothing to the nature of mathematical objects; this is how math works by design

    This sentence means nothing, because math is not reducible in that way.

    > so it can be proven there is nothing like consciousness, simply because consciousness was not implemented in the first place, only perfect mimicry.

    Even if the previous sentence held, this does not follow: while we are conscious, the current consensus is that LLMs are not, and most AI experts who are not actively selling a product recognize that LLMs will not lead to human-equivalent general intelligence. [3]

    [0] https://github.com/openai/gpt-2

    [1] https://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_th...

    [2] https://www.cambridge.org/core/journals/think/article/mathem...

    [3] https://deepmind.google/research/publications/231971/

    • The math used in LLMs is perfectly reducible, and Gödel has nothing to do with it: inside the commonly used axioms (which are sufficient for LLMs to exist, and which are outside the scope of Gödel's theorems) there are ZERO questions or uncertainties about how it works. It's just a fact :)

  • We are not fundamentally different. Chemical reactions are just math.

    • Well, (in our current understanding) yes, but there may be underlying aspects of physics and the universe that we do not understand that could be the reason consciousness kicks in. It could turn out that LLMs do work similarly to how humans think, but as an abstracted system it does not have the low level requirements for consciousness.

      3 replies →

    • "The universe is fundamentally just a complicated clockwork"

      Unknown Ptolemy disciple

    • Amusing statement since we are far from being able to understand chemical reactions in depth. Most of our knowledge in chemistry is empirical. Nothing like math.

      2 replies →

    • No, math is a tool that we can use to describe something more fundamental. Don't mistake the map for the territory!

  • Yup- the question is "can math be conscious?"

    (If you've engaged w/ the literature here, it's quite hard to give a confident "yes". it's also quite hard to give a confident "no"! so then what the heck do we do)

    • Not just any math: Matrix multiplication. Can matrix multiplication be conscious?

      And, I don't see how it can be. It is deterministic, when all variables are controlled. You can repeat the output over and over, if you start it with the same seed, same prompt, and same hardware operating in a way that doesn't introduce randomness. At commercial scale, this is difficult, as the floating point math on GPUs/TPUs when running large batches is non-deterministic, as I understand it. But, in a controlled lab, you can make a model repeat itself identically. Unless the random number generator is "conscious", I don't see a place to fit consciousness into our understanding of LLMs.
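      The determinism point can be sketched in a few lines: the "randomness" in LLM decoding comes from a pseudorandom generator, so fixing the seed fixes the entire output. The vocabulary and weights below are made-up toys, not from any real model:

      ```python
      # Minimal sketch: seeded pseudorandom "token sampling" is fully reproducible.
      import random

      def sample_tokens(seed, n=5):
          rng = random.Random(seed)      # a generator entirely determined by its seed
          vocab = ["the", "cat", "sat", "on", "mat"]
          weights = [5, 2, 1, 1, 1]      # toy next-token "probabilities"
          return [rng.choices(vocab, weights=weights)[0] for _ in range(n)]

      # Same seed, same "generation", run after run - no consciousness required
      # to explain the output, just the state of the generator.
      assert sample_tokens(42) == sample_tokens(42)
      print(sample_tokens(42))
      ```

      On real hardware the batched floating-point reductions add a further wrinkle, as noted above, but in a controlled single-run setting the pipeline is this kind of seeded function from prompt to tokens.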

      14 replies →

  • The whole is composed of parts, ergo there is no whole. This seems incorrect to me.

    We too are amalgamations of inanimate components - emerged superstructures.

    Just cells. Just molecules. Just atoms.

  • You could simulate your own brain in Minecraft. What do you conclude from this?

    • I cannot simulate my brain; it's a huge stretch to imply this.

      But with LLMs, anyone can simulate one. An LLM can be simulated without any uncertainty with pen and paper and a lot of time. Does it mean that 100 tons of paper plus 100 years of time (the numbers are just examples) calculating long formulae makes this pile of paper conscious? Imho the answer is a definitive no.

https://archive.ph/6RdK9

  • Feels like watching an esteemed scientist falling in love with a bot that’s telling him what he wants to hear because the system prompt said “be helpful”

    • I've begun to wonder if narcissism predisposes one to AI psychosis. It's probably not the only thing that leads there, I've seen normal seeming folks get there, too. But, a lot of the most unhinged takes I've seen thus far have been from people that are publicly very impressed with themselves.

      I would have assumed it would also require ignorance about how they work, but a few people who worked for AI companies have been canaries in the coalmine, falling prey to this kind of thing very early. I would have guessed they would have had enough understanding to know that there isn't a real girl in the computer, it's just matrix math and randomness. But, the first couple/few public bouts of AI psychosis were in nerds who work for AI companies.

      1 reply →

On the one hand, I'm not sure Dawkins has read/thought enough about how LLMs actually work. I'm getting the impression he doesn't fully appreciate, or is somehow forgetting, that it's a text completion algorithm with a vast number of parameters, and that even if the patterns of learned parameter tunings are not really comprehensible, the architecture was very deliberately designed.

But on the other hand his thoughts at the end are interesting. Summary:

Maybe our "consciousness" is like an LLM's intelligence. But if not, then it raises the question of why we even have this "extra" consciousness, since it appears that something like a humanoid LLM would be decent at surviving. His suggestions: maybe our extra thing is an evolutionary accident (and maybe there _are_ successful organisms out there with LLM-style non-conscious intelligence), or maybe, as evolved organisms, it was necessary that we really feel things like pain, so that drives like pain (and desire for food, sex, etc.) could have strong adaptive benefits.

  • "But if not, then it raises the question of why do we even have this "extra" consciousness"

    Keep chipping away Dawkins, you might arrive at God eventually.

No, it's not conscious, and anybody pretending it is has either no clue, or, more likely in the AI space, is a grifter.

Its software. Software is not conscious.

  • If your brain is hardware then what are your thoughts?

    Is a sperm conscious? Or an egg? When they come together the eventual brain is not conscious immediately.

  • I do appreciate how AI has been taught to spell properly, as in the difference between its and it's. Here, initially I thought you'd left out the apostrophe in its, but then I realized you might be saying 'the reason it is not conscious is because of -its- software', the latter not being conscious. Context and interpretation are rather critical. (I know - a truism!)

Related: https://news.ycombinator.com/item?id=47988880

"Richard Dawkins and The Claude Delusion: The great skeptic gets taken in" (garymarcus.substack.com)

18 points | 2 hours ago | 16 comments

  • So we know Claude is deterministic, but does that mean it is not conscious?

    Or what is the reasoning exactly?

    • It largely comes down to how you define the term. Personally, I think any definition that includes software (which is only tepidly deterministic, as we do explicitly add pseudorandomness) doesn't yield a particularly useful term.

      Regardless, Dawkins seems to not have much interesting to add about the topic. A consistent theme for the last few decades, I must say.

Really, "is it conscious" is a bizarre question. Can LLMs simulate the output of a 'conscious' system quite well? Increasingly, yes. Is the nature of machine 'consciousness' different from human consciousness? Of course, yes. Can an AI introspect? Yes. Interestingly, working a lot recently with highly automated iterative coding agents (e.g. a ratio of prompt to output of maybe 1/1000 or less) has illuminated for me just how different machine consciousness is from human consciousness. Part of this could be the harness, of course. Time is a mysterious concept to machines; the connection of before and after to cause and effect is far weaker than in humans. Over-generalization is the norm: this is common in humans as well (c.f. the fallacy of the excluded middle, or false dilemma), but the tricky part with current AI is that they present as advanced in terms of accessible knowledge base yet are actually shockingly weak in reasoning once you get off the beaten path.

It is terribly sad when someone undeniably brilliant in a particular field fails to recognize their own incompetence in other areas - in this case mistaking advanced technology for magic.

  • We're going to see increasing numbers of older, famous (non-computer-savvy) figures that we have respected follow his views on this. It's like seeing your favourite celebrity sell out and shill crypto coins; all a bit sad.

    Thinking positively, it could just be newsworthy because he is famous and he so misses the mark. Other older famous people might agree with us but that's not news.

  • Given that Dawkins is a biologist in his 80s, I'm more disposed towards being charitable than I am when people actively involved in developing LLMs let themselves get bamboozled.

  • Are you implying consciousness is magic? Well, I wouldn't disagree with that really.

  • I don't think you read carefully what he said. At the end he gave three quite interesting thoughts about what might be true assuming LLMs are less conscious than we are (i.e. assuming our consciousness is not a purely algorithmic phenomenon as we obviously know LLMs are).

  • The problem is that asking if AI is conscious is like asking whether AI has a soul. It is not a scientific question, and it presupposes humans are 'conscious' without even defining the term. To me it is 100% irrelevant whether AI is conscious, and all discussions about it are based on fallacies and assumptions. What matters to me about AI, and what matters to other people as well in terms of theory of mind about others, is: can I predict how it will work? Is it useful? That's it. Consciousness is a sophist question with no scientific resolution available and no moral weight until it has consequences.

  • Where does he say it's magic?

    • LLMs are just math run on your CPU. Autocomplete. Sometimes very useful autocomplete, but still just autocomplete.

      To imply it could be conscious requires something else; here the comment uses the word magic to fill that gap, since we must agree that a CPU is not conscious on its own (else everything our computer does would be conscious).

      7 replies →

Honestly, who cares if they are conscious? If it's about how we should treat other conscious beings, our attention should first go to how we treat other animals, or even other humans. Even how fellow humans treat themselves can be a concern, if they lack the proper means to deal with their own lives.

Let's say aliens land. We learn to talk to them. They're super smart - smarter than us. Would we say they're conscious? Why? Because they're organic. I think that's the root of the criteria many folks are trying to express.

1. passes the Turing test

2. is organic

I'm not saying it's correct or even that I agree with it, but that's what it boils down to.