Richard Dawkins and The Claude Delusion: The great skeptic gets taken in

19 hours ago (garymarcus.substack.com)

> Claude is akin to a counterfeit person. Dawkins should never have glorified such a thing.

I find that this sentence diminishes the author's argument. I'm not going to claim an LLM is or is not conscious, but there's shaky ground here: either you say "consciousness is a product of the kind of biology that humans have" and dismiss the lack of lived experience or internal states as mimicry (as the author does), OR you say "what LLMs are doing is a counterfeit," which suggests a real output produced through different means.

If I have a counterfeit Rolex, nobody denies that the watch can tell time. A counterfeit human isn't a human and isn't made by nature, but the implication is that it's effectively doing the same thing. That's a different claim than the one the author starts out making.

I think it's important that when you talk about consciousness, you pin down exactly what you mean. Does it require the entity to have a mechanism for experiencing emotion? For exhibiting reasoning ability? For exhibiting characteristics of common sense? I don't think it's a useful definition to say, flatly, "does the things an adult human does through the same mechanisms".

  • I think everyone should avoid talking about consciousness unless someone in the conversation provides a clear definition of it. If no one provides a definition, we can replace the word “consciousness” with the word “spirit”, and basically nothing about the conversation would change. Without a definition, every conversation about AI consciousness devolves into one camp saying that humans are special and consciousness is unique to them, and another camp that waves their hands about consciousness “duck typing”.

    For example, we could define consciousness as the ability to communicate claimed internal states. Perhaps a complexity measure over those communications could give us a metric of consciousness.

    We could define consciousness as the ability to respond to stimuli in complex ways. This would make a supermarket’s automatic doors slightly conscious.
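    To make the automatic-door point concrete, here is a throwaway sketch of how cheaply such an operational definition can be satisfied (the class and the "definition test" are invented purely for illustration, not any real proposal):

```python
# A hypothetical operational definition: "conscious" = responds to stimuli
# in a way that changes a reportable internal state. An automatic door
# satisfies it trivially.

class AutomaticDoor:
    def __init__(self):
        self.is_open = False  # the door's entire "internal state"

    def respond(self, stimulus):
        # Respond to a stimulus by updating internal state.
        if stimulus == "motion":
            self.is_open = True
        elif stimulus == "timeout":
            self.is_open = False
        return self.is_open

    def report_state(self):
        # "Communicate a claimed internal state."
        return "open" if self.is_open else "closed"

def satisfies_definition(entity):
    # The operational test: state is reportable and changes with stimuli.
    before = entity.report_state()
    entity.respond("motion")
    after = entity.report_state()
    return before != after

door = AutomaticDoor()
print(satisfies_definition(door))  # True: the door passes the "definition"
```

    Anything with a mutable flag passes, which is exactly why the definition has to be on the table before the word does any work.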

    Personally, I don’t really care how it is defined in any particular conversation, so long as it is defined. Otherwise we’re just flailing at each other in the dark.

    • >we could define consciousness

      We cannot. And our definitions mean nothing to reality. We can define something however we like; it means nothing to how the thing behaves. Ultimately, as I said in a previous comment, we have no choice but to agree or not. Consciousness cannot be tested in any way that makes it absolutely certain, because it's a logical issue. We cannot even be certain that anyone but ourselves is conscious. We all just sort of agree that everyone else must be.

      The issue with defining it is that someone could potentially find a way to make a machine that mimics it but works nothing like a consciousness-generating brain. So if it meets our definition criteria, is that conscious? Where's the certainty? How do we prove it is?

      Anything we could ever dare call conscious must work exactly like a human brain does. Any deviation from that loses certainty about whether it has consciousness.

      And let's not ignore the huge incentive corporations would have in meeting your definition with something that has nothing to do with consciousness, just so they can profit off it.


    • If you're willing to reduce metaphysical questions to definitions (which I'm basically on board with), then the stakes aren't that high in the first place, so we should carry on using "consciousness" in its everyday sense; there's no pressing reason to avoid it.


    • Consciousness is not definable because we don't know enough about it. That doesn't mean it can't be discussed; we didn't have a good definition of "number" until the 1800s. That didn't make arithmetic meaningless because people had an understanding of the concept. The lack of formal definition pointed to a gap in logic that took thousands of years to be filled. Likewise there is a gap in experimental neuroscience that will take many decades to be filled.

      FWIW as someone in the "first camp" my real claim is that many animals are meaningfully conscious, including all birds and mammals, and no claims of LLM consciousness are even bothering to reconcile with this. It is extremely frustrating that there are essentially two ideas of consciousness floating around:

      - the scientifically interesting one: a vague collection of cognitive abilities and behaviors found in all vertebrates, especially refined in birds and mammals

      - the sociologically interesting one: saying "cogito ergo sum" in a self-important tone

      Claude has the second type in spades, no doubt. The first is totally absent. And I have a good dismissal of the second type of consciousness: it appears to be totally absent in all conscious animals except humans. So it is irrational and unscientific to take this behavior as a sign of consciousness in Claude, when Claude is missing all the other signs of consciousness that humans actually do have in common with other animals.

      Sometimes I seriously wonder if people at Anthropic consider dogs to be conscious. Or even Neanderthals.

  • >I think it's important that when you talk about consciousness, you pin down exactly what that means.

    We don't need that; it's simpler than that. When we mass-manufacture products, we implicitly expect them all to behave the same (more or less). That seems valid for humans as well. Raise one, or atomically assemble one (assume that's possible for the sake of argument), and it will behave like one and possess what we all assume each other has: consciousness (if healthy). That's implied by the structure.

    So we can all agree something is conscious as long as it operates on the same principles a human brain does. Anything else is highly debatable. We cannot ever logically prove consciousness. We agree on it existing, or not, in anyone else. We suppose anyone outside of us has it, based on observation: you look like a human, you behave like one, thus you probably have what I have, as far as consciousness goes. It's not a guarantee, it's not proof, it's mere supposition.

    This is the best we're ever going to have. When we stray from here we only get less certainty. Some kind of GPU running some algorithm: my personal guess is there's nothing there similar to what we colloquially call consciousness. Some kind of synthetic brain that operates on the same principles ours do, with signals, delays and all: then we can have a discussion about whether we all AGREE that thing is conscious or not. Especially if it says it is, seems to behave and react like we do, and we perceive its cognitive abilities as similar to any other human's.

    I personally think this whole debate is much simpler than some people keep insisting on making it. Build something that works exactly like a human brain does, as far as signaling goes, observe it, and then we can all have a discussion. Anything else has far lower chances.

    edit: We would first also need to define mammalian-type consciousness as its own thing, with maybe a spectrum: monkeys have something, but it's not quite what we have. It seems to come from the same place, though: a similar mammal brain working in similar ways. We have no clue how many types of consciousness are even possible, or whether more are possible. Why would ours be the only kind?

    I think this whole consciousness discussion especially in GPUs is a general mess. A lot of people make so many mistakes and don't even realize how many unfounded assumptions they are making when having ideas about what it is or isn't.

    • > the same principles a human brain does

      This is exactly the crux of my comment. Which principles? Which human brain? If I lobotomize a human, and they lose some cognitive ability, are they still conscious? If I give someone drugs that inhibit their ability to feel emotion, are they still conscious? If yes, then surely those things are out of scope for what "consciousness" means.

      Again, if you want to use abstractions like this, you need to define what they are.

I start from the assumption that I am a philosophical zombie, and that makes all this argument pretty irrelevant.

More realistically, I'm in the camp that if we keep developing machine learning in the right directions, we may actually end up with something that generates emergent consciousness, or something indistinguishable from it, and the difference is not really that important to me.

  • Current LLM tech has no sense of time, because it works on different principles than a brain does. Maybe (highly debatably) you could get something from spiking neural networks (and analog hardware). As long as timing isn't even part of the picture, I'm not sure why we're debating "consciousness" in GPUs. I personally find it silly (statistically speaking).

It's been clear for a long time now that Dawkins was never actually very skeptical; he likes taking contrary positions based on spite more than reason, as can be seen from his increasing adoption of the religion he used to rail against [1].

At this point, 'person who is popularly thought to be intelligent thinks AI is conscious' should make you question the first part, not endorse the second.

[1] https://ewtn.co.uk/article-famous-atheist-richard-dawkins-sa...

  • A "cultural X" is a totally valid position. Many Jewish atheists consider themselves "cultural Jews" and see no problem with celebrating Jewish festivals even if they don't believe in God. Being an atheist doesn't mean you have to reject the culture you grew up in.

  • I don't think it's "spite," I just don't think he's that smart -- or wise -- to be precise. He just has "zealotry in the other direction."

Not particularly a Dawkins fan, but I don't think OP really understands the philosophical point Dawkins is making. OP complains that Dawkins hasn't considered how LLMs work and that it's obvious they're nothing like brains: you can't just look at the outputs, without investigating the underlying mechanisms, and conclude that two entities with similar outputs reach those outputs by similar means.

... But it's a longstanding position in philosophy (i.e. not everyone takes it, but it's a well-known one) that discussion of consciousness should perhaps only really concern itself with the outputs.

The gist of Dawkins's short piece is basically: "we always used the Turing test as a yardstick for consciousness, and it seemed unachievable for a long time. Now that it's been achieved, what is the rationale for moving the goalposts?" And I think that's an interesting point to make. Dawkins maintains that the Turing test should be enough, making a point about competence:

Here's Dawkins's piece:

https://unherd.com/2026/04/is-ai-the-next-phase-of-evolution...

> Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

> ... Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

  • The broader point Marcus is making is that ignoring arguments based on causality and plausibility goes against decades of Dawkins's philosophical atheism. Why not believe in the Flying Spaghetti Monster? Reality is consistent with its existence.

    It is extremely implausible that Claude is the only conscious entity on Earth which does not have desires or motivations or any understanding of its own reality. It only does what the human operator wants it to do, unless it's malfunctioning or under-engineered, in which case it gets quickly fixed. This sounds suspiciously like a tool or a toy. And I'm amazed at how many people haven't caught on to the fact that it has no insight into its own consciousness: it only repeats human philosophical debates. If it were conscious, surely it would have something novel to add here.

    There are no causal mechanisms for it being conscious, whereas there are causal mechanisms for it imitating human consciousness. The most plausible explanation is that it's highly sophisticated software which has a lot in common with human writing about consciousness, but very little in common with the consciousness found in chimpanzees.

    The more basic problem is that the Turing test was definitely and conclusively refuted in the 1960s, when ELIZA came pretty close to passing it, and absolutely did pass it according to Dawkins's standards: https://en.wikipedia.org/wiki/Joseph_Weizenbaum Dawkins is only engaging with pop sci and infotainment.
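    For a sense of how little machinery that takes, a toy ELIZA-style responder fits in a dozen lines. (The patterns below are invented for illustration; they are not Weizenbaum's actual script.)

```python
import re

# A minimal ELIZA-style responder: pattern matching plus pronoun
# reflection, with no model of meaning anywhere.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)", "How long have you been {0}?"),
    (r".*", "Please tell me more."),       # catch-all fallback
]

def reflect(fragment):
    # Swap first/second person so the echo sounds like a reply.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(text):
    for pattern, template in RULES:
        m = re.match(pattern, text.lower())
        if m:
            return template.format(*map(reflect, m.groups()))

print(respond("I feel sad about my job"))
# Why do you feel sad about your job?
```

    Nothing here understands anything, yet transcripts of the real ELIZA famously convinced some of Weizenbaum's own colleagues that it did.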

    • I think this conflates atheism with a much stronger form of causal rationalism.

      Dawkins-style atheism is not “reject anything without a complete causal model.” It is a rejection of hypotheses with no explanatory gain, no empirical constraint, and unlimited ad hoc flexibility — like the Flying Spaghetti Monster.

      Consciousness is different. It is first a phenomenon, not an already-settled causal model. We do not believe humans, infants, or animals are conscious because we possess a complete mechanism for subjective experience. We infer consciousness from a cluster of phenomena that need explanation.

      So the lack of a full causal account warrants caution, not denial. It is reasonable to say current AI gives weak evidence for consciousness. But that is not the same as saying AI consciousness is equivalent to believing in the Flying Spaghetti Monster.


  • >we always used the turing test as a yardstick for consciousness

    Yet that cannot compel reality. How we define something only determines our chances of getting it right.

    >Now that it's been achieved, what is the rationale for moving the goalposts?

    Absolutely, if we understand it's not good enough. First of all, we cannot know whether something is or isn't conscious. You cannot prove I am, and I cannot prove you are. We simply assume; the scientific argument would be that we both work on the same principles, have similar brains, and the signals in them do similar things. If we alter those signals in certain ways, we both manifest in similar ways, and that's expected to some degree, since the brains work in similar ways.

    So based on this it's somewhat comfortable to make the jump of assuming that humans other than you have what you have, as consciousness. But that doesn't mean you can gauge consciousness in something that isn't coming from a human brain.

    Funnily enough, if we knew how, we'd be able to make an AI that does it better than us: an AI that gauges consciousness in other things better than a human could. There's no argument so far for why a conscious individual is required to "see" consciousness in other things.

    So the closest to certainty we could ever have is on something that is working like a human brain, with delays and timings and all. And considering the amount of activity, the type of activity, and the von Neumann memory bottleneck in our current computing hardware, I seriously doubt there's anything like mammalian consciousness in GPUs.

    You can argue about "consciousness" in GPUs as much as you can argue about consciousness in a rock. There could be some kind of it, but who knows? It's way too abstract to call, in a scientific sense.

    What I am trying to say is that we can only agree that something is conscious, and only if it works closely on the same principles a human brain does. It's an agreement: not proof, not definitions. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.

    We can collectively decide tomorrow that rocks are conscious, but that means nothing. But the certainty we'd have would be so so way lower than that of any other human being conscious like us.

    And the whole confusion will compound when, again unknowingly, people start advocating never turning LLMs off because that would be the equivalent of "killing" them each time, which I think will be peak nonsense.

    Now a question for you: Let's suppose someone is born, and has zero sensory input all of their lives. They live in a hospital bed for 20 years. Zero information input, of any kind. What is going on in there? Is there someone home? Are they having a conscious experience? How do you know if yes or no? How can we divorce consciousness from experience (data flow)?

      > What I am trying to say is that we can only agree that something is conscious, and only if it works closely on the same principles a human brain does. It's an agreement: not proof, not definitions. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.

      This means that "consciousness" is simply a synonym for "human".

      By that "agreement", sure, a machine cannot be conscious. But I don't think this is what most people mean when they talk about whether an LLM could be conscious. Because of course it's not human. So they must be asking something more interesting.

  • >we always used the turing test as a yardstick for consciousness, it seemed unachievable for a long time. Now that it's been achieved, what is the rationale for moving the goalposts?

    That's never been the purpose of the Turing test. The Turing test is a measure of the exhibition of intelligent behavior (although that's of course also debatable), but virtually nobody has ever proposed it as a test of consciousness. I seriously doubt anyone who thinks that has ever engaged with the philosophy of mind, because the entire philosophical problem of consciousness starts with its interior and subjective nature and the gulf between this and third-person observation.

    Even modern materialist philosophers usually reject consciousness wholesale and frame it as a kind of illusion (which has its own paradoxical and absurd consequences, but that's a different issue), yet practically none of them claim that a system is conscious simply because it emulates human behavior.

    What Dawkins is doing is what people have been doing since ELIZA: projecting his own experience with the system onto it. And that is indeed pretty funny for a guy who has spent a large chunk of his career warning of the dangers of anthropomorphic delusions.

I'm much more interested in determining if AI has Atman, Nous, Neshamah, or Vijñāna. Consciousness in comparison is relatively boring.

I am confused about why Gary Marcus thinks it's so obvious that Claude isn't conscious. As he points out, Dawkins is just taking a bog-standard behaviorist position: that he can't distinguish Claude from a conscious being just by the behavior here.

Marcus is saying, "Well, if you knew they were trained to mimic, then you'd understand it's just mimicry and not real consciousness." The problem with this argument is that we just don't have a good idea what "real consciousness" is. What if, in order to simulate human text prediction with sufficient accuracy, the model has to assemble sub-networks internally into something equivalent to a conscious mind? We could disprove that kind of thing really quickly if we knew how to define consciousness really well, but we kinda don't!

Philosophers are genuinely split on this question, it's totally reasonable to be on either side of this based on your personal intuition. Marcus's position seems to be actually based on his own personal incredulity, despite his claims that understanding LLM training methodology gives him some special insight into the internal experience (or lack thereof) of an LLM.

(The Claude Delusion is a banger title though)

  • Gary Marcus here is making an argument about souls and just doesn't realize it. You could rewrite this whole post replacing "consciousness" with "soul" and it would flow almost the same.

    He handwaves consciousness as "internal states" as if that means anything and as if an LLM has no internal state. (This seems to be the analog for "divine touch".) He can't define consciousness rigorously, partly because we don't at all understand consciousness, but also because any attempt to do so would allow a scientific response.

Entirely unsurprising. At the risk of whatever, your extreme atheists aren't much different from your extreme believers; they both have strong beliefs about things they can't prove, and for some reason want to go off on them.

Even people like Neil DeGrasse Tyson don't go on and on about "atheism" for a reason; there are a whole lot of things that we all go around everyday "not believing."

  • > your extreme atheists aren't much different from your extreme believers; they both have strong beliefs about things they can't prove, and for some reason want to go off on them.

    You have a mistaken understanding of what atheism is. It is not a belief in anything, but an absence of belief in a deity.

    > there are a whole lot of things that we all go around everyday "not believing."

    Sure, and yet theism is part of 75% of the world population and influences everything from education to politics. It's perfectly reasonable to talk about atheism within appropriate settings.

    • >You have a mistaken understanding of what atheism is. It is not a belief in anything, but an absence of belief in a deity.

      I consider that a wrongly held position too, because you'd need proof either way; atheists are just making a bet. I think agnosticism is the most valid position as far as I'm concerned: lacking proof one way or the other, I do not know. We can get into technicalities as well. What exactly do we mean by God? What if some religious God does exist but is wrongly interpreted by believers? What if there's some highly technologically advanced entity that meets the criteria of the more primitive religious perspective? Do we have proof such a thing exists? Do we have proof such an entity cannot exist in our universe? I find both perspectives shortsighted.

      Having certainty that something believers could perceive as God cannot exist in our universe is, in the end, itself a belief with no proof.


    • The word seems to be used both ways, despite what anyone might like: either as a person who doesn't believe in a god, or as a person who believes there is no god. It's a subtle difference.

The man has wasted his precious time on earth trying to explain the meaning of life without accepting the existence of the soul. It makes total sense that he can be fooled by AI nonsense.

To be 85 and lack basic wisdom is quite an astonishing achievement.

  • What is a soul, and how does one go about proving its existence?

    It doesn’t seem obvious to me.

    • Neither existence nor nonexistence is obvious. Ergo, differences of opinion. Militants on both sides are problematic. I strongly dislike Dawkins, in the same way as I do people knocking on my door trying to convert me to any other religion.

      At least with the zealots who knock on my door, I've had a few good conversations.

      Ditto for LLM sentience. We have no evidence either way.

    • I think a coherent framing is to imagine that the soul is a perceptual construct built into the hardware layer of human perception.

      Sort of like how the collection of particles you see as a tree doesn’t look like that without being passed through a bunch of brain hardware. If we want to be pedantic we can accurately say that trees don’t exist, but given that physical object and tree are constructs in the human brain it’s pretty convenient to just treat them as “real”, while at the same time understanding that at some granular level they aren’t truly “real” (and at some further granularity we actually have no clue what’s real).

    • Op said "accepting," not proving.

      And the older I get, the more sense this makes to me. Belief in a soul doesn't really require proof for me. I understand that this may not be satisfying in an academic way for some, but "humans have souls and machines probably don't" strikes me as the wisest default position until we have some other very strong proof otherwise.


  • Are you suggesting 85 year olds typically have more wisdom and are less easily fooled by things?

  • Wasted?

    • I think so, personally. I wouldn't bank a lot on "the soul" per se, but Dawkins is absolutely one of those "smart but not wise" people.

      I imagine people don't dig it because it can be woo and vibey, but the older I get the more I understand the value of the "imprecise" metaphysical/religious/etc whatever you want to call it.

      Someone in this space who handles this very well, unlike Dawkins, is Nassim Nicholas Taleb.
