Comment by codeulike

19 hours ago

Not particularly a Dawkins fan, but I don't think OP really understands the philosophical point Dawkins is making. OP complains that Dawkins hasn't considered how LLMs work and that it's obvious they're nothing like brains: you can't just look at the outputs, without investigating the underlying mechanisms, and conclude that two entities with similar outputs reach those similar outputs by similar means.

... But it's a longstanding position in philosophy (i.e. not everyone takes this position, but it's a well-known one) that discussion about consciousness should perhaps only really concern itself with the outputs.

The gist of Dawkins's short piece is basically: "we always used the Turing test as a yardstick for consciousness, and it seemed unachievable for a long time. Now that it's been achieved, what is the rationale for moving the goalposts?" And I think that's an interesting point to make. Dawkins maintains that the Turing test should be enough, by making a point about competence:

Here's Dawkins's piece:

https://unherd.com/2026/04/is-ai-the-next-phase-of-evolution...

Brains under natural selection have evolved this astonishing and elaborate faculty we call consciousness. It should confer some survival advantage. There should exist some competence which could only be possessed by a conscious being. My conversations with several Claudes and ChatGPTs have convinced me that these intelligent beings are at least as competent as any evolved organism. If Claudia really is unconscious, then her manifest and versatile competence seems to show that a competent zombie could survive very well without consciousness.

.... Or, thirdly, are there two ways of being competent, the conscious way and the unconscious (or zombie) way? Could it be that some life forms on Earth have evolved competence via the consciousness trick — while life on some alien planet has evolved an equivalent competence via the unconscious, zombie trick? And if we ever meet such competent aliens, will there be any way to tell which trick they are using?

The broader point Marcus is making is that ignoring arguments based on causality and plausibility goes against decades of Dawkins's philosophical atheism. Why not believe in the Flying Spaghetti Monster? Reality is consistent with its existence.

It is extremely implausible that Claude is the only conscious entity on Earth which does not have desires or motivations or any understanding of its own reality. It only does what the human operator wants it to do, unless it's malfunctioning or under-engineered, in which case it gets quickly fixed. This sounds suspiciously like a tool or a toy. And I'm amazed at how many people haven't caught on to the fact that it has no insight into its own consciousness: it only repeats human philosophical debates. If it were conscious, surely it would have something novel to add here.

There are no causal mechanisms for it being conscious, whereas there are causal mechanisms for it imitating human consciousness. The most plausible explanation is that it's highly sophisticated software which has a lot in common with human writing about consciousness, but very little in common with the consciousness found in chimpanzees.

The more basic problem is that the Turing test was definitively and conclusively refuted in the 1960s, when ELIZA came pretty close to passing it, and absolutely did pass it according to Dawkins's standards: https://en.wikipedia.org/wiki/Joseph_Weizenbaum Dawkins is only engaging with pop sci and infotainment.

  • Turing test was definitively and conclusively refuted in the 1960s

    Are you sure?

    Understood properly, Turing's Imitation Game, aka the Turing test, should be adversarial. That is, the player should be asking hard questions to try to discover who is who, not just having an idle chat. No chatbot was able to consistently pass an adversarial Turing test until the rise of LLMs.

    The Imitation Game:

    https://www.cs.ox.ac.uk/activities/ieg/e-library/sources/t_a...

    • Yeah, I don't think a single current LLM would fool me in a Turing test - I would obviously use all kinds of prompt injection techniques, ask about 'dangerous' or controversial topics, ask about random niche facts in varied fields, etc.

    • The fact that LLMs often score as "more human" than actual humans is a downstream consequence of ELIZA tricking people into thinking it had a glimmer of consciousness. The Turing test was refuted because it was proven scientifically meaningless in the 1960s, and LLMs only reinforce that.

      2 replies →

  • I think this conflates atheism with a much stronger form of causal rationalism.

    Dawkins-style atheism is not “reject anything without a complete causal model.” It is a rejection of hypotheses with no explanatory gain, no empirical constraint, and unlimited ad hoc flexibility — like the Flying Spaghetti Monster.

    Consciousness is different. It is first a phenomenon, not an already-settled causal model. We do not believe humans, infants, or animals are conscious because we possess a complete mechanism for subjective experience. We infer consciousness from a cluster of phenomena that need explanation.

    So the lack of a full causal account warrants caution, not denial. It is reasonable to say current AI gives weak evidence for consciousness. But that is not the same as saying AI consciousness is equivalent to believing in the Flying Spaghetti Monster.

    • The point is "Claude is conscious" is a hypothesis with no explanatory gain, no empirical constraint, and by denying that non-human consciousness is relevant to the discussion it gains unlimited ad hoc flexibility. I am relating this to plausibility and causality because there is a much more rational causal explanation for Claude seeming conscious than it actually being conscious: it imitates human (modern Western) consciousness via big data. Since this is a totally different causal mechanism from human consciousness, since Claude has nothing in common with non-human animals, and since we don't need consciousness to explain Claude's behavior, "Claude is conscious" is overwhelmingly less plausible than "Claude is a sophisticated but ultimately brainless chatbot."

      It is truly irrational - and hostile to scientific thought - to believe Claude is conscious. It truly is believing in the Flying Spaghetti Monster.

      1 reply →

>we always used the Turing test as a yardstick for consciousness

Yet a yardstick cannot compel reality. How we define something only determines how likely we are to get it right.

>Now that it's been achieved, what is the rationale for moving the goalposts?

Absolutely, if we understand that it's not good enough. First of all, we cannot know whether something is or isn't conscious. You cannot prove I am, and I cannot prove you are. We simply assume; the scientific argument would be that we both work on the same principles, have similar brains, and that signals in them do similar things. If we alter those signals in certain ways, we both react in similar ways, and that's expected to some degree, since the brains work in similar ways.

So based on this, it's somewhat comfortable to make the jump and assume other humans have what you have: consciousness. But that doesn't mean you can gauge consciousness in something that is not coming from a human brain.

Funnily enough, if we knew how, we'd be able to make an AI that could do it better than us: an AI that would gauge consciousness in other things better than a human could. There's no argument so far for why a conscious individual is required to "see" consciousness in other things.

So the closest to certainty we could ever have is on something that is working like a human brain, with delays and timings and all. And considering the amount of activity, the type of activity, and the von Neumann memory bottleneck in our current computing hardware, I seriously doubt there's anything like mammalian consciousness in GPUs.

You can argue about "consciousness" in GPUs as much as you can argue about consciousness in a rock. There could be some kind of it, but who knows? It's way too abstract to call, in a scientific sense.

What I am trying to say is that we can only agree that something is conscious, and only if it works on the same principles a human brain does, closely. It's an agreement: not proof, not a definition. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.

We can collectively decide tomorrow that rocks are conscious, but that means nothing. The certainty we'd have would be far, far lower than our certainty that any other human is conscious like us.

And the whole confusion will compound when, again unknowingly, people start advocating that we never turn LLMs off, because that's the equivalent of "killing" them each time, which I think will be peak nonsense.

Now a question for you: suppose someone is born and has zero sensory input all their life. They lie in a hospital bed for 20 years. Zero information input, of any kind. What is going on in there? Is anyone home? Are they having a conscious experience? How do you know, either way? How can we divorce consciousness from experience (data flow)?

  • > What I am trying to say is that we can only agree that something is conscious, and only if it works on the same principles a human brain does, closely. It's an agreement: not proof, not a definition. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.

    This means that "consciousness" is simply a synonym for "human".

    By that "agreement", sure, a machine cannot be conscious. But I don't think this is what most people mean when they talk about whether an LLM could be conscious. Because of course it's not human. So they must be asking something more interesting.

>we always used the Turing test as a yardstick for consciousness; it seemed unachievable for a long time. Now that it's been achieved, what is the rationale for moving the goalposts?

That's never been the purpose of the Turing test. The Turing test is a measure of the exhibition of intelligent behavior (though that, of course, is also debatable), but virtually nobody has ever proposed it as a test of consciousness. I seriously doubt anyone who thinks that has ever engaged with questions of philosophy of mind, because the entire philosophical problem of consciousness starts with its interior, subjective nature and the gulf between that and third-person observation.

Even modern materialist philosophers, who usually reject consciousness wholesale and frame it as a kind of illusion (which has its own paradoxical and absurd consequences, but that's a different issue), practically never claim that a system is conscious simply because it emulates human behavior.

What Dawkins is doing is what people have been doing since ELIZA: projecting his own experience with the system onto it. And that is indeed pretty funny for a guy who has spent a large chunk of his career warning of the dangers of anthropomorphic delusions.