Comment by habitue

18 hours ago

I am confused about why Gary Marcus thinks it's so obvious that Claude isn't conscious. As he points out, Dawkins is just taking a bog-standard behaviorist position: that he can't distinguish Claude from a conscious being on the basis of the behavior alone.

Marcus is saying, "Well, if you knew they were trained to mimic, then you'd understand it's just mimicry and not real consciousness." The problem with this argument is that we just don't have a good idea of what "real consciousness" is. What if, in order to simulate human text prediction with sufficient accuracy, the model has to assemble its sub-networks internally into something equivalent to a conscious mind? We could disprove that kind of claim really quickly if we knew how to define consciousness well, but we kinda don't!

Philosophers are genuinely split on this question; it's totally reasonable to land on either side based on your personal intuition. Marcus's position seems to rest on his own personal incredulity, despite his claims that understanding LLM training methodology gives him some special insight into the internal experience (or lack thereof) of an LLM.

(The Claude Delusion is a banger title though)

Gary Marcus here is making an argument about souls and just doesn't realize it. You could rewrite this whole post replacing "consciousness" with "soul" and it would flow almost the same.

He handwaves consciousness away as "internal states," as if that phrase means anything and as if an LLM has no internal state. (This seems to be the analog of "divine touch.") He can't define consciousness rigorously, partly because we don't understand consciousness at all, but also because any rigorous definition would open the claim to a scientific response.