Comment by kalkin

19 hours ago

This is basically Searle's Chinese Room argument. It has a respectable history (... Searle's personal ethics aside), but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail as well.

There is a section on the Chinese Room argument in the book.

(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)

That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.

  • Exactly what part of your brain can you point to and say, "This is it. This understands Chinese"? Your brain is every bit as much a Chinese Room as a Large Language Model. That's the flaw.

    And unless you believe the body has some metaphysical reality beyond the physical, your point about substrate independence cuts against the brain as well.

  • It would be pretty arrogant, I think, though possibly classic tech-bro behavior, for Anthropic to say: "You know what, smart people who've spent their whole lives thinking and debating about this have no agreement on what's required for consciousness, but we're good at engineering, so we can just declare some of those people idiots and give their conclusions zero credence."