
Comment by kalkin

15 hours ago

This is basically Searle's Chinese Room argument. It has a respectable history (... Searle's personal ethics aside), but it's not something that has produced any kind of consensus among philosophers. Note that it would apply to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail as well.

There is a section on the Chinese Room argument in the book.

(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)

That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.

  • Exactly what part of your brain can you point to and say, "This is it. This understands Chinese"? Your brain is every bit as much a Chinese Room as a Large Language Model is. That's the flaw.

    And unless you believe in some metaphysical reality beyond the body, your point about substrate independence cuts against the brain as well.