Comment by kalkin

12 hours ago

This is basically Searle's Chinese Room argument. It has a respectable history (Searle's personal ethics aside), but it's not something that has produced any kind of consensus among philosophers. Note that it would apply equally to any AI instantiated as a Turing machine, and to a simulation of a human brain at an arbitrary level of detail.

There is a section on the Chinese Room argument in the book.

(I personally am skeptical that LLMs have any conscious experience. I just don't think it's a ridiculous question.)

That philosophers still debate it isn’t a counterargument. Philosophers still debate lots of things. Where’s the flaw in the actual reasoning? The computation is substrate-independent. Running it slower on paper doesn’t change what’s being computed. If there’s no experiencer when you do arithmetic by hand, parallelizing it on silicon doesn’t summon one.

  • The same is true of humans, and so the argument fails to demonstrate anything interesting.

    • > The same is true of humans,

      What is? That you can run us on paper? That seems demonstrably false.