Comment by Kim_Bruning
13 hours ago
Oh, I've always wanted to debate him about the Chinese Room. I disagree with him, passionately, and that's the most fun kind of debate to have, especially with someone who is genuinely skilled, knowledgeable, and nuanced!
Maybe I should look up some of my other heroes and heretics while I have the chance. I mean, you don't need to cold e-mail them a challenge. Sometimes they're already known to be at events and such, after all!
Searle wrote responses to dozens of replies to the Chinese Room. You can likely find his rebuttal to your objection in the Stanford Encyclopedia of Philosophy's entry on the Chinese Room, or in one of the sources in its bibliography. Is your rebuttal listed here?
https://plato.stanford.edu/entries/chinese-room
> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.
I mean, I guess all arguments eventually boil down to something that is "obviously" A to one person and "obviously" B to another.
I would encourage deeply digging into the intuition that brain states and computer states are the same. Start with what you know, then work backwards and see whether that intuition survives. For example, we have an intuitive understanding of which flavors are delicious to us and which are not, which sounds are pleasant and which are not, and so on. If I close my eyes, I can see the color purple. I know that Nutella is delicious, and I can imagine its flavor at will. I share Searle's intuition that the universe would be a strange place if these feelings of understanding (and pleasantness!) were simply functions not of physical states but of abstract program states.

Keep in mind that what counts as a bit is simply a matter of convention. In one computer system it could be a minute difference in voltage across a transistor; in another, the presence of one element versus another; in another, whether a chamber contains water or not; in another, markings on a page. On and on. On the strong AI thesis, any system that runs the steps of this program would not just produce output functionally equivalent to a brain's; it would be forced to have mental states too, like imagining the taste of Nutella. To me, that's implausible.

Once you start digging in, you realize that either the Chinese Room is missing something, or our understanding of physical reality is incomplete, or you have to bite the bullet that the universe creates mental states whenever a system implements the right program. But then you're left with the puzzle of how the physical world is tied to the abstract world of symbols (how can making a mark on a page cause mental states?).
So what is the physical cause of consciousness and understanding that is not computable? It's worth noting that once you really start digging in, if you took, for example, the hypothesis that "consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction" [1], then you can see a set of physical requirements for consciousness and understanding that forces all conscious beings onto 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That's something you could not get merely by simulating certain brain processes in a closed system.
[1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements tied to some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744
Same. I feel the Chinese Room argument is a nice tool for clarifying one's thinking.
Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.
Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.
I think it's the last one. If a bunch of valves can't understand, but a bunch of chemicals and electrical signals can as long as they're in someone's head, then I am simply applying "does it seem like biology" as part of the definition, and can therefore ignore it entirely when considering machines or programs.
Searle seems to just go the other way, and I don't understand why.
All you have to do is train an LLM on the collected works and letters of John Searle; you could then pass your arguments along to the machine and out would come John Searle's thoughtful response...
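To be concrete, here is a minimal sketch of what I mean, using the OpenAI Python client with a persona system prompt as a stand-in for actually fine-tuning on Searle's collected works; the model name and prompt text are assumptions for illustration, not a real Searle-bot:

```python
# A sketch only: a persona prompt over a chat API as a stand-in for
# actually fine-tuning a model on Searle's collected works and letters.
# The model name and prompt text are assumptions for illustration.
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SEARLE_PERSONA = (
    "Answer in the style of John Searle, drawing on the Chinese Room "
    "argument and his published replies to critics."
)

def ask_searle_bot(objection: str) -> str:
    """Pass an argument along to the machine; out comes something that
    resembles a thoughtful response."""
    completion = client.chat.completions.create(
        model="gpt-4o",  # assumed model name
        messages=[
            {"role": "system", "content": SEARLE_PERSONA},
            {"role": "user", "content": objection},
        ],
    )
    return completion.choices[0].message.content

if __name__ == "__main__":
    print(ask_searle_bot(
        "The man in the room doesn't understand Chinese, "
        "but the system as a whole does."
    ))
```

An actual fine-tune on the corpus would swap the persona prompt for a training step, but the shape of the loop is the same.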
Something that would resemble 'John Searle's thoughtful response'...
I'll posit that the distinction does not matter: the whole Chinese Room line of discourse has mostly been a distraction from doing actual work.
I don't think John Searle would agree.
You're absolutely right!