
Comment by mellosouls

12 hours ago

Non-paywalled obit:

https://www.theguardian.com/world/2025/oct/05/john-searle-ob...

His most famous argument:

https://en.wikipedia.org/wiki/Chinese_room

I find the Chinese room argument to be nearly toothless.

The human running around inside the room, doing the translation work simply by looking up transformation rules in a huge rulebook, may produce an accurate translation, but that human still doesn't know a lick of Chinese. Ergo (they claim), computers might simulate consciousness but will never be conscious.

But in the Searle room, the human is the equivalent of, say, ATP in the human brain. ATP powers my brain while I'm speaking English, but ATP doesn't know how to speak English, just as the human in the Searle room doesn't know how to speak Chinese.

  • There is no translation going on in that thought experiment, though. There is text processing. That is, the man in the room receives Chinese text through a slot in the door. He uses a book of complex instructions that tells him what to do with that text, and he produces more Chinese text as a response according to those instructions (a toy sketch of this appears below).

    Neither the man nor the room "understands" Chinese. It is the same for the computer and its software. Geoffrey Hinton has said "but the system understands Chinese." I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.

    Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.
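    To make "operates algorithmically on the input" concrete, here is a toy sketch of the room as pure symbol lookup. The rules and phrases are my own invention and are far cruder than the rulebook Searle imagines, but they are the same in kind:

      # Toy "rulebook": it maps incoming symbol strings to outgoing ones.
      # Nothing here inspects meaning; it is string matching all the way down.
      RULEBOOK = {
          "你好": "你好！",            # a greeting mapped to a greeting back
          "你会说中文吗？": "会的。",  # "Do you speak Chinese?" -> "Yes."
      }

      def room(incoming: str) -> str:
          # Unmatched input gets a stock "please say that again".
          return RULEBOOK.get(incoming, "请再说一遍。")

    Whatever comes out of the slot, nothing in room() ever touches what the symbols mean. That gap between shuffling symbols and grasping them is the point.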

    • > I don't think that's a true statement, because at no point is the "system" dealing with semantic context of the input. It only operates algorithmically on the input, which is distinctly not what people do when they read something.

      There are two possibilities here. Either the Chinese room can produce the exact same output as some Chinese speaker would given a certain input, or it can't. If it can't, the whole thing is uninteresting: it simply means that the rules in the room are not sufficient, and the conclusion is trivial.

      However, if it can produce the exact same output as some Chinese speaker, then I don't see by what non-spiritualistic criteria anyone could argue that it is fundamentally different from a Chinese speaker.

      Edit: note that when I say the room can respond with the same output as a human Chinese speaker, that includes the ability for the room to refuse to answer a question, to berate the asker, to start musing about an old story or other non-sequiturs, to beg for more time with the asker, to start asking the asker for information, to gossip about previous askers, and so on. Basically the full range of language interactions, not just some LLM-style limited conversation. The only limitations on its responses would be the things it can't physically do: it couldn't talk about what it actually sees or hears, because it doesn't have eyes or ears; it couldn't truthfully say it's hungry; etc. It would be limited to the output of a blind, deaf, mute Chinese speaker with numb skin, confined to a room and fed intravenously.


    • > It only operates algorithmically on the input, which is distinctly not what people do when they read something.

      That's not at all clear!

      > Language, when conveyed between conscious individuals, creates a shared model of the world. This can lead to visualizations, associations, emotions, and the creation of new memories, because the meaning is shared. This does not happen with mere syntactic manipulation. That was Searle's argument.

      All of that is called into question by some LLM output. It's hard to understand how some of it could be produced without some emergent model of the world.


    • That is why you cannot ask the room for semantic changes, like “if I call an umbrella a monkey, and it will rain today, what do I need to bring?” (toy sketch below).

      Unless we suppose those books describe how to implement a memory of sorts, and how to reason, etc. But then how sure are we it’s not conscious?
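      A toy sketch of that point (my own illustration; the phrases, rules, and function names are invented): a fixed lookup table fails on the umbrella/monkey question, while one extra, still purely syntactic, rewrite rule handles it within a single message. Making the renaming persist across later questions is what would need the “memory of sorts” above.

        # Fixed table: exact string in, canned string out.
        RULES = {
            "it will rain today, what do I need to bring?": "umbrella",
        }

        def fixed_room(q: str) -> str:
            return RULES.get(q, "???")  # no matching rule -> no answer

        def rewriting_room(q: str) -> str:
            # One extra syntactic rule: "if I call an X a Y, REST" means
            # answer REST, then rename X to Y in the answer. Still pure
            # string manipulation; no meaning is consulted anywhere.
            prefix = "if I call an "
            if q.startswith(prefix):
                naming, rest = q[len(prefix):].split(", ", 1)
                old, new = naming.split(" a ")
                return fixed_room(rest.removeprefix("and ")).replace(old, new)
            return fixed_room(q)

        q = "if I call an umbrella a monkey, and it will rain today, what do I need to bring?"
        print(fixed_room(q))      # "???" - the fixed table has no such entry
        print(rewriting_room(q))  # "monkey"

      The rewrite rule never consults meaning either, so by Searle’s lights neither room understands anything; the question only bites against the fixed table.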
