Comment by patio11
15 years ago
Props for the reference, but aside from matching /Chin/, the Chinese room has absolutely nothing to do with this approach. It is staffed by a single guy, and it is critically important to the thought experiment that he does not understand Chinese.
(Brief sketch of the Chinese room: there is a locked room with a slit that permits paper to come in and paper to go out. Inside the room is a man who does not speak Chinese. He receives paper with Chinese symbols on it, consults a vast library of books with rules on what to do in response to particular symbols, laboriously copies his response onto paper, and pushes it out through the slit. The response reads as intelligible Chinese, responsive to the Chinese input. Searle argues that the man can't understand Chinese. Personal opinion: it's navel-gazing that only matters to philosophy, but I think the man and the books together constitute a system which speaks Chinese, in the same way that people bidding at an auction together constitute an efficient price-discovery mechanism even if none of them has expert knowledge of the "true value" of every item at auction.)
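To make the sketch concrete, here's a toy version in Python. It's a minimal sketch assuming a hypothetical rule book of canned symbol-to-symbol mappings; the real thought experiment imagines rules covering any possible input, but the structure is the same: match and copy, with meaning never consulted anywhere.

    # Toy model of the room: a hypothetical rule book mapping input
    # symbol strings to output symbol strings. The "man" only matches
    # and copies symbols; he never consults their meaning.
    RULE_BOOK = {
        "你好吗？": "我很好，谢谢。",          # "How are you?" -> "I'm fine, thanks."
        "今天天气怎么样？": "今天天气很好。",    # "How's the weather?" -> "It's nice today."
    }

    def room(slip: str) -> str:
        # Receive a slip, look up the symbols, copy out the response.
        return RULE_BOOK.get(slip, "对不起，我不明白。")  # fallback: "Sorry, I don't understand."

    print(room("你好吗？"))  # fluent-looking Chinese comes out; no understanding inside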
> aside from matching /Chin/, the Chinese room has absolutely nothing to do with this approach. It is staffed by a single guy, and it is critically important to the thought experiment that he does not understand Chinese.
It raises an alternate formulation: What if the operation of the room were crowdsourced? What if the individuals in the crowd could communicate and organize, and what if they couldn't? Would that change the properties of the thought experiment in any interesting ways?
Obviously, if the crowd included Chinese speakers, the language would have to be different. We could call it the "English room" thought experiment.
The point is not whether the system understands Chinese; the point is that such an "algorithm" or "system" does not produce human-like consciousness.
> The point is not whether the system understands Chinese
From early in Searle's paper: "Partisans of strong AI claim that in this question and answer sequence the machine is not only simulating a human ability but also 1. that the machine can literally be said to understand the story and provide the answers to questions, and 2. that what the machine and its program do explains the human ability to understand the story and answer questions about it. Both claims seem to me to be totally unsupported by Schank's work, as I will attempt to show in what follows."
And, just after describing the "Chinese room" scenario: "Now the claims made by strong AI are that the programmed computer understands the stories and that the program in some sense explains human understanding. But we are now in a position to examine these claims in light of our thought experiment."
> human-like consciousness
No, Searle is by no means concerned only with "consciousness". Searle again:
""" "But could something think, understand, and so on solely in virtue of being a computer with the right sort of program? Could instantiating a program, the right program of course, by itself be a sufficient condition of understanding?" This I think is the right question to ask, though it is usually confused with one or more of the earlier questions, and the answer to it is no. """
(The focus on "consciousness" is a more recent development, and I cynically suspect it's motivated by a recognition that, as far as anything we can observe goes, computers are in fact likely to be able to do everything humans can in the not-too-distant future -- so best to concentrate on something conveniently unfalsifiable, such as the claim that computers couldn't really be "conscious" even if they behaved in every respect exactly as if they were.)
Searle talks about "understanding" throughout. He occasionally makes reference to other mental capabilities, including "consciousness" once or twice, but "understanding" is by far the most frequent.
"understand" is an overloaded term. It could mean a few things:
- Human like understanding, i.e. awareness, consciousness
This is what the Chinese room experiment is designed to dispute
- Ability to produce appropriate output
We often use "understand" to mean this. e.g. "I wrote a parser that understands Ruby code and compiles it to C".
Of course the Chinese room "understands" Chinese in the second sense, but not the first sense.
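To illustrate the second sense: the sketch below (a hypothetical toy, not the Ruby-to-C parser mentioned above) "understands" arithmetic exactly the way the room "understands" Chinese -- it reliably maps well-formed input to appropriate output, with nothing you'd call awareness.

    # A toy evaluator that "understands" arithmetic in the second sense
    # only: it produces appropriate output, mechanically.
    import ast
    import operator

    OPS = {ast.Add: operator.add, ast.Sub: operator.sub, ast.Mult: operator.mul}

    def evaluate(expr: str):
        def walk(node):
            # Recurse over the syntax tree, applying each operator to its operands.
            if isinstance(node, ast.BinOp):
                return OPS[type(node.op)](walk(node.left), walk(node.right))
            if isinstance(node, ast.Constant):
                return node.value
            raise ValueError("unsupported syntax")
        return walk(ast.parse(expr, mode="eval").body)

    print(evaluate("2 + 3 * 4"))  # 14 -- correct output, no awareness involved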
Your first quote describes what I consider to be awareness/consciousness. Maybe Searle didn't use the same word, but I believe he's describing the same notion.
Think of it this way: a C compiler doesn't really "understand" C code in the same way that a human does. For instance, it can't make changes to the code. If it could, it would replace the programmer.
Edit:
From Wikipedia: http://en.wikipedia.org/wiki/Chinese_room
> The experiment is the centerpiece of Searle's Chinese Room Argument which holds that a program cannot give a computer a "mind" or "understanding", regardless of how intelligently it may make it behave.