
Comment by IanCal

2 months ago

In the Chinese room, the outputs are the same; that’s really key here.

> consciousness and understanding

After decades of this I’ve settled on the view that these words are near useless for anything specific; they’re only vague pointers to rough concepts. I see zero value in nailing down the exact substrates on which understanding is possible without a way of looking at two things and saying which one understands and which one doesn’t. Searle, to me, is arguing that it is not possible at all to devise such a test, and so his definition is useless.

He’s not arguing that it’s not possible to devise such a test. He’s saying: lay out the features of consciousness as we understand them, look for what causes them in the brain, then look for that causal mechanism in other systems.

Although, for whatever it’s worth, most modern AIs will tell you they don’t have genuine understanding (e.g. no sense of what pleasure is or feels like, aside from human labeling).

  • > He’s not arguing that it’s not possible to devise such a test.

    The entire point of the thought experiment is that to outside observers it appears the same as if a fluent speaker were in the room. There aren’t questions you can ask to tell the difference.

    • That's not the entire point, but it is a big part of the premise. The entire point, on the contrary, is that the system inside the room contains nothing with conscious understanding of Chinese DESPITE passing the Turing Test. It's highlighting precisely that there's an ontological difference between the system's apparent behavior and the reality of it.
