Comment by siglesias
2 months ago
He’s not arguing that it’s not possible to devise such a test. He’s saying, lay out the features of consciousness as we understand them, look for what causes them in the brain, look for that causal mechanism in other systems.
Although, for whatever it’s worth, most modern AIs will tell you they don’t have genuine understanding (e.g. no sense of what pleasure is or feels like, aside from human labeling).
> He’s not arguing that it’s not possible to devise such a test.
The entire point of the thought experiment is that to outside observers it appears the same as if a fluent speaker is in the room. There aren’t questions you can ask to tell the difference.
That's not the entire point, though it is a big part of the premise. The entire point, on the contrary, is that the system inside the room does not have any conscious understanding of Chinese DESPITE passing the Turing Test. It's highlighting precisely that there's an ontological difference between the apparent behavior of the system and the reality of it.
Of course it’s the point: the systems are not distinguishable by behaviour, only by what’s inside them. There are no tests to determine what’s inside, otherwise the whole thing is pointless.
This is why I made the tin of beans comparison.
The room has the property X if and only if there’s a tin of beans inside. You can’t in any way tell the difference between a room that has a tin of beans in and one that doesn’t without looking inside.
You might find that a property with zero predictive power, one that (by definition) makes no difference to what either room can do and has no use for any practical purpose (again by definition), is rather pointless. I would agree.
Searle has a definition of understanding that, to me, cannot be useful for any actual purpose. It is therefore irrelevant to me if any system has his special property just as my tin of beans property is useless.