Comment by siglesias

2 months ago

I’m with you 98% of the way, and then things take a sharp turn. In your world, the mere behavior determines the understanding. In Searle’s, the CAUSES of the behavior determine the understanding. The causes are knowable. He stipulates a setup with an epistemic boundary to show that you can have identical apparent behavior but a fundamental difference in causes, which is what gives you grounds for a view on whether there is genuine understanding. If you don’t like that term, you can say conscious understanding. As I said before, there has to be a categorical distinction between a system that feels and a system that is pretending to feel. The distinction you make between machine 1 and machine 2 is correct. The stipulation is that machine 1 has physical causes that produce the physical phenomenon of consciousness (think about how various substances alter conscious feelings, such as painkillers and anesthetics), while machine 2 also has physical causes, but those causes are doing something different: they’re manipulating symbols to execute program steps, and, if you like, the “output” is just other symbols. Those symbols have meaning only as a matter of interpretation and convention; there is no physical truth to their meaning.

So, if you like, one is real and the other is fake. Or, one is physical and the other is symbolic, or conventional. One actually had breakfast this morning and the other is lying about having had breakfast to pass the Turing Test. One can feel pain, guilt, and shame; the other just says that it does because it’s running a program.

Searle says there is an empirical test for which domain a thinking object falls into (your machine 1 and machine 2), but it isn’t behavioral: to an outside observer, in the limit, there is no difference in behavior. They will do the same thing. For all that, if you place metaphysical value on consciousness and “genuine” feeling, then you think the difference is important. If you don’t, you don’t.

FWIW, I think that once an AI has a full understanding of its own ontology and knows it’s a program, even if it’s simulating a human brain perfectly, it will probably explain to us why it is or is not necessarily conscious. Perhaps that will be more convincing for you.