Comment by siglesias
2 months ago
First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference? It's not clear that the characteristic feature of the brain is only to compute in the classical sense. "Understanding," if it leverages quantum mechanics, might also create a guarantee of being here and now (computers and programs have no such guarantees). This is conjecture, but it's meant to stimulate imagination. What we need to get away from is the fallacy that a causal reduction of mental states to "electrical phenomena" means that any set of causes (or any substrate) will do. I don't think that follows.
Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).
In the Chinese Room, the outputs are the same; that's really the key here.
> consciousness and understanding
After decades of this I’ve settled on the view that these words are near useless for anything specific, only vague pointers to rough concepts. I see zero value in nailing down the exact substrates on which understanding is possible without a way of looking at two things and saying which one does and which one doesn’t understand. Searle, to me, is arguing that it is not possible at all to devise such a test, and so his definition is useless.
He’s not arguing that it’s not possible to devise such a test. He’s saying: lay out the features of consciousness as we understand them, look for what causes them in the brain, and then look for that causal mechanism in other systems.
Although, for whatever it’s worth, most modern AIs will tell you they don’t have genuine understanding (e.g. no sense of what pleasure is or feels like, aside from human labeling).
> He’s not arguing that it’s not possible to devise such a test.
The entire point of the thought experiment is that, to outside observers, it appears the same as if a fluent speaker were in the room. There aren’t questions you can ask to tell the difference.
> First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference?
I can imagine a lot of things, but the argument did not go this far; it left it as "obvious" well before this stage. Also, when I see trivial simulations of our biological machinery yielding results which are _very similar_, e.g. character or shape recognition, I am left wondering whether the people talking about quantum wavefunctions aren't the ones making the extraordinary claims, which would require extraordinary evidence. I can certainly find it plausible that this _could_ be one particular way in which we are superior to the electronics / valves of the argument, but I'm not yet convinced it is a differentiator that actually exists.
The argument doesn’t have to go that far. I think most people have the intuitive, ha, understanding that “understanding” is grounded in some kind of conscious certainty that words have meanings, associations, and even valences like pleasantness or unpleasantness. One of the cruxes of the Chinese Room is that this grounding has physical causes (as all biological phenomena do) rather than computational, purely abstract causes.
There has to be a special motivation to instead cast understanding as “competent use of a given word or concept” (judged by whom, btw?). The practical upshot is that, without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes and require humans to give an account of value (good/bad, pleasant/unpleasant), because these programs obviously don’t have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.
> Nobody had to teach me that delicious things include Oreos and not cardboard.
Well, no, that came from billions of years of pre-training that just got mostly hardcoded into us, due to survival / evolutionary pressure. If anything, the fact that AI has come as far as it has, after less than 100 years of development, is shocking. I recall my uncle trouncing our C64 at chess and going on to explain how machines don't have intuition and the search space explodes combinatorially, which is why they would never beat a competent human. This was ~10 years before Deep Blue. Oh, sure, that's just a party trick. 10 years ago, we didn't have GPT-style language understanding or image generation (at least, not widely available, nor even of middling quality). I wonder what we will have in 10, 20, 100 years - whatever it is, I am fairly confident that architectural improvements will eventually lead to large capability improvements, and that current behavior and limitations are just that: current. So the argument is that they can't ever be truly intelligent or conscious because it's somehow intuitively obvious? I disagree; I don't think we have any real, scientific idea of what consciousness is, nor do we have any way to differentiate "real" from "fake".
On the other end of the spectrum, I have seen humans with dementia who were no longer able to make sense of the world. Are they conscious? What about a dog, rabbit, cricket, bacterium? I am pretty sure that, at their own level, they feel alive and conscious. I don't have any real answers, but it certainly seems to be a spectrum, and holding on to some magical or esoteric differentiator, like emotions or feelings, seems like wishful thinking to me.