Comment by IanCal

2 months ago

Same. I feel the Chinese room argument is a nice way to clarify one’s thinking.

Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.

Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.

I think it’s the last one. If a bunch of valves can’t understand but a bunch of chemicals and electrical signals can if it’s in someone’s head then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.

Searle seems to just go the other way and I don’t understand why.

First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference? It's not clear that the characteristic feature of the brain is only to compute in the classical sense. "Understanding," if it leverages quantum mechanics, might also create a guarantee of being here and now (computers and programs have no such guarantees). This is conjecture, but it's meant to stimulate imagination. What we need to get away from is the fallacy that a causal reduction of mental states to "electrical phenomena" means that any set of causes (or any substrate) will do. I don't think that follows.

Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).

  • The Chinese room has the outputs being the same; that’s really key here.

    > consciousness and understanding

    After decades of this I’ve settled on the view that these words are near useless for anything specific, only vague pointers to rough concepts. I see zero value in nailing down the exact substrates understanding is possible on without a way of looking at two things and saying which one does and which one doesn’t understand. Searle to me is arguing that it is not possible at all to devise such a test and so his definition is useless.

    • He’s not arguing that it’s not possible to devise such a test. He’s saying, lay out the features of consciousness as we understand them, look for what causes them in the brain, look for that causal mechanism in other systems.

      Although, for whatever it’s worth, most modern AIs will tell you they don’t have genuine understanding (e.g., no sense of what pleasure is or feels like, aside from human labeling).


  • > First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference?

    I can imagine a lot of things, but the argument did not go this far, it left it as "obvious" well before this stage. Also, when I see trivial simulations of our biological machinery yielding results which are _very similar_, e.g. character or shape recognition, I am left wondering if the people talking about quantum wavefunctions are not the ones that are making extraordinary claims, which would require extraordinary evidence. I can certainly find it plausible that these _could_ be one particular way that we could be superior to the electronics / valves of the argument, but I'm not yet convinced it is a differentiator that actually exists.

    • The argument doesn’t have to go that far. I think most people have the intuitive, ha, understanding that “understanding” is grounded in some kind of conscious certainty that words have meanings, associations, and even valences like pleasantness or unpleasantness. One of the cruxes of the Chinese Room is that this grounding has physical causes (as all biological phenomena do) rather than computational, purely abstract causes.

      There has to be a special motivation to instead cast understanding as “competent use of a given word or concept” (judged by whom, by the way?). The practical upshot here is that without this grounding, we keep seeing AI, even advanced AI, make trivial mistakes and require the human to give an account of value (good/bad, pleasant/unpleasant), because these programs obviously don’t have conscious feelings of goodness and badness. Nobody had to teach me that delicious things include Oreos and not cardboard.


Exactly. Refuting the premise of the Chinese Room is usually a sign of somebody not even willing to entertain the thought experiment. Refuting Searle's conclusion is where interesting philosophical discussions can be had.

Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).

There are two distinct counter-arguments to this way of debunking the Chinese room experiment, in no particular order.

First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is currently popular is that people uncritically adopt it after studying the natural sciences, as if those sciences made some sort of ground-truth statements about the underlying reality.

The key thing to remember is that this is not a valid claim within the scope of the natural sciences; it belongs to the larger body of philosophy (the branch often called metaphysics). It is not a useless claim, but within the framework of the natural sciences it’s unfalsifiable and not even wrong. Logically, from the scientific method’s standpoint, even if it were the other way around, something like monistic idealism, where the perception of space-time and the material world is the interface to (the map of) a conscious landscape that is the territory and the cause, you would have no way of proving or disproving this, just like you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition and your feelings or ability to understand change as a result, under idealism that is pretty transparently an interaction between your mind and theirs, just with some extra steps.)

The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the most elegant of the three, as it grants objective existence to the fewest arbitrary entities.

This is not to say that there is one truly correct map; it is just a warning against mistakenly trying to make a statement about objective truth, the actual nature of reality, with the scientific method as cover. The natural sciences do not make claims of truth or objective reality; they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.

Second, while what the scientific method tries to build is a complete, formally correct and provable model of reality, there are arguments that such a model is impossible to create in principle. That is, there will be parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.

Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that this system mimics human responses is not an indication of it feeling and understanding the way a human does; it’s simply evidence that it works as designed. There can be a very different territory that causes similar measurable responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that the biological equivalent of valves is what causes understanding. All we have shown is that those entities measurably change at the same time as some measurable behavior, which is not a causal relationship.

  • It's not mostly mimicking, it's exactly identical. That was always the key point. Indistinguishable from the outside, one thing understands and the other doesn't.

    I feel like I could make the same arguments about the chinese room except my definition of "understanding" hinges on whether there's a tin of beans in the room or not. You can't tell from the outside, but that's the difference. Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.

    Now you might then say "I don't care if there's a tin of beans in there, it doesn't matter or make any sort of difference for anything I want to do", in which case I'd totally agree with you.

    > just like you cannot prove or disprove the claim that consciousness arises from chemical processes.

    Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges. Without that, talking of a claim like this is pointless.

    • > talking of a claim like this is pointless.

      Not at all. The confusion you expressed in your original comment stems from that claim. If you want to overcome that confusion, we have to talk about that claim.

      Your statement was that it’s unclear how a bunch of valves doesn’t understand but chemical processes do, and that maybe your intuition is wrong. Well, it appears that your intuition is to make a claim of causality: that some sort of object (e.g., valves or neurons), which you believe is part of objective reality, is what would have to cause understanding to exist.

      So, I pointed out that assumption of such causality is not a provable claim, it is part of monistic materialism, which is a philosophical view, not scientific fact.

      Further hinting at your tendency to assume monistic materialism is calling the systems “functionally identical”. It’s fairly evident that they are not functionally identical if one of them understands and the other doesn’t; it’s easy to make this mistake if you have subconsciously already decided that understanding isn’t really a thing that exists (as many monistic materialists do).

      > Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges.

      Inability to define consciousness is fine, because avoiding logically circular definitions is difficult. However, lack of a definition for the phenomenon is not the same thing as denying its objective existence.

      You can escape the necessity of admitting its existence by waving it away as an illusion or as “not really” existing. Which is absolutely fine, as long as you recognize that it’s simply a workaround to avoid having to define things (if it’s an illusion, whom does it act on?), that illusionism about consciousness is just as unfalsifiable and unprovable as any other philosophical view about the nature of reality or consciousness, and that logically it’s quite ridiculous to dismiss as an illusion literally the only thing we empirically have direct, unmediated access to.

      > It's not mostly mimicking, it's exactly identical.

      > Both cases with a person inside answering questions act identically and you can never design a test to tell which room has the tin of beans in.

      If you constructed a system A that produces some output, and there is a system B, which you did not construct and whose inner workings you don’t fully understand, which produces identical output but is also believed to produce other output that cannot be measured with current technology (a.k.a. feelings and understanding), then you have two options: 1) say that if we cannot measure something today then it certainly doesn’t matter, doesn’t exist, etc., or 2) admit that system A could be a p-zombie.


  • I'd be fine if Searle just very simply said "we have a non-material soul and that's why we understand. Anything doing the exact same job but without a soul isn't understanding because understanding is limited entirely to things with souls in my definition".

    > A bunch of valves, like an LLM, could mostly successfully mimic human responses,

    The argument is not "mostly successfully", it's identically responding. The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.

    • You’re talking about Cartesian mind-body dualism. It’s absolutely fine not to sneak that view into an otherwise sound thought experiment, as it’s quite irrelevant: the concept of a p-zombie from the Chinese room experiment holds regardless.

      > The argument is not "mostly successfully", it's identically responding.

      This is a thought experiment. Thought experiments can involve things that may be impossible. For example, the Star Trek transporter thought experiment involves the existence of a device that instantly moves a living being; the point of the experiment is to give rise to a discussion about the nature of consciousness and identity.

      The thing in question not being able to exist is one possible resolution of the paradox; there may be a limitation we are not aware of.

      Similarly, in Searle’s experiment, the system that identically responds might never exist, just like the transporter in all likelihood cannot exist.

      > The entire point of the chinese room is that from the outside the two things are impossible to distinguish between.

      To a blind person, an orange and a dead mouse are impossible to distinguish between from 10 meters away. If you can’t distinguish between two things, it doesn’t mean the things are the same. Ability to understand, self-awareness and consciousness are things we currently cannot measure. You can either say “these things don’t exist” (we will disagree) or you have to say “the systems can be different”.
