Comment by IanCal

8 hours ago

Same. I feel the Chinese room argument is a nice thing to clarify thinking.

Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.

Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.

I think it’s the last one. If a bunch of valves can’t understand but a bunch of chemicals and electrical signals can if it’s in someone’s head then I am simply applying “does it seem like biology” as part of the definition and can therefore ignore it entirely when considering machines or programs.

Searle seems to just go the other way, and I don't understand why.

Exactly. Refuting the premise of the Chinese Room is usually a sign of somebody not even willing to entertain the thought experiment. Refuting Searle's conclusion is where interesting philosophical discussions can be had.

Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).

There are two distinct counter-arguments to this way of debunking the Chinese room experiment, in no particular order.

First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical position called “monistic materialism” (sometimes “naive materialism”), and perhaps the main reason it is currently popular is that people uncritically adopt it after studying the natural sciences, as if those sciences made ground-truth statements about the underlying reality.

The key thing to remember is that this is not a valid claim within the scope of the natural sciences; it belongs to philosophy at large (the branch often called metaphysics). It is not a useless claim, but within the framework of the natural sciences it is unfalsifiable, not even wrong. Logically, from the scientific method’s standpoint, even if it were the other way around (something like monistic idealism, where the perception of space-time and the material world is the interface to, the map of, a conscious landscape that is the territory and the cause), you would have no way of proving or disproving this, just as you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition and your feelings or ability to understand change as a result, under idealism that is transparently an interaction between your mind and theirs, just with some extra steps.)

The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the more elegant of the bunch, as it grants objective existence to the least amount of arbitrary entities compared to the other two.

This is not to say that there is one truly correct map; it is a warning against mistakenly making statements about objective truth, the actual nature of reality, with the scientific method as cover. The natural sciences do not make claims about truth or objective reality; they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.

Second, while the scientific method tries to build a complete, formally correct, and provable model of reality, there are arguments that such a model is impossible to create in principle. I.e., there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, in this case all we have is maps, the perception of reality supplied by our mind, and said mind is, self-referentially, part of the very territory we are trying to model.

Therefore, it doesn’t strike me as a contradiction that a bunch of valves doesn’t understand yet we do. A bunch of valves, like an LLM, could mostly successfully mimic human responses, but the fact that the system mimics human responses is not an indication that it feels and understands as a human does; it is simply evidence that it works as designed. A very different territory may cause similar measurable responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it may cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that the biological equivalent of valves is what causes understanding. All we have shown is that those entities measurably change at the same time as some measurable behavior, which is a correlation, not a causal relationship.

  • It's not mostly mimicking, it's exactly identical. That was always the key point. Indistinguishable from the outside, one thing understands and the other doesn't.

    I feel like I could make the same arguments about the Chinese room, except my definition of "understanding" hinges on whether there's a tin of beans in the room or not. You can't tell from the outside, but that's the difference. Both cases, with a person inside answering questions, act identically, and you can never design a test to tell which room has the tin of beans in it.

    Now you might then say "I don't care if there's a tin of beans in there, it doesn't matter or make any sort of difference for anything I want to do", in which case I'd totally agree with you.

    > just like you cannot prove or disprove the claim that consciousness arises from chemical processes.

    Like understanding, I haven't seen a particularly useful definition of consciousness that works around the edges. Without that, talking of a claim like this is pointless.

  • I'd be fine if Searle just very simply said "we have a non-material soul and that's why we understand. Anything doing the exact same job but without a soul isn't understanding because understanding is limited entirely to things with souls in my definition".

    > A bunch of valves, like an LLM, could mostly successfully mimic human responses,

    The argument is not "mostly successfully", it's identically responding. The entire point of the Chinese room is that from the outside the two things are impossible to distinguish.