Comment by siglesias

16 hours ago

Searle has written responses to dozens of replies to the Chinese Room. It's likely that you can find his rebuttals to your objection in the Stanford Encyclopedia of Philosophy's entry on the Chinese Room, or deeper in a source in the bibliography. Is your rebuttal listed here?

https://plato.stanford.edu/entries/chinese-room

> In response to this, Searle argues that it makes no difference. He suggests a variation on the brain simulator scenario: suppose that in the room the man has a huge set of valves and water pipes, in the same arrangement as the neurons in a native Chinese speaker’s brain. The program now tells the man which valves to open in response to input. Searle claims that it is obvious that there would be no understanding of Chinese.

I mean, I guess all arguments eventually boil down to something which "obviously" means A to one person and "obviously" means B to me.

  • Same. I feel the Chinese Room argument is a nice tool for clarifying one's thinking.

    Two systems, one feels intuitively like it understands, one doesn’t. But the two systems are functionally identical.

    Therefore either my concept of “understanding” is broken, my intuition is wrong, or the concept as a whole is not useful at the edges.

    I think it’s the last one. If a bunch of valves can’t understand, but a bunch of chemicals and electrical signals can when they’re in someone’s head, then I am simply applying “does it seem like biology” as part of the definition, and I can therefore ignore it entirely when considering machines or programs.

    Searle seems to just go the other way, and I don’t understand why.

    • First point: if you imagine that the brain is doing something like collapsing the quantum wavefunction, wouldn't you say that this is a functionally relevant difference in addition to an ontologically relevant difference? It's not clear that the characteristic feature of the brain is only to compute in the classical sense. "Understanding," if it leverages quantum mechanics, might also create a guarantee of being here and now (computers and programs have no such guarantees). This is conjecture, but it's meant to stimulate imagination. What we need to get away from is the fallacy that a causal reduction of mental states to "electrical phenomena" means that any set of causes (or any substrate) will do. I don't think that follows.

      Second: the philosophically relevant point is that when you gloss over mental states and only point to certain functions (like producing text), you can't even really claim to have fully accounted for what the brain does in your AI. Even if the physical world the brain occupies is practically simulatable, passing a certain speech test in limited contexts doesn't really give you a strong claim to consciousness and understanding if you don't have further guarantees that you're simulating the right aspects of the brain properly. AI, as far as I can tell, doesn't TRY to account for mental states. That's partially why it will keep failing in some critical tasks (in addition to being massively inefficient relative to the brain).

    • Exactly. Refuting the premise of the Chinese Room is usually a sign of somebody not even willing to entertain the thought experiment. Refuting Searle's conclusion is where interesting philosophical discussions can be had.

      Personally, I'd say that there is a Chinese speaking mind in the room (albeit implemented on a most unusual substrate).

    • There are two distinct counter-arguments to this way of debunking the Chinese Room experiment, in no particular order.

      First, it is tempting to assume that a bunch of chemicals is the territory, that it somehow gives rise to consciousness, yet that claim is neither substantiated nor even scientific. It is a philosophical view called “monistic materialism” (or sometimes “naive materialism”), and perhaps the main reason this view is currently popular is that people uncritically adopt it after studying the natural sciences, as if those sciences made some sort of ground-truth statements about the underlying reality.

      The key thing to remember is that this is not a valid claim within the scope of the natural sciences; it belongs to philosophy at large (the branch often called metaphysics). It is not a useless claim, but within the framework of the natural sciences it’s unfalsifiable and not even wrong. Logically, from the scientific method’s standpoint, even if it were the other way around, as in monistic idealism, where the perception of time-space and the material world is the interface to (the map of) a conscious landscape that is the actual territory and cause, you would have no way of proving or disproving it, just as you cannot prove or disprove the claim that consciousness arises from chemical processes. (E.g., if somebody incapacitates some part of you involved in cognition and your feelings or ability to understand change as a result, under that view it’s pretty transparently an interaction between your mind and theirs, just with some extra steps, etc.)

      The common alternatives to monistic materialism include Cartesian dualism (some of us know it from church) and monistic idealism (cf. Kant). The latter strikes me as the most elegant of the bunch, as it grants objective existence to the fewest arbitrary entities of the three.

      This is not to say that there’s one truly correct map; it is just a warning against mistakenly trying to make statements about objective truth, the actual nature of reality, with the scientific method as cover. The natural sciences do not make claims about truth or objective reality; they make experimentally falsifiable predictions and build flawed models that aid in creating more experimentally falsifiable predictions.

      Second, while the scientific method tries to build a complete, formally correct and provable model of reality, there are arguments that such a model is impossible to create in principle. That is, there will be some parts of the territory that are not covered by the map, and we might not know what those parts are, because this territory is not directly accessible to us: unlike a landmass we can explore in person, here all we have is maps, the perception of reality supplied by our mind, and that mind is, self-referentially, part of the very territory we are trying to model.

      Therefore, it doesn’t strike me as a contradiction that a bunch of valves don’t understand yet we do. A bunch of valves, like an LLM, could mostly succeed in mimicking human responses, but the fact that such a system mimics human responses is not an indication that it feels and understands the way a human does; it is simply evidence that it works as designed. A very different territory can cause similar measurable responses to arise in an actual human. That territory, unlike the valves, may not be fully measurable, and it can cause other effects that are not measurable (like feeling or understanding). Depending on the philosophical view you take, manipulating valves may not even be a viable way of achieving a system that understands; it has not been shown that the biological equivalent of valves is what causes understanding. All we have shown is that those entities measurably change at the same time as some measurable behavior, which is a correlation, not a causal relationship.

  • I would encourage deeply digging into the intuition that brain states and computer states are the same. Start with what you know, then work backwards and see whether you still think they’re equivalent. For example, we have an intuitive understanding of which flavors (for us) are delicious and which are not, or which sounds are pleasant and which are not. If I close my eyes, I can see the color purple. I know that Nutella is delicious, and I can imagine its flavor at will. I share Searle’s intuition that the universe would be a strange place if these feelings of understanding (and pleasantness!) were simply functions not of physical states but of abstract program states.

    Keep in mind that what counts as a bit is simply a matter of convention. In one computer system, it could be a minute difference in voltage in a transistor. In another, the presence of one element versus another. In another, whether a chamber contains water or not. In another, markings on a page. On and on. On the strong AI thesis, any system that runs the steps of this program would not just produce functionally equivalent output to a brain; it would be forced to have mental states too, like imagining the taste of Nutella.

    To me, it's implausible that symbolic states FORCE mental states, or, put another way, that mental states are non-physical (consider how states like pain, euphoria, drunkenness, etc., are physically modulated through drugs; you'd have to modify this to say the drugs are really modifying symbolic states somehow). So either the Chinese Room is missing something, our understanding of physical reality is incomplete, or you have to bite the bullet that the universe creates mental states whenever a system implements the right program. But then you’re left with the puzzle of how the physical world is tied to the abstract world of symbols (how can making a mark on a page cause mental states?).

    So what’s the physical cause of consciousness and understanding that is not computable? If, for example, you take the hypothesis that “consciousness is a sequence of microtubule-orchestrated collapses of the quantum wavefunction” [1], then you can see a set of physical requirements for consciousness and understanding that force all conscious beings onto 1) roughly the same clock (because consciousness shares a cause), and 2) the same reality (because consciousness causes wavefunction collapses). That’s something you could not get merely by simulating certain brain processes in a closed system.

    [1] Not saying this is correct, but it invites one to imagine that consciousness could have physical requirements that play into some of the oddities of the (shared) quantum world. https://x.com/StuartHameroff/status/1977419279801954744