Comment by Zarathruster

10 hours ago

> The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly.

At this point I'm pretty sure we've had a misunderstanding. When I referred to "language" in my original post, you seem to have construed this as a reference to the Chinese language in the thought experiment. On the contrary, I was referring to software specifically, in the sense that a computer program is definitionally a sequence of logical propositions. In other words, a speech act.

> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain.

This quote is weird and a bit unfortunate. It seems to suggest an opening: the brain simulator doesn't work because it simulates the "wrong things," but maybe a program that simulates the "right things" could be conscious. Out of context, you could easily reach that conclusion, and I suspect that if he could rewrite that part of the paper he probably would, because the rest of the paper is full of blanket denials that any simulation would be sufficient. Like this one:

> The idea that computer simulations could be the real thing ought to have seemed suspicious in the first place because the computer isn't confined to simulating mental operations, by any means. No one supposes that computer simulations of a five-alarm fire will burn the neighborhood down or that a computer simulation of a rainstorm will leave us all drenched. Why on earth would anyone suppose that a computer simulation of understanding actually understood anything? It is sometimes said that it would be frightfully hard to get computers to feel pain or fall in love, but love and pain are neither harder nor easier than cognition or anything else. For simulation, all you need is the right input and output and a program in the middle that transforms the former into the latter. That is all the computer has for anything it does. To confuse simulation with duplication is the same mistake, whether it is pain, love, cognition, fires, or rainstorms.

Regarding the electrical brain:

> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.

Right, so he describes one example of an "electrical brain" that seems like it'd satisfy the conditions for consciousness, while clearly remaining open to the possibility that a different kind of artificial (non-electrical) brain might also be conscious. I'll assume you're using this quote to support your previous statement:

> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.

I think it's fairly obvious why this is different from a simulation. If you build a system that reproduces the consciousness-causing mechanism of neurons, then... it will cause consciousness. Not simulated consciousness, but the real deal. If you build a robot that can reproduce the ignition-causing mechanism of a match striking a tinderbox, then it will start a real fire, not a simulated one. You seem to think that Searle owes us an explanation for this. Why? How are simulations even relevant to the topic?

> I don't find this paper convincing.

The title of the paper is "Why I Am Not a Property Dualist." Its purpose is to explain why he's not a property dualist. Arguments against materialism are made in brief.

> He admits at every step that materialism makes more sense

Did we read the same paper?

> He admits that usually being causally reducible means being ontologically reducible as well,

Wrong, but irrelevant

> but he claims this is not necessarily the case, without giving any other example or explanation as to what justifies this distinction.

Examples and explanations are easy to provide, because there are several:

> But in the case of consciousness, causal reducibility does not lead to ontological reducibility. From the fact that consciousness is entirely accounted for causally by neuron firings, for example, it does not follow that consciousness is nothing but neuron firings. Why not? What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as color and solidity? The difference is that consciousness has a first person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third person ontology, something that exists independently of experiences. It is as simple as that.

First-person vs. third-person ontologies are the key, whether you buy them or not. Consciousness is the only example of a first-person ontology anyone can offer, because it's the only one we know of.

> “Consciousness” does not name a distinct, separate phenomenon, something over and above its neurobiological base, rather it names a state that the neurobiological system can be in. Just as the shape of the piston and the solidity of the cylinder block are not something over and above the molecular phenomena, but are rather states of the system of molecules, so the consciousness of the brain is not something over and above the neuronal phenomena, but rather a state that the neuronal system is in.

I could paste a bunch more examples of this, but the key takeaway is that consciousness is a state, not a property.

> On the contrary, I was referring to software specifically, in the sense that a computer program is definitionally a sequence of logical propositions. In other words, a speech act.

I think this muddies the water unnecessarily. Computation is not language, even though we typically write software in so-called programming languages. The computation itself is something different from the language-like description of the software: it is the set of states, and the relationships between them, that a computer goes through.
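
To make that concrete, here's a minimal sketch in Python (purely illustrative, with made-up names): the text below is a linguistic artifact, but the computation it picks out can be given entirely as a set of states plus a transition relation between them, and that is what actually gets traversed when it runs.

```python
# Purely illustrative: the computation here is just a state set and a
# transition relation; the surrounding Python text is merely one linguistic
# description of it.

states = range(8)                        # the machine's possible states
step = {s: (s + 1) % 8 for s in states}  # the transition relation between states

def run(start, n):
    """Return the sequence of states the machine passes through."""
    trace = [start]
    for _ in range(n):
        trace.append(step[trace[-1]])
    return trace

print(run(0, 10))  # [0, 1, 2, 3, 4, 5, 6, 7, 0, 1, 2]
```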

> > He admits at every step that materialism makes more sense

> Did we read the same paper?

I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not claim.

> > He admits that usually being causally reducible means being ontologically reducible as well,

> Wrong, but irrelevant

Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separable concept from consciousness). And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.

I think it's entirely possible to reject the notion of a meaningful first person ontology completely. It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create by turning the models we use for how other people function back on ourselves. That is, we are simple computers that manipulate symbols in our brains and generate memories of their recent states as a "conscious experience", which is just a model we invented to explain why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).

  • > I think this muddies the water unnecessarily. Computation is not language, even though we typically write software in so-called programming languages. The computation itself is something different from the language-like description of the software: it is the set of states, and the relationships between them, that a computer goes through.

    In hindsight, choosing the word "language" was probably more distracting than helpful. We could get into a debate about whether computation is essentially another form of language-like syntactic manipulation, but it does share a key feature with language: observer-relative ontology. @mjburgess has already made this case with you at length, and I don't think I could improve on what's already been written, so I'll just leave it at that.

    > I should have been clearer - I meant that he admits that materialism makes more sense than idealism or property dualism, but I realize that this comes off as suggesting it makes more sense than his own position, which of course he does not claim.

    I'm not sure that I saw this specific claim made, but it's not especially important. What's more important is understanding what his objection to materialism is, such that you can a) agree with it or b) articulate why you think he's wrong. That said, it isn't the main focus of this paper, so the argument is very compressed. It also rests on the assumption that you believe consciousness is real (i.e. not an illusion), and given the rest of your comment, I'm not sure that you do.

    > Both you and he seem to find a single example of a phenomenon that is causally reducible to some constituent part, but that is not ontologically reducible to that constituent part - consciousness

    Yes, although to be clear, I'm mainly interested in correctly articulating the viewpoint expressed in the paper. My own views don't perfectly overlap with Searle's.

    > (he would add intentionality, I think, given the introduction, but it's not clear to me this is even a meaningfully separable concept from consciousness)

    I doubt he'd add it as a discrete entry because, as you correctly observe, intentionality is inseparable from consciousness (but the reverse is not true).

    > And you both claim that this is the case because of this special feature of "first person ontology", which is a different thing than "third person ontology" - which seems to me to simply be dualism by another name.

    Ok, good - this directly engages with the paper's thesis: why he's not a (property) dualist. He's trying to thread the needle between materialism and dualism. His main objection to property dualism is that consciousness doesn't exist "over and above" the brain, on which it is utterly dependent. This is probably his tightest phrasing of his position:

    > The property dualist means that in addition to all the neurobiological features of the brain, there is an extra, distinct, non physical feature of the brain; whereas I mean that consciousness is a state the brain can be in, in the way that liquidity and solidity are states that water can be in.

    Does his defense work for you? Honestly I wouldn't blame you if you said no. He spends a full third of the paper complaining about the English language (this is a theme) and how it prevents him from cleanly describing his position. I get it, even if I find it a little exhausting, especially when the stakes are starting to feel kinda low.

    > I think it's entirely possible to reject the notion of a meaningful first person ontology completely.

    On first reading, this sounds like you might be rejecting the idea of consciousness entirely. Or do you think it's possible to have a 'trivial' first person ontology?

    > It's very possible that the appearance of a first person narrative that we experience is a retroactive illusion we create by turning the models we use for how other people function back on ourselves. That is, we are simple computers that manipulate symbols in our brains and generate memories of their recent states as a "conscious experience", which is just a model we invented to explain why other animals and physical phenomena more broadly behave the way they do (since we intuitively assign emotions and intentions to things like clouds and fires and mountains, to explain their behavior).

    I'm not sure where to start with this, so I'll just pick a spot. You seem to deny that "conscious experience" is a real thing (which is equivalent to "what it's like to be a zombie"), yet hold that we nonetheless have hallucinated memories of experiences which, to be clear, we did not have, because we don't really have conscious experiences at all. But how do we replay those memories without consciousness? Do we just have fake memories about remembering fake memories? And where do the fake fake fake memories get played, in light of the fact that we have no inner lives except in retrospect?