Comment by Zarathruster

3 hours ago

> No, his argument is that consciousness can't be instantiated purely in software, that it requires specialized hardware. Language is irrelevant, it was only an example.

It's by no means irrelevant; the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema

Side note: while the Chinese Room put him on the map, he had as much to say about Philosophy of Language as he did about Philosophy of Mind. It was of more than passing interest to him.

> Instead, he believes that you could create a machine consciousness by building a brain of electronic neurons, with condensers for every biological dendrite, or whatever the right electric circuit you'd pick. He believed that this is somehow different than a simulation, with no clear reason whatsoever as to why.

I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.

I have, however, heard him say the following:

1. The structure and arrangement of neurons in the human nervous system creates consciousness.

2. The exact causal mechanism for this phenomenon is unknown.

3. If we were to engineer a set of circumstances such that the causal mechanism for consciousness (whatever it may be) were present, we would have to conclude that the resulting entity, be it biological, mechanical, etc., is conscious.

He didn't have anything definitive to say about the causal mechanism of consciousness, and indeed he didn't see that as his job. That was to be an exercise left to the neuroscientists, or in his preferred terminology, "brain stabbers." He was confident only in his assertion that it couldn't be caused by mere symbol manipulation.

> it is in fact obvious he held dualistic notions where there is something obviously special about the mind-brain interaction that is not purely computational.

He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed-over property dualism:

https://faculty.wcas.northwestern.edu/paller/dialogue/proper...

> It's by no means irrelevant; the syntax vs. semantics distinction at the core of his argument makes little sense if we leave out language: https://plato.stanford.edu/entries/chinese-room/#SyntSema

The Chinese room is an argument caked in notions of language, but it is in fact about consciousness more broadly. Syntax and semantics are not merely linguistic concepts, though they originate in that area. And while Searle may have been interested in language as well, that is not what this particular argument is mainly about (the title of the paper is "Minds, Brains, and Programs" - the first hint that it's not about language).

> I've never heard him say any such thing, nor read any word he's written attesting to this belief. If you have a source then by all means provide it.

He said both things in the paper that introduced the Chinese room, in his answers to the potential rebuttals.

Here is the quote about a brain that would be run in software:

> 3. The Brain Simulator reply (MIT and Berkeley)

> [...] The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.

And here is the bit about creating a real electrical brain, which he considers could be conscious:

> "Yes, but could an artifact, a man-made machine, think?"

> Assuming it is possible to produce artificially a machine with a nervous system, neurons with axons and dendrites, and all the rest of it, sufficiently like ours, again the answer to the question seems to be obviously, yes. If you can exactly duplicate the causes, you could duplicate the effects. And indeed it might be possible to produce consciousness, intentionality, and all the rest of it using some other sorts of chemical principles than those that human beings use.

> He believed that consciousness is an emergent state of the brain, much like an ice cube is just water in a state of frozenness. He explains why this isn't just warmed-over property dualism: https://faculty.wcas.northwestern.edu/paller/dialogue/proper...

I don't find this paper convincing. He admits at every step that materialism makes more sense, and then he asserts that consciousness is still not ontologically the same thing as the neurobiological states/phenomena that create it. He admits that being causally reducible usually means being ontologically reducible as well, but he claims this is not necessarily the case, without giving any other example or explanation of what justifies this distinction. I am simply not convinced.