Comment by block_dagger

19 hours ago

When I read that analogy, I found it inept. Fire is a well-defined physical process. Understanding/cognition is not necessarily physical and certainly not well-defined.

> Understanding/cognition is not necessarily physical and certainly not well-defined.

Whooha! If it's not physical what is it? How does something that's not physical interact with the universe and how does the universe interact with it? Where does the energy come from and go? Why would that process not be a physical process like any other?

I'd say understanding and cognition are at this point fully explainable mechanistically. (I am very excited to live in a time where I was able to change my mind on this!)

Where we haven't made any headway is on the connection between that and subjective experience/qualia. I feel like many of the (in my mind) strange conclusions of the Chinese Room are about that, and not really about "pure" cognition.

Simulated fire would burn down a simulated building

  • If everything is simulated, then "simulated(x)" is a vacuous predicate and tells you nothing, so you might as well throw it away and speak directly in terms of the objects instead of wrapping/prepending everything with "simulated".

That's debatable, but it is also irrelevant: the key to the argument here is that computation is by definition an abstract and strictly syntactic construct - one that has no objective reality vis-a-vis the physical devices we use to simulate computation and call "computers" - while semantics and intentionality are essential to human intelligence. And no amount of syntax can somehow magically transmute into semantics.

  • This makes no sense. You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality. Neither statement is supported by anything.

    There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.

    • > This makes no sense. You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality.

      No.

      I could jam a yardstick into the ground and tell you that it's now a sundial calculating the time of day. Is this really, objectively true? Of course not. It's true to me, because I deem it so, but this is not a fact of the universe. If I drop dead, all meaning attributed to this yardstick is lost.

      Now, thoughts. At the moment I'm visualizing a banana. This is objectively true: in my mind's eye, there it is. I'm not shuffling symbols around. I'm not pondering the abstract notion of bananas, I'm experiencing the concretion of one specific imaginary banana. There is no "depends on how you look at it." There's nothing to debate.

      > There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.

      There's no "magic" because this isn't a thing. You can't transmute syntax into semantics any more than you can transmute the knowledge of Algebra into the sensation of a cool breeze on a hot summer day. This is a category error.


    • You claim it makes no sense, but don't give a good reason why it wouldn't.

      > You could equally make the statement that thought is by definition an abstract and strictly syntactic construct - one that has no objective reality.

      This is what makes no sense, as I am not merely posing arbitrary definitions, but identifying characteristic features of human intelligence. Do you deny semantics and intentionality are features of the human mind?

      > There's also no "magic" involved in transmuting syntax into semantics, merely a subjective observer applying semantics to it.

      I have no idea what this means. The point is that computation as we understand it in computer science is purely syntactic (this was also Searle's argument). Indeed, it is modeled on the mechanical operations human computers used to perform without understanding. This property is precisely what makes computation - thus understood - mechanizable. Because it is purely syntactic and an entirely abstract model, two things follow:

      1. Computation is not an objectively real phenomenon that computers are performing. Rather, physical devices are used to simulate computation. Searle calls computation "observer relative". There is nothing special about electronics, as we can simulate computation using wooden gears that operate mechanically or water flow or whatever. But human intelligence is objectively real and exists concretely, and so it cannot be a matter of mere simulation or something merely abstract (it is incoherent and self-refuting to deny this for what should be obvious reasons).

      2. Because intentionality and the capacity for semantics are features of human intelligence, and computation is purely syntactic, there is no room in computation for intelligence. It is an entirely wrong basis for understanding intelligence and in a categorical sense. It's like trying to find out what arrangement of LEGO bricks can produce the number π. Syntax has no "aboutness" as that is the province of intentionality and semantics. To deny this is to deny that human beings are intelligent, which would render the question of intelligence meaningless and frankly mystifying.
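      The "observer relative" claim in point 1 can be sketched concretely (a hypothetical illustration with made-up voltages and encodings, not anything from the thread): the same physical trace only counts as a particular computation under an interpretation that an observer supplies.

```python
# Hypothetical illustration: one physical voltage trace, read under two
# different observer-chosen encodings, "computes" two different numbers.

voltages = [4.8, 0.1, 5.0, 0.2]  # the physical facts: just four voltages


def read_bits(trace, high_means):
    # An encoding is a choice made by the observer, not by the physics.
    bits = [1 if v > 2.5 else 0 for v in trace]
    return bits if high_means == 1 else [1 - b for b in bits]


def as_number(bits):
    # Another observer-chosen convention: most-significant bit first.
    return int("".join(map(str, bits)), 2)


positive_logic = as_number(read_bits(voltages, high_means=1))  # [1,0,1,0] -> 10
negative_logic = as_number(read_bits(voltages, high_means=0))  # [0,1,0,1] -> 5

print(positive_logic, negative_logic)  # prints: 10 5
```

      Nothing in the physics changes between the two readings; only the observer's encoding does, which is the sense in which Searle calls computation "observer relative".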


Do you believe that there are things that are not physical? Extraordinary claims require extraordinary evidence. And no, "science can't explain x hence metaphysical" is not a valid response.

But that acknowledgement would itself lend Searle's argument credence, because much of the brain = computer thesis rests on a fundamental premise: that both brains and digital computers realize computation under the same physical constraints, so the "physical substrate" doesn't matter (and that there is necessarily nothing special about biophysical systems beyond computational or resource complexity). The same thinking, by the way, leads to arguments that an abacus and a computer are essentially "the same"; at root, these are all fallacies of unwarranted/extremist abstraction and reductionism.

The history of the brain computer equation idea is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification and everyone just ran with it anyway and very few people (Searle is one of those few) have actually challenged it.

  • Unless you can show an example of something we can compute that is not Turing computable, there is no justification for the inverse, as the inverse would require something in the brain to be capable of interactions that cannot be simulated. And we have no evidence either that the brain can do something not Turing computable, or that there is something in the brain that can't be simulated.

  • Maybe consciousness is exactly like simulated fire. It does a lot inside the simulation, but is nothing on the outside.

  • > The history of the brain computer equation idea is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification and everyone just ran with it anyway and very few people (Searle is one of those few) have actually challenged it.

    And that is something that often happens whenever some phenomenon falls under scientific investigation, as it did with mechanical force or hydraulics or electricity or quantum mechanics.

Isn't that beside the point? The point is that something would actually burn down.

  • GP's point is that burning something down is by definition something that requires a specific physical process. It's not obvious that thinking is the same. So when someone says something like "just as a simulation of fire isn't the same as an actual fire (in a very important way!), a simulation of thinking isn't the same as actual thinking," they're arguing circularly, having already accepted their conclusion that both acts necessarily require a specific physical process. Daniel Dennett called this sort of argument an "intuition pump", which relies on a misleading but intuitive analogy to get you to accept an otherwise-difficult-to-prove conclusion.

    To be fair to Searle, I don't think he advanced this as an argument, but more as an illustration of his belief that thinking was indeed a physical process specific to brains.

    • He explains it in the original paper¹ and says in no uncertain terms that he believes the brain is a machine and minds are implementable on machines. What he is actually arguing is that substrate-independent digital computation will never be a sufficient explanation for conscious experience. He says that brains are proof that consciousness is physical and mechanical, but not digital. Searle is not against the computationalist hypothesis of minds; he admits that there is nothing special about minds in terms of physical processes, but he doesn't reduce everything to substrate-independent digital computation and conclude that minds are just software running on brains. There are a bunch of subtle distinctions that people miss when they try to refute Searle's argument.

      ¹https://home.csulb.edu/~cwallis/382/readings/482/searle.mind...
