Comment by kmoser

17 hours ago

> Professor Searle concluded that psychological states could never be attributed to computer programs, and that it was wrong to compare the brain to hardware or the mind to software.

Gotta agree here. The brain is a chemical computer with a gazillion inputs that are stimulated in manifold ways by the world around it, and is constantly changing states while you are alive; a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening. The two are vastly different entities that are similar in only the most abstract ways.

Searle had an even stronger version of that belief, though: he believed that a full computational simulation of all of those gazillion inputs, being stimulated in all those manifold ways, would still not be conscious and not have a 'mind' in the human sense. The NYT obituary quotes him comparing a computer simulation of a building fire against the actual building going up in flames.

  • When I read that analogy, I found it inept. Fire is a well defined physical process. Understanding / cognition is not necessarily physical and certainly not well defined.

• > Understanding / cognition is not necessarily physical and certainly not well defined.

      Whooha! If it's not physical what is it? How does something that's not physical interact with the universe and how does the universe interact with it? Where does the energy come from and go? Why would that process not be a physical process like any other?

    • I'd say understanding and cognition are at this point fully explainable mechanistically. (I am very excited to live in a time where I was able to change my mind on this!)

      Where we haven't made any headway is on the connection between that and subjective experience/qualia. I feel like much of the (in my mind) strange conclusions of the Chinese Room are about that, and not really about "pure" cognition.

    • That's debatable, but it is also irrelevant, as the key to the argument here is that computation is by definition an abstract and strictly syntactic construct - one that has no objective reality vis-a-vis the physical devices we use to simulate computation and call "computers" - while semantics or intentionality are essential to human intelligence. And no amount of syntax can somehow magically transmute into semantics.

    • Do you believe that there are things that are not physical? Extraordinary claims require extraordinary evidence. And no, "science can't explain x hence metaphysical" is not a valid response.

    • But that acknowledgement would itself lend Searle's argument credence, because much of the brain = computer thesis rests on a fundamental premise: that both brains and digital computers realize computation under the same physical constraints, that the "physical substrate" doesn't matter, and that there is necessarily nothing special about biophysical systems beyond computational or resource complexity. (The same thinking, by the way, leads to arguments that an abacus and a computer are essentially "the same"; at root these are all fallacies of unwarranted, even extremist, abstraction and reductionism.)

      The history of the brain computer equation idea is fascinating and incredibly shaky. Basically a couple of cyberneticists posed a brain = computer analogy back in the 50s with wildly little justification and everyone just ran with it anyway and very few people (Searle is one of those few) have actually challenged it.

Unless human brains exceed the Turing computable, they're still computationally equivalent, and we have no indication that exceeding the Turing computable is even possible.

  • A Turing machine operates serially on a fixed set of instructions. A human brain operates in parallel on inputs that are constantly changing. The underlying mechanism is completely different. The human brain is far, far more than a mere computation device.

    Efforts to reproduce a human brain in a computer are currently at the level of a cargo cult: we're simulating the mechanical operations, without a deep understanding of the underlying processes which are just as important. I'm not saying we won't get better at it, but so far we're nowhere near producing a brain in a computer.

I think the statement above and yours both seem to ignore "Turing complete" systems, which would indicate that a computer is entirely capable of simulating the brain, though perhaps not before the heat death of the universe; that part is yet to be proven, and depends a lot on what the brain is really doing underneath in terms of crunching.
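The "serial machine vs. parallel brain" objection upthread is worth separating from the computability question: a serial program can reproduce a synchronous parallel update exactly by double-buffering the state. A toy sketch (the threshold rule and three-unit network here are illustrative stand-ins, not a brain model):

```python
# Hypothetical sketch: a serial loop stepping a "parallel" system.
# Every unit's next state depends on all units' *current* states; computing
# the next states into a fresh list (double-buffering) means the one-at-a-time
# serial loop produces exactly the same result as a simultaneous update.

def step(state, weights, threshold=1.0):
    """One synchronous update of every unit, computed serially."""
    nxt = []
    for i in range(len(state)):
        total = sum(w * s for w, s in zip(weights[i], state))
        nxt.append(1 if total >= threshold else 0)
    return nxt  # every unit was evaluated against the same snapshot

# Tiny 3-unit network: each unit sums the other two units' outputs.
weights = [[0, 1, 1],
           [1, 0, 1],
           [1, 1, 0]]
state = [1, 0, 1]
for _ in range(3):
    state = step(state, weights)
```

This says nothing about whether simulating is feasible at brain scale, only that seriality per se is not the barrier.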

  • This depends on the assumption that all brain activity is the process of realizing computable functions. I'm not really aware of any strong philosophical or neurological positions that have established this beyond dispute. Not to resurrect vitalism or something, but we'd first need to establish that biological systems are reducible to strictly physical systems. Even so, I think there's some reason to think that the highly complex social-historical process of human development might complicate things a bit more than just brute-force "simulate enough neurons". Worse, whose brain exactly do you simulate? We are all different. How do we determine which minute differences in neural architecture matter?

    • > we'd first need to establish that biological systems are reducible to strictly physical systems.

      Or even more fundamentally, that physics captures all physical phenomena, which it doesn't. The methods of physics intentionally ignore certain aspects of reality and focus on quantifiable and structural aspects while also drawing on layers of abstractions where it is easy to mistakenly attribute features of these abstractions to reality.

That's a quantitative distinction at most, since computationally both are equivalent (as both can simulate each other's basic components).

And what's a few orders of magnitudes in implementation efficiency among philosophers?
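One direction of "simulating each other's basic components" can be sketched concretely: a McCulloch-Pitts-style threshold unit (a crude neuron abstraction) configured as a NAND gate, which is functionally complete, so in principle any digital circuit can be assembled from such units. The weights and threshold below are illustrative choices, not biological parameters:

```python
# Toy illustration: a threshold "neuron" wired to behave as a NAND gate.

def threshold_unit(inputs, weights, bias):
    """Fire (1) iff the weighted sum of inputs plus bias is non-negative."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) + bias >= 0 else 0

def nand(a, b):
    # Weights -1, -1 and bias +1.5: the unit fires unless both inputs are 1.
    return threshold_unit([a, b], [-1, -1], 1.5)

# NAND truth table: (0,0)->1, (0,1)->1, (1,0)->1, (1,1)->0
```

The converse direction, a digital machine numerically integrating neuron equations, is the standard move in computational neuroscience.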

> a computer is a digital processor that works with raw data, and tends to be entirely static when no processing is happening.

This depends entirely on how it's configured. Right now we've chosen to set up LLMs as verbally acute Skinner boxes, but there's no reason you can't set up a computer system to be processing input or doing self-maintenance (i.e., sleep) all the time.
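The "never static" configuration is just an event loop that falls back to housekeeping when the inbox is empty. A minimal sketch; the queue, tick count, and "maintain" step are all illustrative assumptions, not any real system's API:

```python
# Hypothetical always-on loop: the system never idles. With no pending input
# it runs a self-maintenance pass (a loose analogy to sleep/consolidation).
from collections import deque

def run(events, max_ticks=10):
    inbox = deque(events)
    log = []
    for _ in range(max_ticks):
        if inbox:
            log.append(f"process:{inbox.popleft()}")
        else:
            log.append("maintain")  # e.g. consolidate, prune, replay
    return log
```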

So you’re saying a brain is a computer, right?

  • In the sense that it can perform computations, yes. But the underlying mechanisms are vastly different from a modern digital computer, making them extremely different devices that are alike in only a vague sense.

    • I have always wondered if we would be capable of writing down the mechanisms that power our thoughts. I think this was one of the ideas that bubbled up from reading Godel Escher Bach many years ago. Is it possible for us to express the machine that makes us using the outputs of that machine, in the same way that it's not possible to express second-order logic using first-order logic?

      Of course, there are also processes that are not expressible as computations, but the ones I know about seem very, very distant from human thought, and it seems very improbable that they could be implemented by a brain. I also think that these have not been observed in our universe so far.

Yes. I took an intro neuroscience course a few years ago. Even to understand what is happening in one neuron during one input from one dendrite requires differential equations. And there are positive and negative inputs and modulations... it is bewildering! And how many billions of neurons with hundreds of interactions with surrounding neurons? And bundles of them, many still unknown?
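The differential equation in question is, at its very simplest, the textbook leaky integrate-and-fire model of a neuron's membrane voltage, tau * dV/dt = -(V - V_rest) + R * I(t), integrated here with a forward-Euler step. A hedged sketch; real neurons need far richer models (Hodgkin-Huxley, dendritic compartments), and every constant below is illustrative:

```python
# Leaky integrate-and-fire neuron, forward-Euler integration.
# All parameters are illustrative, not fitted to any real cell.

def simulate_lif(current, dt=0.1, tau=10.0, v_rest=-65.0,
                 v_thresh=-50.0, v_reset=-65.0, resistance=1.0):
    """Return (voltage trace, spike times) for an input current trace."""
    v, trace, spikes = v_rest, [], []
    for step, i_in in enumerate(current):
        # tau * dV/dt = -(V - V_rest) + R * I  ->  Euler update
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_thresh:
            spikes.append(step * dt)
            v = v_reset  # fire and reset
        trace.append(v)
    return trace, spikes
```

A constant 20.0 input drives the voltage past threshold and produces spikes; zero input leaves the cell silent at rest. And this is one point neuron with one input current, before any of the dendritic geometry, modulation, or network effects the comment mentions.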

  • Do you need differential equations to understand what’s happening in a transistor?

  • Searle was known for the Chinese Room experiment, which demonstrated language in its translational states to be strong enclitic feature of various judgements of the intermediary.

    • Searle also wrote almost 20 books, most of them after 1980 and the Chinese room. None that I have read are pop-science NYT-bestseller types. I suspect that is why most people only know the Chinese room. His depth of thought was much more than the Chinese Room.

    • > translational states to be strong enclitic feature of various judgements of the intermediary

      I don't understand, could you explain what you mean?

      I looked up enclitic — it seems to mean the shortening of a word by emphasizing another word. I can't understand why this would apply to the judgements of an intermediary.