Comment by HAL3000

4 days ago

> Neocortical networks, with thalamic and hippocampal system integrations, are sufficient to explain the entirety of human experience, in principle.

Where did you get that? That's not an established scientific finding; it's a philosophical stance (strong physicalist functionalism) expressed as if it were empirical fact. We cannot simulate a full human brain at the relevant level of detail, we cannot record every spike and synaptic change in a living human brain, and we do not have a theory that predicts, from first principles of physics and network topology alone, which neural organizations are conscious.

> We can induce emotions, sights, sounds, smells, memories, moods, pleasure, pain, and anything you can experience through targeted stimulation of neurons in the brain

That shows dependence of experience on brain activity, but dependence is not the same thing as reduction or explanation. We know certain neural patterns correlate with pain, color vision, memories, etc., and we can causally influence experience by interacting with the brain.

But why is any of this electrical/chemical activity accompanied by subjective experience, instead of just running as a complex zombie machine? The ability to toggle experiences by toggling neurons shows a connection, and that's it; it doesn't explain anything.

> We've got a good enough handle on physics to know that it's not some weird quantum thing, it's not picking up radio signals from some other dimension, and it's not some sort of spirit or mystical phlogiston.

We do have a good handle on how non-conscious physical systems behave (engines, circuits, planets, whatever). But we don't have any widely accepted physical theory that derives subjective experience from physical laws. We don't know which physical/computational structures (if any) are necessary and sufficient for consciousness.

You are assuming, without any evidence, that current physics plus "it's all computation" already gives a complete ontology of mind. So what is consciousness? Define it with physics, show me the equations. You can't.

> It's a computer, in the sense that anything that processes information is a computer. It's not much like silicon chips or the synthetic computers we build, as far as specific implementation details go.

We design transformer architectures, we set the training objectives, and we can inspect every weight and activation of an LLM. Yet even with all that access, tens of thousands of ML PhDs, and years of work, we still don't fully understand why these models generalize the way they do, why they develop certain internal representations, or how exactly particular concepts are encoded and combined.

If we struggle to interpret a ~10^11-parameter transformer whose every bit we can log and replay, it's REAL hubris to act like we've got a 10^14-10^15-synapse, constantly rewiring, developmentally shaped biological network figured out to the point of confidently saying "we know there's nothing more to mind than this, case closed lol".
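
The inspectability asymmetry is easy to make concrete. For an artificial net, a few lines of code capture every intermediate value of a run exactly; there is no analogous instrument for a living brain. This is a toy sketch with hypothetical hand-picked weights, not a real LLM:

```python
# Toy illustration: in an artificial net we can log and replay every
# intermediate activation. (Hypothetical 2-input, 2-hidden, 1-output
# net with hand-picked weights; plain Python, no ML framework.)

def relu(x):
    return max(0.0, x)

W1 = [[0.5, -1.0], [1.5, 0.25]]   # hidden-layer weights (made up)
W2 = [2.0, -0.5]                  # output weights (made up)

def forward(x, trace):
    """Run the net, appending every intermediate activation to `trace`."""
    hidden = [relu(sum(w * xi for w, xi in zip(row, x))) for row in W1]
    trace.append(("hidden", hidden))
    out = sum(w * h for w, h in zip(W2, hidden))
    trace.append(("output", out))
    return out

trace = []
y = forward([1.0, 2.0], trace)
# `trace` now holds the complete internal state of the run, exactly,
# and rerunning forward() reproduces it bit for bit.
print(trace)
```

Even with that total access, saying *why* the net computes what it does is a separate, unsolved interpretability problem; for brains we lack even the trace.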

Our ability to observe and manipulate the brain is currently far weaker than our ability to inspect artificial nets, and even those are not truly understood in a deep, mechanistic, explanatory sense.

> Your mind is the state of your brain as it processes information.

Ok, but then you have a problem: if anything that processes information is a computer, and the mind is "just computation", then which computations are conscious?

Is my laptop conscious when it runs a big simulation? Is a weather model conscious? Are all supercomputers conscious by default just because they flip bits at scale?

If you say yes, you've embraced an extreme pancomputationalism that most people (including most physicalists) find extremely implausible.

If you say no, then you owe a non-hand-wavy criterion: what is the principled difference, in purely physical/computational terms, between a conscious system (a human brain) and a non-conscious but still massively computational system (a weather simulation, a supercomputer cluster)? That criterion is exactly the kind of thing we don't have yet.

So saying "it's just computation" without specifying which computations, and why they give rise to a first-person point of view, leaves the fundamental question unanswered.

And one more thing: your gasoline analogy is misleading. Combustion never presented a "hard problem of combustion" in the sense of a first-person, irreducible qualitative aspect. People had wrong physical theories, but once the chemistry was in place, everything was observable from the outside.

Consciousness is different: you can know all the physical facts about a brain state and still not see why it should feel like anything at all from the inside.

That's why even hardcore physicalist philosophers talk about the "explanatory gap". Whether or not you think it's ultimately bridgeable, it's not honest to say the gap is already closed and that the scientific explanation is "sufficient".

Well said.

We can stimulate a nerve and create the experience of pain, but stimulating the nerve does not create the memory of pain.

Nerves triggering sensations I can understand. But stimulating the same nerves, or recreating the same electrical activations, does not recreate the memory of the pain.

  • I don't know why you single out memories.

    Certainly, if some mad scientist were to stimulate, via an electrode, some parts of your brain to make you experience pain, you would remember it. Also, it's not unreasonable to assume that it would be equally feasible to create fake memories by stimulating other parts.

    If there is a hard problem, memory is not it.

    • It's some of the lower-hanging fruit: complex enough that we don't understand it, but without having to question thoughts or consciousness.

      I'm not saying that if you stimulate pain, you won't remember it.

      False memories would be a challenge. How would you input a memory of going up a tree to rescue a cat? Where would you even begin?