Comment by encyclopedism

1 month ago

> If we can't define, recognize or measure them, how exactly do we know that AI doesn't have them?

In the same way my digital thermometer doesn't have qualia. LLMs do not either. I really tire of this handwaving of 'magic' concepts into LLMs.

Qualia being so difficult to define, and yet being such an immediate experience that we humans all know intimately and directly, is quite literally the problem. Attempted definitions fall short, and humans have tried, and I mean really tried hard, to solve this.

Please see the Hard problem of consciousness: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

> In the same way my digital thermometer doesn't have qualia

And I repeat the question: how do you know your thermometer doesn't? You don't; you're just declaring a fact you have no basis for knowing. That's fine if you want a job in a philosophy faculty, but it's worthless to people trying to understand AI. Again, cf. furffle. Thermometers have that, you agree, right? Because you can't prove they don't.

  • You're just describing panpsychism, which itself is the subject of much critique due to its nonfalsifiability and lack of predictive power. Not to mention it ignores every lesson we've learned in cognition thus far.

    A thermometer encoding "memory" of a temperature is completely different from a thermometer on a digital circuit, or a thermometer attached to a fully-developed mammalian brain. Only the last of these three for sure has the required circuitry to produce qualia, at least as far as I can personally measure without invoking solipsism.

    It's also very silly to proclaim that philosophy of mind is not applicable to increasingly complex thinking machines. That sounds like a failure to consider the bodies of work behind both philosophy of mind and machine cognition. Again, "AI" is ill-defined, and your consistent usage of that phrase instead of something more precise suggests you still have a long journey ahead of you for "understanding AI".

  • God, can we fucking quit with this "philosophy is bullshit" stuff. Like there are literally Faculty in Philosophy all over the world trying to understand AI. Philosophy faculty do stuff, they try to understand things, most of the ideas we are talking about here came from philosophers.

    • Philosophy seems a term generally reserved for the stuff we don't understand yet and so is inherently kind of speculative. Once you have a definite answer it gets called science instead.


    • > Like there are literally Faculty in Philosophy all over the world trying to understand AI.

      There surely are. The problem is that they are failing. While the practical nerds are coming up with some pretty good ideas.

      And this was what philosophy was supposed to be for! Like, they've been arguing about angels on pinheads for centuries about the essence of consciousness and the uniqueness of the human condition and whatnot. AND HERE WE ARE AT THE DAWN OF NON-HUMAN INTELLIGENCE AND THEY HAVE NOTHING USEFUL TO SAY.

      Basically at what point do we just pack it in and admit we all fucked up?

    • It seems to me that 'philosophy is meaningless' has been ingrained into so many people that it's almost propaganda-esque!

      To see this sentiment from supposed 'scientific' individuals is shocking. I wonder if they could define what science actually is.


The problem is that just like your digital thermometer, 50 human brain neurons in a petri dish "obviously" don't have qualia either.

So you end up either needing to draw a line somewhere between mechanical computation and qualia computation, or you can relegate it to supernatural (a soul) or grey areas (quantum magic).

  • What I'm trying to tease out isn't just an opinion. It's a generally understood problem in the scientific community. I'm highlighting it to illustrate the issues at hand.

    > So you end up either needing to draw a line somewhere between mechanical computation and qualia computation, or you can relegate it to supernatural (a soul) or grey areas (quantum magic).

    Quite literally, the jury is still out. It is a hotly debated topic approached from various angles. The arguments are nuanced, which is why you will find ideas such as panpsychism thrown into the mix. I hate appealing to authority, but in this instance it is more than warranted: humans have grappled with this for centuries and the problem hasn't gone away.

    Please see: https://en.wikipedia.org/wiki/Hard_problem_of_consciousness

    • > In the same way my digital thermometer doesn't have qualia. LLMs do not either.

      The hard problem of consciousness doesn't support either of those statements, and instead illustrates why they can't confidently be made.

      So it's confusing because you seem to recognize that qualia cannot currently be measured, while also making a statement measuring qualia.


    • > The meta-problem of consciousness is (to a first approximation) the problem of explaining why we think that there is a [hard] problem of consciousness.

      And thus we have this sprawling discussion. :)

  • I think there are several lines. Phase changes happen relatively suddenly, when a system or subsystem reaches a critical threshold. The experience of "qualia" certainly involves many such phase changes as a complex, dynamical system grows in complexity while maintaining stability.

    A sufficiently complex organism lacking eyes but having light-sensitive organs still experiences qualia, if you define it the right way. But does it experience heartbreak like I do? It isn't an all-or-nothing situation, even if we don't yet know where these lines are.

    This supports the idea that subjective consciousness emerges from complexity in systems that have sensory feedback loops. The simpler the system, the smaller the qualia space.