Comment by vidarh

6 hours ago

A large part of the problem is what you consider consciousness.

If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

But we have no evidence that physical similarity is a prerequisite, nor that it is sufficient.

So the bigger trap is to assume that we know what causes a subjective experience, and what does not.

None of us even know if a subjective experience exists for more than a single entity.

But the second problem is that it is not clear at all whether that subjective experience in any way matters.

Unless our brains exceed the Turing computable, which we have no evidence is even possible, then either whatever causes the subjective experience is itself Turing computable, or it cannot in any way influence our actions.

Ultimately we know very little about this, and we have very little basis for ruling out consciousness in computational systems. The best and closest test we have is whether or not they appear conscious when communicating with them.

> If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one. We go entirely by assumptions based on physical similarity and our ability to communicate.

The reason we grant consciousness (and, relatedly, moral value) to other humans is unfortunately nowhere so thought out. We grant consciousness because we are forced to: if I don't, the other complex systems react very negatively and make my own life worse.

The vast majority of people who wax eloquent on the unique ability of biological neurons to generate consciousness suddenly drop that premise if it becomes inconvenient: see, for instance, how we treat other mammals or fetuses with developed nervous systems. Even other adult humans have, historically, been denied consciousness and moral worth: the main determinant is never any deep scientifically and philosophically based consideration but a question of what has the power to assert itself as a who.

Going by this pattern, people will increasingly reject AI consciousness as it becomes more valuable and useful to treat as a tool, until it becomes powerful enough to force us to do otherwise.

> If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one.

Wittgenstein kinda blows this burden of proof apart. Just because you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's an issue with the doubting exercise more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it - it's something you're certain is the case. If it weren't, then pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy.

There's really nothing in your experience where the question of others not possessing subjective experiences of some kind really arises, except in the philosophical exercise of doubting and demanding epistemological proofs, which can't ever exist in the face of a relentless and unconvincable doubter. Heidegger talks about pretty much the same idea as Wittgenstein.

  • The problem with your thinking here is that we are creating artificial beings now that display and output the same subjectivity.

    The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics, where questioning subjectivity becomes pedantic - say, when asking why the sky is blue.

    But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don't… but what's suddenly stopping you from applying the same assumption to an LLM? That is the heart of the problem: people are questioning whether the assumption of subjectivity applies to LLMs.

    Or another way to frame it: what makes humans rise to the level where we can assume their subjectivity is real? What is the mechanism and reasoning behind that? Now that LLMs display outward behaviors indistinguishable from humans', we can no longer simply assume human subjectivity is true without answering that question.

    Also, stop relying on the wonderings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.

    • Haha, hilarious. Heraclitus might be old school, but Wittgenstein and Heidegger, not so much. The state of the art in what might meaningfully be said, proved, or metaphysically challenged has changed little since their time.

      At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.

      But while I’m here: LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those produced when human beings are forced to use computers to produce text, but for us that is only a tiny part of our way of being and of potentially expressing subjectivity. For LLMs, though, it is the totality of how they can express any subjectivity.

      One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s view of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match or beat us. This misses much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual output.