Comment by gizajob

4 hours ago

“If you talk about having a subjective experience, then we don't know of any way to prove that even other humans than ourselves have one.”

Wittgenstein kinda blows this burden of proof apart. If you can doubt something like the subjectivity of others to the point where it needs to be reconstructed from proofs, that's an issue with the doubting experiment more than with the subjectivity. Others possessing subjectivity is the kind of hinge certainty upon which your world is constructed; it's not a proof-worthy endeavour to doubt it, it's something you're certain is the case. If it weren't, pretty well everything else about reality would be in doubt and in need of constant reconstruction from proofs, which is an exercise in madness and futility, not philosophy. There's really nothing in your experience where the question of others lacking subjective experience of some kind actually arises, except in the philosophical exercise of doubting and demanding epistemological proofs, which can never satisfy a relentless and unconvincable doubter. Heidegger talks about pretty well the same idea as Wittgenstein.

The problem with your thinking here is that we are creating artificial beings now that display and output the same subjectivity.

The argument you present, like many arguments, breaks down when the topic becomes self-referential. It makes sense for other topics: analyzing subjectivity becomes pedantic when asking questions like why the sky is blue.

But now subjectivity itself is in question. The argument you present calls for the subjectivity of others to be taken as true because all reality breaks down if we don’t… but what’s suddenly stopping you from applying the same assumptions to an LLM? That is the heart of the problem. People are questioning whether the burden of subjectivity is applicable to LLMs.

Or another way to frame it… what makes humans rise to the level where we can assume their subjectivity is true? What is the mechanism and reasoning behind that? We can no longer simply assume human subjectivity is true because LLMs are now displaying outward behaviors that are indistinguishable from humans.

Also, stop relying on the wonderings of old-school philosophers. We are now in times where you can basically classify their ideas as historically foundational but functionally obsolete. Think deeper.

  • Haha, hilarious. Heraclitus might be old school, but Wittgenstein and Heidegger, not so much. The state of the art in what might meaningfully be said, proved, or metaphysically challenged has changed little since their time.

    At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.

    But while I’m here, LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those produced when human beings are forced to use computers to produce textual outputs, but that is only a tiny part of our way of being and of potentially expressing subjectivity. For LLMs, though, it is the totality of how they can express it.

    One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This fails to capture much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual outputs.

    • > At no point in my post did I mention artificial beings or LLMs. I made a counter claim about the need for proof towards the subjectivity of others.

      You didn’t need to mention it. The context is LLMs, and I am saying your claim is pointless in that context. The subjectivity of others is completely relevant because it is the topic of subjectivity itself that is in question. Get it? You didn’t counter my own counter and instead moved on to side topics.

      > But while I’m here, LLMs do not “display and output the same subjectivity” as human beings.

      Again… you are sidetracking here and not really responding to me.

      The argument is solely within the confines of text. That’s obvious. No need to take it beyond that. You assume I am conscious because of the text you’re reading from me, I assume the same of you, and it is within that same frame that we are evaluating the LLM. Nothing beyond that. You can’t in actuality know that my experience goes beyond text, because that information is not open to you. But it is obvious you assume I’m conscious and not a rock, because you are responding to me. So the question is: why are you not engaging in a similar debate with the LLM?

      > One of the main failures of the Turing test (and why it is “old school” and invalid), and Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win.

      It’s not a failure. It was the point: remove superfluous features and gun for the narrowest possible definition of AGI.

      You like philosophy and you read texts on the topic. That means you obviously find the subjectivity in those texts relevant and produced by a high intelligence. But that’s all through text alone. You evaluate my statements and the statements of your idolized philosophers solely from text, and that is all you’ve ever used. So YOU yourself find validation through text, as do many humans, and treat it as sufficient evidence for determining whether a thing is conscious. Your own behavior validates this logically, even though your mouth is constantly moving the goalposts whenever AI clears a new hurdle.

      That is what the Turing test is gunning for. It used to be that intelligence was the ability to think and understand; now it has to encompass the totality of human sensation, because people are refusing to face the reality of impending AGI.

    • How so? If a person were confined to text only (à la Hawking), would that entitle us to dismiss their subjectivity on the basis of the medium? Also, why can training not be at least analogized to attunement to the prevailing intersubjective perception?