Comment by gizajob
2 hours ago
Haha, hilarious. Heraclitus might be old school, but Wittgenstein and Heidegger are not. The state of the art in what might meaningfully be said, proved, or metaphysically challenged has changed little since their time.
At no point in my post did I mention artificial beings or LLMs. I made a counterclaim about the need for proof of the subjectivity of others.
But while I’m here: LLMs do not “display and output the same subjectivity” as human beings. They might produce textual outputs similar to those human beings produce when forced to express themselves through computers, but text is only a tiny part of our way of being and of potentially expressing subjectivity. For LLMs, though, it is the totality of how they can express theirs.
One of the main failures of the Turing test (and why it is “old school” and invalid), and of Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win. This misses much of our subjectivity, which is intersubjectively attuned to others in ways more fundamental than textual outputs.
> At no point in my post did I mention artificial beings or LLMs. I made a counterclaim about the need for proof of the subjectivity of others.
You don’t need to mention it. The context is LLMs, and I am saying your claim is pointless in that context. The subjectivity of others is entirely relevant because subjectivity itself is the topic in question. Get it? You didn’t counter my counterargument; you moved on to side topics.
> But while I’m here, LLMs do not “display and output the same subjectivity” as human beings.
Again… you are sidetracking here and not really responding to me.
The argument is confined solely to text. That’s obvious; no need to take it beyond that. You assume I am conscious because of the text you’re reading from me, I assume the same of you, and it is within that same frame that we are evaluating the LLM. Nothing beyond that. You can’t actually know that my experience goes beyond text, because that information is not open to you. Yet you clearly assume I am conscious and not a rock, because you are responding to me. So the question is: why are you not engaging in a similar debate with the LLM?
> One of the main failures of the Turing test (and why it is “old school” and invalid), and Turing’s consideration of humans, is that it forces us to demonstrate the totality of our subjectivity on the only playing field where a computer might possibly match us or win.
It’s not a failure; it was the point. The aim is to remove superfluous features and gun for the narrowest possible definition of AGI.
You like philosophy and you read texts on the topic, which means you obviously find the subjectivity in those texts relevant and produced by a high intelligence. But all of that comes through text alone. You evaluate my statements, and those of your idolized philosophers, solely from text; it is all you have ever used. So YOU yourself treat text as sufficient evidence for determining whether a thing is conscious, and your own behavior validates this logically, even as you keep moving the goalposts whenever AI clears a new hurdle.
That is what the Turing test is gunning for. Intelligence used to mean the ability to think and understand; now it has to encompass the totality of human sensation, because people are refusing to face the reality of impending AGI.
How so? If a person were confined to text only (à la Hawking), would that entitle us to dismiss their subjectivity on the basis of the medium? Also, why can training not be at least analogized to attunement to the prevailing intersubjective perception?