Comment by gumgumpost
17 hours ago
>we always used the Turing test as a yardstick for consciousness
Yet that cannot compel reality. How we define something only determines our chances of getting it right.
>Now that it's been achieved, what is the rationale for moving the goalposts?
Absolutely, if we realize it's not good enough. First of all, we cannot know whether something is or isn't conscious. You cannot prove I am, and I cannot prove you are. We simply assume. The scientific argument would be that we both work on the same principles: we have similar brains, with signals doing similar things. If we alter those signals in certain ways, we both respond in similar ways, and that's expected to some degree since our brains work in similar ways.
So based on this it's somewhat comfortable to make the jump and assume other humans are conscious; still, your own consciousness is all you actually have. And it doesn't mean you can gauge consciousness in something that isn't coming from a human brain.
Funnily enough, if we knew how, we'd be able to make an AI that would do it better than us: an AI that would gauge consciousness in other things better than a human could. There's no argument so far for why a conscious individual is required to "see" consciousness in other things.
So the closest we could ever get to certainty is with something that works like a human brain, with the delays and timings and all. And considering the amount of activity, the type of activity, and the von Neumann memory bottleneck in our current computing hardware, I seriously doubt there's anything like mammalian consciousness in GPUs.
You can argue about "consciousness" in GPUs about as much as you can argue about consciousness in a rock. There could be some kind of it, but who knows? It's way too abstract a claim to make in a scientific sense.
What I am trying to say is that we can only agree something is conscious, and only if it works closely on the same principles a human brain does. It's an agreement, not a proof, not a definition. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.
We could collectively decide tomorrow that rocks are conscious, but that would mean nothing: the certainty we'd have would be far lower than the certainty that any other human is conscious like us.
And the whole confusion will compound when, again without knowing, people start advocating that we never turn LLMs off, because that would be the equivalent of "killing" them each time, which I think will be peak nonsense.
Now a question for you: suppose someone is born with zero sensory input all of their life. They lie in a hospital bed for 20 years with zero information input of any kind. What is going on in there? Is anyone home? Are they having a conscious experience? How would you know either way? How can we divorce consciousness from experience (data flow)?
> What I am trying to say is that we can only agree something is conscious, and only if it works closely on the same principles a human brain does. It's an agreement, not a proof, not a definition. We collectively start accepting it, without KNOWING. And the safest way to do that is with something that works exactly like a human brain. With anything else we can only lose certainty.
This means that "consciousness" is simply a synonym for "human".
By that "agreement", sure, a machine cannot be conscious. But I don't think that's what most people mean when they ask whether an LLM could be conscious, because of course it's not human. So they must be asking something more interesting.