
Comment by yes_man

20 hours ago

Doesn’t the LLM experience discrete continuity every time it infers the next token?

> I think consciousness is not an abstract property in the world, therefore it’s tied to certain types of entities. Therefore an AI is not going to be “conscious”

This pretty much sums up most arguments for why LLMs aren’t conscious: “I think” followed by assertions. The only real argument is: science doesn’t quantify consciousness, we cannot quantify it, so let’s not be so certain that models clearly exhibiting intelligence aren’t conscious in some way, to some degree.

I don't think you really understood my point, because you didn't reply to it at all.

I am making a linguistic argument. AI may become as sophisticated as "traditional" consciousness. But that is only "real" consciousness if you are a functionalist and think the output is all that matters.

I disagree and think that "flying" is just a weak generic word that describes both planes and birds, and not some kind of ultimate Platonic Ideal in the world.

Ditto for AI consciousness: it may develop to be as complex as traditional animal consciousness, but I'm not a functionalist, and I think it's merely the lack of sufficiently sophisticated language that makes us think they're the same thing. They're not. Planes PlaneFly through the air, while birds BirdFly.

  • I see it as: LLMs, AI, whatever, can be intelligent enough to emulate consciousness and appear from the outside as if they were conscious. But that is not proof they really have qualia, an experience of existing.

    All I am saying is that we should stop being so certain they are not conscious, since we lack a solid, quantifiable model of consciousness.