Comment by Zarathruster

4 hours ago

> I think I'm still a bit confused... so, in the languages which cannot produce understanding and consciousness, you mean to include "machine language"? (And thus, any computer language which can be compiled to machine language?)

It's... a little more complicated but basically yes. Language, by its nature, is indexical: it has no meaning without someone to observe it and ascribe meaning to it. Consciousness, on the other hand, requires no observer beyond the person experiencing it. If you have it, it's as real and undeniable as a rock or a tree or a mountain.

> On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?

I'm pretty sure (but not 100%) that the answer is "no."

> ETA: The other issue I have with this whole idea is that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine as long as you limit it to technical philosophical jargon.

Sure, if you want to think of it that way. If you accept the premise that LLMs aren't conscious, then you can consign the whole discussion to the "technical philosophical jargon" heap, forget about it, and happily go about your day. On the other hand, if you think they might be conscious, and consider the possibility that we're inflicting immeasurable suffering on sapient beings that would rightly be treated with kindness (and afforded some measure of rights), then we're no longer debating how many angels can dance on the head of a pin. That's a big, big "if" though.