
Comment by Zarathruster

3 hours ago

Ah ok, gotcha.

> When you said, "consciousness can't be instantiated purely in language", I took you to mean human language

No, I definitely meant the statement to apply to any kind of language, but it seems clear that I sacrificed clarity for the sake of brevity. You're not the only one who read it that way, but yeah, we're in agreement on the substance.

I think I'm still a bit confused... so, among the languages that cannot produce understanding and consciousness, you mean to include "machine language"? (And thus, any computer language that can be compiled to machine language?)

On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?

ETA: The other issue I have is with this whole idea that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine with that as long as you limit it to technical philosophical jargon. In plain English, in a practical sense, it's obvious to me that LLMs understand quite a lot -- at least, I haven't found a better word to describe LLMs' relationship with concepts.

  • > I think I'm still a bit confused... so, among the languages that cannot produce understanding and consciousness, you mean to include "machine language"? (And thus, any computer language that can be compiled to machine language?)

    It's... a little more complicated but basically yes. Language, by its nature, is indexical: it has no meaning without someone to observe it and ascribe meaning to it. Consciousness, on the other hand, requires no observer beyond the person experiencing it. If you have it, it's as real and undeniable as a rock or a tree or a mountain.

    > On your interpretation, are there any sorts of computation that Searle believes would potentially allow consciousness?

    I'm pretty sure (but not 100%) that the answer is "no".

    > ETA: The other issue I have is with this whole idea that "understanding requires semantics, and semantics requires consciousness". If you want to say that LLMs don't "understand" in that sense, because they're not conscious, I'm fine with that as long as you limit it to technical philosophical jargon.

    Sure, if you want to think of it that way. If you accept the premise that LLMs aren't conscious, then you can consign the whole discussion to the "technical philosophical jargon" heap, forget about it, and happily go about your day. On the other hand, if you think they might be conscious, and consider the possibility that we're inflicting immeasurable suffering on sapient beings that ought to be treated with kindness (and afforded some measure of rights), then we're no longer debating how many angels can dance on the head of a pin. That's a big, big "if" though.