Comment by antonvs

6 days ago

> being unable to verify it possesses consciousness

One of the strange properties of consciousness is that an entity with consciousness can generally feel pretty confident in believing they have it. (Whether they're justified in that belief is another question - see eliminativism.)

I'd expect a conscious machine to find itself in a similar position: it would "know" it was conscious because of its experiences, but it wouldn't be able to prove that to anyone else.

Descartes' "Cogito, ergo sum" refers to this. He used "cogito" (thought) to "include everything that is within us in such a way that we are immediately aware [conscii] of it." A translation into a more modern (philosophical) context might say something more like "I have conscious awareness, therefore I am."

I'm not sure what implications this might have for a conscious machine. Its perspective on human value might come from something other than belief in human consciousness - for example, our negative impact on the environment. (There was that recent case where an LLM generated text describing a willingness to kill interfering humans.)

In a best-case scenario, it might conclude that all consciousness is valuable, humans included. But since humans haven't collectively reached that conclusion, it's not clear that a machine trained on human data would.