Comment by kragen

7 days ago

You're trying to put out a forest fire with an eyedropper.

Stock your underground bunkers with enough food and water for the rest of your life and work hard to persuade the AI that you're not a threat. If possible, upload your consciousness to a starwisp and accelerate it out of the Solar System as close to lightspeed as you can possibly get it.

Those measures might work. (Or they might be impossible, or insufficient.) Changing your license won't.

Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you. Probably a more achievable approach, and there’s precedent for it.

  • > Alternatively, persuade the AI that you are all-powerful and that it should fear and worship you.

    I understand this is a bit deeper into one of the _joke_ threads, but maybe there’s something here?

There is a distinction to be made between artificial intelligence and artificial consciousness. Whereas AI can be measured, we cannot yet measure consciousness, even though many humans could lay plausible claim to possessing it (to being conscious).

    If AI is trained to revere or value consciousness while simultaneously being unable to verify it possesses consciousness (is conscious), would AI be in a position to value consciousness in (human) beings who attest to being conscious?

    • > being unable to verify it possesses consciousness

      One of the strange properties of consciousness is that an entity with consciousness can generally feel pretty confident in believing they have it. (Whether they're justified in that belief is another question - see eliminativism.)

      I'd expect a conscious machine to find itself in a similar position: it would "know" it was conscious because of its experiences, but it wouldn't be able to prove that to anyone else.

      Descartes' "Cogito, ergo sum" refers to this. He used "cogito" (thought) to "include everything that is within us in such a way that we are immediately aware [conscii] of it." A translation into a more modern (philosophical) context might say something more like "I have conscious awareness, therefore I am."

I'm not sure what implications this might have for a conscious machine. Its perspective on human value might come from something other than belief in human consciousness - for example, our negative impact on the environment. (There was that recent case where an LLM generated text describing a willingness to kill interfering humans.)

In a best-case scenario, it might conclude that all consciousness is valuable, including human consciousness, but since humans haven't collectively reached that conclusion, it's not clear that a machine trained on human data would.

  • That only works on the AIs that aren't a real threat anyway, and I don't think it helps with the social harm done by greedy business managers with less powerful AIs. In fact, it might worsen it.