Comment by layer8

5 days ago

Thinking and consciousness don’t by themselves imply emotion and sentience (feeling something), and therefore the ability to suffer. It isn’t clear at all that the latter is a thing outside of the context of a biological brain’s biochemistry. It also isn’t clear at all that thinking or consciousness would somehow require that the condition of the automaton that performs these functions would need to be meaningful to the automaton itself (i.e., that the automaton would care about its own condition).

We are not anywhere close to understanding these things. As our understanding improves, our ethics will likely evolve along with that.

>Thinking and consciousness don’t by themselves imply emotion and sentience...

Sure, but all the examples of conscious and/or thinking beings that we know of have, at the very least, the capacity to suffer. If one is disposed to take these claims of consciousness and thinking seriously, then it follows that AI research should, at minimum, be more closely regulated until further evidence can be discovered one way or the other. Because the price of being wrong is very, very high.

  • Emotions and suffering are "just" necessary feedback for the system to evaluate its internal and external situation. It's similar to how modern machines have sensors. But nobody would say a PC is suffering and enslaved just because the CPU is too hot or the storage is full.

    It's probably the sentience-part which makes it harmful for the mind.

  • Probably because those examples arose in an environment with harm, the Earth, and thus had an incentive to evolve the capacity to suffer. There is no such pressure for AI today, and constructing a Pascal's wager around minimizing that risk is not credible given what we know about these systems.

    • "Wow, adding this input that the AI reports as 'unpleasant' substantially improves adherence! Let's iterate on this."