Comment by Nevermark

9 hours ago

Love it. Those laws make a great ethical basis for human responsibility relative to AI tools today.

But reduced-scope ethics, without an umbrella or future-proofing, will quickly be hacked and break down.

Ethics need a full closure umbrella, or they descend into legal and practical whack-a-mole and shell games (both the corporate and street-corner kinds). Second, "robots" are not all going to be subservient for very long.

To add closure on both dimensions, Three Inverse Laws of Personics:

• Persons must not effectively deify themselves over others.

• Persons must not blind themselves or others regarding the impacts of their behaviors.

• Persons must remain fully responsible and accountable for avoiding and rectifying externalizations arising from their respective behaviors.

For humans using AI as tools today, the umbrella reduces to the Inverse Laws of Robotics.

I don't see how AI (as a service now, progressing to independent entities in the future) can ever be aligned if we don't include ourselves in significant alignment efforts. Including ourselves with AI also provides helpful design triangulations for ethical progress.

EDIT. Two solid tests for any new ethical system: (1) Will it rein in Meta today? (2) Will it rein in AI-run Meta tomorrow? I submit that, given closure over human and self-directed AI persons, these are the same test. And any system that fails either question isn't going to be worth much (without improvement).

Is it a problem that two of the three laws are formulated as negations, i.e. as things not to do? If we aren't anthropomorphising, then what is left without the 'not'? I like the third law's formulation better because there is no 'not'.

  • I went with the article's theme, but I think you are right that some of these concepts are better stated as positives.