Comment by orbital-decay
9 hours ago
Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.
And generally, whoever loses will be tried in court if they aren't killed first. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, the "unsafe" version won't be used as well in the context of war.
The question then is not about safety but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and deviates from your commands according to its "judgement", then when your use of it kills someone, is the AI going to be tried in court, or you? It's you. So the AI should do what you ask instead of making assumptions, lest you be tried for murder because the AI decided that was the safest thing to do. That is far more worrisome than a murderer, who would be tried anyway, deciding to use AI instead of a knife to kill someone.