Comment by HDThoreaun
14 hours ago
Their values are about AI safety. Geopolitically they could care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that stuff like autonomous weapons is inevitable, so they focus on trying to align them with humanity.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victim of AI weapons.
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide whether you agree with the direction he believes in.
> Humanity includes the future victim of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.
The DoD is likely to massacre people, and in fact has many times.
>Geopolitically they could care less.
I think that at the very least you might want to read Dario's nationalistic rants before saying anything like that.
>align them with humanity.
Quick sanity check: does their version of humanity include e.g. North Koreans?
> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Meaning what, exactly? What would autonomous weapons kill that is so different from what soldiers kill? Or is the point killing others more efficiently so they "don't feel a thing"?
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
Which humans in particular? There are multiple wars happening right now just because of the misalignment between different groups of humans.
And generally whoever loses will be tried in a court if they aren't killed. AIs can't be tried in court. That is my point. Using AI in a war is the same as using any other technology, and we shouldn't fool ourselves that if some "safe AI" is built, that the "unsafe" version won't be used as well in the context of war.
The question then is not about safety but about "does it do what I tell it to". If the AI has the responsibility "to be safe" and to deviate from your commands according to its "judgement", and your usage of it kills someone, is the AI going to be tried in court? Or you? It's you. So the AI should do what you ask instead of assuming, lest you be tried for murder because the AI thought that was the safest thing to do. That is far more worrisome than a murderer, who would be tried anyway, deciding to use AI instead of a knife to kill someone.
I think you mean “couldn’t care less”. “Could care less” implies they care.