Comment by HDThoreaun
6 hours ago
Their values are about AI safety. Geopolitically, they could care less. You might think it's a bad take, but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
> AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victims of AI weapons.
Perhaps a better word would be honesty, which I find refreshing when most other big tech leaders seem to be lying through their teeth about their AI goals. I disagree that a consistent ideology isn't a virtue, though. It shows that he has spent time thinking about his stance and that it is important to him. It makes it easy to decide whether you agree with the direction he believes in.
> Humanity includes the future victims of AI weapons.
Which is why he wants to control them instead of someone he believes is more likely to massacre people. It's definitely an egotistical take, but if he's right that the weapons are inevitable, I think it's at least rational.
The DoD is likely to massacre people, and in fact has done so many times.
There's no AI safety. Either the AI does what the user asks, in which case the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety; you just need to decide whether you're on the side of alignment with humans or on the side of the AIs.
I think you mean “couldn’t care less”. “Could care less” implies they care.