Comment by philwelch

2 years ago

> I have the same problem with this as I did the original version of Anthropic — you cannot achieve "unilateral AI safety". Either the danger is real and humanity is not safe until all labs are locked down, or the danger is not real and there's nothing really to discuss.

It’s not clear to me that this is true. Remember that there’s a lot of vagueness about the specific AI risk model, beyond the general idea that an AI will outsmart everyone, take over the world, and then do something with all that power that might not be good for us.

How do you shut down all the AI labs? You can’t; some of them are in other countries, including ones with nuclear weapons. But maybe you can defeat an evil AI with a good AI.

I’m not sure “good” and “evil” are well enough defined to hope for this. A machine superintelligence would be unknowable to us; think gorillas versus Homo sapiens.

  • From the gorilla’s perspective, humans who preserve gorilla habitats are preferable to humans who hunt gorillas for sport. But gorillas have tribes of their own,[1] and if gorillas could invent humans, some might invent humans who hunt the enemy tribe while helping their own. So if you were a gorilla from the rival tribe, and you had nuanced ethics of your own, you might even try to invent humans who stop the enemy humans without necessarily exterminating the enemy tribe.

    [1] I don’t know whether gorillas specifically have been observed engaging in tribal warfare, but chimpanzees have — the Gombe Chimpanzee War, for instance.