Comment by mitthrowaway2
5 days ago
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways for such a plan to succeed.
Terminator descends from a tradition of Cold War science fiction parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John: We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers destroying the ozone layer when there are plenty of real existential threats loaded into missiles aimed at you right now.
I'm not in any way discounting the danger represented by those missiles. In fact, I think AI only makes it more likely that they might someday be launched. But I will say that in my experience, the error condition that causes a system to fail is usually the one that didn't seem likely to happen, because the more obvious failure modes were taken seriously from the beginning. Is it so unusual to be able to consider more than one risk at a time?