Comment by JKCalhoun
4 days ago
> Many of them also expect that, without heroic effort, AGI development will lead to human extinction.
Odd to me. Not biological warfare? Global warming? All-out nuclear war?
I guess The Terminator was a formative experience for them. (For me perhaps it was The Andromeda Strain.)
It makes a lot of sense when you realize that for many of the “leaders” in this community like Yudkowsky, their understanding of science (what it is, how it works, and its potential) comes entirely from reading science fiction and playing video games.
Sad because Eli’s dad was actually a real and well-credentialed researcher at Bell Labs. Too bad he let his son quit school at an early age to be an autodidact.
I'm not at all a rationalist or a defender, but big yud has an epistemology that takes the form of the rationalist sacred text mentioned in the article (the sequences). A lot of it is well thought out, and probably can't be discarded as just coming from science fiction and video games. Yud has a great 4 hour talk with Stephen Wolfram where he holds his own.
Holding one’s own against Stephen Wolfram isn’t exactly the endorsement it might seem.
These aren't mutually exclusive. Even in The Terminator, Skynet's method of choice is nuclear war. Yudkowsky frequently expresses concern that a malevolent AI might synthesize a bioweapon. I personally worry that destroying the ozone layer might be an easy opening volley. Either way, I don't want a really smart computer spending its time figuring out plans to end the human species, because I think there are too many ways for it to succeed.
Terminator descends from a tradition of science fiction cold war parables. Even in Terminator 2 there's a line suggesting the movie isn't really about robots:
John: We're not gonna make it, are we? People, I mean.
Terminator: It's in your nature to destroy yourselves.
Seems odd to worry about computers shooting the ozone when there are plenty of real existential threats loaded in missiles aimed at you right now.
I'm not in any way discounting the danger represented by those missiles. In fact I think AI only makes it more likely that they might someday be launched. But I will say that in my experience the error-condition that causes a system to fail is usually the one that didn't seem likely to happen, because the more obvious failure modes were taken seriously from the beginning. Is it so unusual to be able to consider more than one risk at a time?
Most in the community consider nuclear and biological threats to be dire. Many just consider existential threats from AI to be even more probable and damaging.
Yes, sufficiently high intelligence is sometimes assumed to allow for rapid advances in many scientific areas. So, it could be biological warfare because AGI. Or nanotech, drone warfare, or something stranger.
I'm a little skeptical (there may be bottlenecks that can't be solved by thinking harder), but I don't see how it can be ruled out.
Check out "The Precipice" by Toby Ord. Biological warfare and global warming are unlikely to lead to total human extinction (though both present large risks of massive harm).
Part of the argument is that we've had nuclear weapons for a long time but no apocalypse so the annual risk can't be larger than 1%, whereas we've never created AI so it might be substantially larger. Not a rock solid argument obviously, but we're dealing with a lot of unknowns.
A better argument is that most of those other risks are not neglected, plenty of smart people working against nuclear war. Whereas (up until a few years ago) very few people considered AI a real threat, so the marginal benefit of a new person working on it should be bigger.
That's what was so strange about the EA and rationalist movements. A highly theoretical model that AGI could wipe us all out vs. the very real issue of global warming, and yet pretty much all the emphasis was on AGI.
AGI is a lot more fun to worry about and asks a lot less of you. Sort of like advocating for the "unborn" vs veterans/homeless/addicts.
My interpretation: When they say "will lead to human extinction", they are trying to vocalize their existential terror that an AGI would render them and their fellow rationalist cultists permanently irrelevant - by being obviously superior to them, by the only metric that really matters to them.
You sound like you wouldn't feel existential terror if, after typing "My interpretation: " into the text field, you saw the rest of your message suggested by Copilot exactly as you ended up writing it, letter by letter. And the same in every other conversation. How about people interrupting you in "real" life because an AI predicted your whole tirade for them, and they read it faster than you could say it, along with an analysis of it?
Dystopian sci-fi for sure, but many people dismissing LLMs as not AGI do so because LLMs are just "token predictors".
Scroll up and read the comment by JKCalhoun, for the context of my prior comment.
Or: I'm decades too old to have grown up in the modern "failure to get attention online = death" social media dystopia. A dozen lines of shell script could pretty well predict the meals I eat in a day. Or how many times I get up to pee in the night. Neither of those facts bother me.
And if I want some "the world is better for my being here" feedback - there are a dozen or more local non-profits, churches, and frail old friends/neighbors who would happily welcome my assistance or a visit on any given day.
I mean, this is the religion/philosophy which produced Roko's Basilisk (and not one of their weird offshoot murder-cults, either; it showed up on LessWrong, and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it). Their beliefs about AI are... out there.
> and was taken at least somewhat seriously by people there, to the point that Yudkowsky censored it.
Roko isn't taken seriously. What was taken seriously is ~ "if you've had an idea that you yourself think will harm people to even know about it, don't share it".