Comment by godelski
4 days ago
> How lucky do you feel?
I don't gamble. But I am confident P(doom) is quite low.
Despite that, I take AI safety quite seriously; I literally work on the fundamental architectures of these systems. You don't need P(doom) to be high to take doom seriously. When the potential cost is that large, the exact probability matters less than the fact that it is not approximately zero.
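To make that concrete, here's a back-of-the-envelope expected-loss sketch (all the numbers are illustrative assumptions of mine, not estimates I'm defending):

    # Back-of-the-envelope expected-loss sketch. All numbers are
    # illustrative assumptions, not estimates anyone is defending.
    p_doom = 0.001                    # a "quite low" probability of doom
    cost_of_doom = 10**10             # catastrophic cost, in arbitrary units
    cost_of_taking_it_seriously = 10**6

    expected_loss = p_doom * cost_of_doom   # 0.001 * 10^10 = 10^7
    # Even with a tiny probability, the expected loss dwarfs the cost of
    # taking safety seriously; the exact value of p_doom barely matters
    # as long as it isn't ~0.
    print(expected_loss > cost_of_taking_it_seriously)   # True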
But all you P(doom)-ers just make this work harder to do: harder to improve these systems and make them safer. You play into the hands of people like Altman, who push a complementary agenda and who recognize that you cannot stop the development of AI. In fact, the more you press the doom story, the more impossible stopping becomes. What the story of doom (like the story of immense wealth) pushes is a need to rush.
If you want to really understand this, go read about nuclear deterrence. I don't mean watch some YouTube video or read a LessWrong article; I mean grab a few books and read both sides of the argument. As it stands, this is how the military ultimately thinks, and that effectively makes it true. You don't launch nukes because your enemy will launch theirs too. You also don't announce exactly where your red line is, because keeping it ambiguous preserves it as a bargaining chip: if you state the line, your enemy will walk right up to it and do everything short of it.
So what about AI? The story being sold is that it enables a weapon of mass destruction. Take the US and China. China has to build AI because the US is building AI, and if the US gets there first, China can't risk the US using it to take out their nukes or ruin their economy. They can't take that risk even if the probability is low. The same is true in reverse: the US can't stop because China won't, and if China gets there first it could destroy the US. You see the trap?[0] Now here's the fucking kicker: suppose you believe your enemy is close to building that AI weapon. Does that cross the red line at which you would use nukes?
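To see the trap explicitly, here's a toy payoff sketch of the race (the two-player framing and the payoff numbers are assumptions of mine for illustration; only their ordering matters). Whatever the other side does, racing looks better than pausing, so both sides race even though both pausing would leave everyone better off:

    # Toy security-dilemma payoffs: (payoff to US, payoff to China).
    # Made-up numbers; only their relative ordering matters.
    payoffs = {
        ("pause", "pause"): (3, 3),     # both slow down and study safety
        ("pause", "race"):  (-10, 5),   # you pause, they get the "weapon" first
        ("race", "pause"):  (5, -10),
        ("race", "race"):   (0, 0),     # everyone rushes and cuts corners
    }
    options = ("pause", "race")

    for theirs in options:
        us_best = max(options, key=lambda mine: payoffs[(mine, theirs)][0])
        cn_best = max(options, key=lambda mine: payoffs[(theirs, mine)][1])
        print(f"If China plays {theirs!r}, the US prefers {us_best!r}")
        print(f"If the US plays {theirs!r}, China prefers {cn_best!r}")
    # Racing dominates for both sides, so (race, race) is the equilibrium,
    # even though (pause, pause) leaves everyone better off.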
So you doomers are, in a way, creating a self-fulfilling prophecy. Ironically, this is highly relevant to the real dangers of AI systems: the current (and future) danger comes from outsourcing intelligence and decision-making to these machines. That becomes less of a problem, ironically, once we actually build machines with intelligence (intelligence like that of humans or animals, not automated reasoning, a technology we've had since the 60s).
You want to reduce the risk of doom? Here's what you do. You convince both sides that instead of competing, they pursue development together. Hand in hand. Openly. No one gets AI first. Secret AI programs? Treated as an act of aggression. Yes, this still builds AI, but it dramatically reduces the risk. You don't need to rush or cut corners out of fear that your enemy gets the weapon first and destroys you. You get the "weapon" simultaneously, along with everyone else on the planet. It's not a great solution, because you still end up with "nuclear weapons" (analogously), but if everyone gets them at the same time you end up in a situation like the one we've been in for the last few decades (regardless of the cause, it is an abnormally peaceful time in human history), where MAD policies are in effect[1].
I don't think it'll happen; everyone will say "I would, but they won't" and end up failing without trying. But ultimately this is a better strategy than trying to get people to stop. You're not going to succeed in stopping it. It just won't happen. P(doom) exists in this scenario even without the development of AGI: as long as that notion of doom exists, there are incentives to rush and cut corners. People like Altman will continue to push that message and claim they are the only ones who can do it safely and do it fast (which is why they love the "Scale is All You Need" story). So if you are afraid, I don't think you're afraid enough. There's a lot of doom that exists before AGI. You don't need AGI or ASI for the paperclip scenario; such an AI doesn't even require real thinking[2].
The reason doomers make work like mine harder is that researchers like me care about the nuances and subtleties. We care about understanding how these systems work. But as long as a looming threat hangs over everything, people will argue that we have no time to study the details or figure out how these things work. You cannot make these systems safe without understanding how they work (to a sufficient degree, at least). And frankly, it isn't just doomers; it's also the people rushing to ship the next AI product. It doesn't matter to them that ignoring those details and nuances is self-sabotaging. The main assumption underlying my suggestion is that when people rush they tend to make more mistakes. Mistakes aren't guaranteed, but there is certainly a tendency for them. After all, we're only human.
You ask how lucky I feel? I'll ask you how confident you are that a bunch of people racing to create something won't make mistakes. Won't make disastrous mistakes. This isn't just a game between the US and China; a lot more countries are involved. You think all of them can race like this without making a major mistake? A mistake that brings the doom about? Me? I sure don't feel lucky about that one.
[0] It sounds silly, but this is how Project Stargate happened. No, not the current one that ironically shares the name; the one in the 70s, where they studied psychic powers. It started because a tabloid reported that the Russians were doing it, so the US started a research program in response, which in turn prompted the Russians to actually research psychic phenomena.
[1] Not to mention that if this happened it would be an act of unity unlike anything we've seen in human history. And hey, if you really want to convince AI, aliens, or whatever that we can be peaceful, here's the chance.
[2] As Melanie Mitchell likes to point out, an AGI wouldn't have this problem: if you have general intelligence, you understand that humans won't sacrifice their own lives to make more paperclips (who would even use them?). So the paperclip scenario is a danger of a sophisticated automaton rather than of intelligence.
Thank you for the thoughtful response. On first read, everything looked reasonably correct to me. However, you present the doom argument as divisive and as driving the race, when in fact it is probably the only argument for cooperation and for slowing the race down.