
Comment by czl

2 years ago

> The party that's about to lose will use any extrajudicial means to reclaim their victory,

How will the party that's about to lose know they are about to lose?

> regardless of the consequences, because their own destruction would be imminent otherwise.

Why would AGI solve things using destruction? Consider how the most intelligent among us view our competition with other living beings. Is destruction the goal? So why would an even more intelligent AGI have that goal?

Let's say China realizes they're behind in the ASI race. They may have achieved AGI, but only barely, while the US may be getting close to ASI takeoff.

Now let's assume they're able to quickly build a large datacenter far underground, complete with a few nuclear reactors, all the spare parts they'd need, and so on. Even a greenhouse (using artificial light) big enough to feed 1,000 people.

But they realize that their competitors are about to create ASI at a level that will enable them to completely overrun all of China with self-replicating robots within 100 days.

In such a situation, the leadership MAY decide to enter those caves alongside a few soldiers and the best AI researchers, and then simply nuke all US data centers (which are presumably above ground), as well as any other data center that could be a threat, worldwide.

And by doing that, they may (or at least may think they can) buy enough time to win the ASI race, at the cost of a few billion people.

Would they do it? Would we?

  • Development of ASI is likely to be a closely guarded secret, given its immense potential impact. During the development of nuclear weapons, espionage did occur, but critical information didn't leak until after the weapons were developed. With ASI, once it's developed, it may be too late to respond effectively due to the potential speed of an intelligence explosion.

    The belief that a competitor developing ASI first is an existential threat requires strong evidence. It's not a foregone conclusion that an ASI would be used for destructive purposes. An ASI could potentially help solve many of humanity's greatest challenges and usher in an era of abundance and peace.

    Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

    It's plausible that such a being would advise the ants to cooperate rather than fight. It could help them find innovative ways to share resources, control their population, and expand into new territories without violent conflict. The superintelligent being might even help uplift the other ant colonies, as it would understand the benefits of cooperation over competition.

    Similarly, an ASI could potentially help humanity transcend our current limitations and conflicts. It might find creative solutions to global issues like poverty, disease, and environmental degradation.

    IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

    • > Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

      Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

      > IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

      Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

      So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative with governments that are in their way. And then of course our governments would realize the same thing.


    • We do know what human-level intelligences think about ant colonies, because we have a few billion instances of those human-level intelligences that can serve as a blueprint.

      Mostly, those human-level intelligences do not care at all, unless the ant colony is either (a) consuming a needed resource (eg invading your kitchen), in which case the ant colony gets obliterated, or (b) innocently in the way of any idea or plan that the human-level intelligence has conceived for business, sustenance, fun, or art... in which case the ant colony gets obliterated.
