Comment by czl

2 years ago

Development of ASI is likely to be a closely guarded secret, given its immense potential impact. Espionage did occur during the development of nuclear weapons, but the leaks did not let rivals catch up until after the weapons had been built. With ASI, once it's developed, it may be too late to respond effectively due to the potential speed of an intelligence explosion.

The belief that a competitor developing ASI first is an existential threat requires strong evidence. It's not a foregone conclusion that an ASI would be used for destructive purposes. An ASI could potentially help solve many of humanity's greatest challenges and usher in an era of abundance and peace.

Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

It's plausible that such a being would advise the ants to cooperate rather than fight. It could help them find innovative ways to share resources, control their population, and expand into new territories without violent conflict. The superintelligent being might even help uplift the other ant colonies, as it would understand the benefits of cooperation over competition.

Similarly, an ASI could potentially help humanity transcend our current limitations and conflicts. It might find creative solutions to global issues like poverty, disease, and environmental degradation.

IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

> Consider a thought experiment: Imagine an ant colony somehow creates a being with human-level intelligence (their equivalent of ASI). What advice might this superintelligent being offer the ants about their conflicts over resources and territory with neighboring colonies?

Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

> IMHO rather than fighting over who develops ASI first, we must ensure that any ASI created is aligned with values like compassion and cooperation so that it does not turn on its creators.

Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative with governments that are in their way. And then of course our governments would realize the same thing.

  • > Would that be good advice if the neighboring ant colony was an aggressive invasive species, prone to making super colonies?

    If the ASI is aligned for compassion and cooperation, it might convince and assist the two colonies to merge, combining their best attributes (addressing DNA compatibility); it might also help them with needed resources and perhaps offer birth-control solutions to help them escape the Malthusian trap.

    > Similarly, I'm wondering how compassion and cooperation would work in Ukraine or Gaza, given the nature of those conflicts. The AI could advise us, but it's not like we haven't come up with that same advice before over the ages.

    An ASI aligned for compassion and cooperation could:

    1. Provide unbiased, comprehensive analysis of the situation (an odds calculator that is biased about your chances of winning is not useful; and even if early versions had such faults, an ASI would by definition transcend those biases)

    2. Forecast the long-term consequences of various actions (if the ASI judges your chance of winning to be 2%, do you declare war or seek peace?)

    3. Suggest innovative solutions that humans might not conceive

    4. Mediate negotiations more effectively

    An ASI would have better answers than these, but it's a start.

    > So then you have to ask what motivation bad actors would have to align their ASIs to be compassionate and cooperative

    Developing ASI likely requires vast amounts of cooperation among individuals, organizations, and possibly nations. Truly malicious actors may struggle to achieve the necessary level of collaboration. If entities traditionally considered "bad actors" manage to cooperate extensively, it may call into question whether they are truly malicious or whether their goals have evolved. And self-interested actors, if they are smart enough to create ASI, should recognize that an unaligned ASI poses existential risks to themselves.

We do know what human-level intelligences think about ant colonies, because we have a few billion instances of those human-level intelligences that can serve as a blueprint.

Mostly, those human-level intelligences do not care at all, unless the ant colony is either (a) consuming a needed resource (e.g. invading your kitchen), in which case the ant colony gets obliterated, or (b) innocently in the way of any idea or plan that the human-level intelligence has conceived for business, sustenance, fun, or art... in which case the ant colony gets obliterated.

  • Actually, many humans (particularly intelligent humans) do care about and appreciate ants and other insects. Plenty of people go out of their way not to harm ants, find them fascinating to observe, or even study them professionally as entomologists. Human attitudes span a spectrum.

    Notice also that the key driver of human behavior towards ants is indifference, not active malice. When ants are obliterated, it's usually because we're focused on our own goals and aren't paying attention to them, not because we bear them ill will. An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

    Also, humans and ants lack any ability to communicate or have a relationship. But humans could potentially communicate with an ASI and reach some form of understanding. An ASI might come to see humans as more than just ants.

    • > Plenty of people go out of their way not to harm ants

      Yes... I do that. But our family home was still built on ant-rich land and billions of the little critters had to make way for it.

      It doesn't matter if you build billions of ASIs that share "your and my" attitude towards the ants, as long as there exists even one indifferent, sufficiently powerful ASI that needs the land.

      > An ASI would have far greater cognitive resources to be aware of humans and factor us into its plans.

      Well yes. If you're a smart enough AI, you can easily tell that humans (who have collectively consumed too much sci-fi about unplugging AIs) are a hindrance to your plans, and an existential risk. Therefore they should be taken out because keeping them has infinite negative value.

      > But humans could potentially communicate with an ASI and reach some form of understanding.

      This seems unduly anthropomorphizing. I can also communicate with ants by spraying their pheromones, putting food on their path, etc. That is a good enough analogy for how much a sufficiently intelligent entity would need to "dumb down" its communication to communicate with us.

      Again, for what purpose? For what purpose do you need a relationship with ants, right now, aside from curiosity and general goodwill towards the biosphere's status quo?
