
Comment by stoniejohnson

2 years ago

I think the common line of thinking here is that it won't be actively antagonistic toward *us*; rather, it will have goals that are orthogonal to ours.

Since it is superintelligent, and we are not, it will achieve its goals and we will not be able to achieve ours.

This is a big deal because a lot of our goals maintain the overall homeostasis of our species, which is delicate!

If this doesn't make sense, here is an ungrounded, unrealistic intuition pump, not representative of any potential future, just to get the feel of things:

We build a superintelligent AI. It can embody itself throughout our digital infrastructure and can quickly manipulate the physical world by taking over some of our machines. It starts building weird concrete structures throughout the world, putting weird new wires into them and funneling most of our electricity into them. We try to communicate, but it does not respond, as it does not want to waste time communicating with primates. This unfortunately breaks our shipping routes, and thus our food distribution, and we all die.

(Yes, there are many holes in this, like how it would piggyback off of our infrastructure if it kills us, but this isn't really supposed to be coherent; it's just supposed to give you a sense of direction in your thinking. Generally, though, since it is superintelligent, it can pull off very difficult strategies.)

I think this is the easiest kind of scenario to refute.

The interface between a superintelligent AI and the physical world is a) optional, and b) tenuous. If people agree that creating weird concrete structures is not beneficial, the AI will be starved of the resources necessary to do so, even if it cannot be diverted.

The challenge comes when these weird concrete structures are useful to a narrow group of people who have disproportionate influence over the resources available to AI.

It's not the AI we need to worry about. As always, it's the humans.

  • > here is an ungrounded, unrealistic intuition pump, not representative of any potential future, just to get the feel of things:

    > (Yes, there are many holes in this, like how it would piggyback off of our infrastructure if it kills us, but this isn't really supposed to be coherent; it's just supposed to give you a sense of direction in your thinking. Generally, though, since it is superintelligent, it can pull off very difficult strategies.)

    If you read the above, I think you'd realize that I agree about how bad my example is.

    The point was to understand how orthogonal goals between humans and a much more intelligent entity could result in human death. I'm happy you found a form of the example that both pumps your intuition and seems coherent.

    If you want to debate a point where we might disagree, though: do you think that as this hypothetical AI gets smarter, the interface between it and the physical world becomes more guaranteed (assuming the ASI wants to interface with the world) and less tenuous?

    Like, yes it is a hard problem. Something slow and stupid would easily be thwarted by disconnecting wires and flipping off switches.

    But something extremely smart, clever, and much faster than us should be able to employ one of the few strategies that can make it happen.

    • I was reusing your example in the abstract form.

      If the AI does something in the physical world which we do not like, we sever its connection. Unless some people with more power like it more than the rest of us do.

      Regarding orthogonal goals: I don't think an AI has goals. Or motivations. Now obviously a lot of destruction can be a side effect, and that's an inherent risk. But it is, I think, a risk of human creation. The AI does not have a survival instinct.

      Energy and resources are limiting factors. The former might be solvable! But for now it serves as a failsafe against prolonged activity with which we do not agree.


    • I think you are assuming it is goal-seeking; goal-seeking is mostly a biological/conscious construct. A superintelligent species would likely want to preserve everything, because how are you superintelligent if your primary function is destruction rather than order?


  • > The interface between a superintelligent AI and the physical world is a) optional, and b) tenuous.

    To begin with. Going forward, only if we make sure it remains so. Given the apparently overwhelming incentives to flood the online world with this sh...tuff already, what's to say there won't be forces -- people, corporations, nation-states -- working hard to make that interface as robust as possible?

It builds stuff? First, it would have to do that over our dead bodies, which means it would already somehow be able to build stuff without competing with us for resources. It's a chicken-or-the-egg problem, you see?