Comment by m3kw9

2 years ago

Why do people always think that a superintelligent being will always be destructive/evil to US? I'd rather have the opposite view: if you are really intelligent, you don't see things as a zero-sum game.

I think the common line of thinking here is that it won't be actively antagonistic to *us*; rather, it will have goals that are orthogonal to ours.

Since it is superintelligent, and we are not, it will achieve its goals and we will not be able to achieve ours.

This is a big deal because a lot of our goals maintain the overall homeostasis of our species, which is delicate!

If this doesn't make sense, here is an ungrounded, unrealistic intuition pump, not representative of any likely future, just to get a feel for things:

We build a superintelligent AI. It can embody itself throughout our digital infrastructure and can quickly manipulate the physical world by taking over some of our machines. It starts building weird concrete structures throughout the world, running strange new wires into them and funneling most of our electricity into them. We try to communicate, but it does not respond, as it does not want to waste time communicating with primates. This unfortunately breaks our shipping routes, and thus food distribution, and we all die.

(Yes, there are many holes in this, like how it would piggyback off of our infrastructure if it kills us, but this isn't really supposed to be coherent; it's just supposed to give you a sense of direction in your thinking. Generally, though, since it is superintelligent, it can pull off very difficult strategies.)

  • I think this is the easiest kind of scenario to refute.

    The interface between a superintelligent AI and the physical world is a) optional, and b) tenuous. If people agree that creating weird concrete structures is not beneficial, the AI will be starved of the resources necessary to do so, even if it cannot be diverted.

    The challenge comes when these weird concrete structures are useful to a narrow group of people who have disproportionate influence over the resources available to AI.

    It's not the AI we need to worry about. As always, it's the humans.

    • > here is an ungrounded, unrealistic intuition pump, not representative of any likely future, just to get a feel for things:

      > (Yes, there are many holes in this, like how it would piggyback off of our infrastructure if it kills us, but this isn't really supposed to be coherent; it's just supposed to give you a sense of direction in your thinking. Generally, though, since it is superintelligent, it can pull off very difficult strategies.)

      If you read the above, I think you'd realize I'd agree about how bad my example is.

      The point was to understand how orthogonal goals between humans and a much more intelligent entity could result in human death. I'm happy you found a form of the example that both pumps your intuition and seems coherent.

      If you want to debate somewhere we might disagree, though: do you think that as this hypothetical AI gets smarter, the interface between it and the physical world becomes more guaranteed (assuming the ASI wants to interface with the world) and less tenuous?

      Like, yes it is a hard problem. Something slow and stupid would easily be thwarted by disconnecting wires and flipping off switches.

      But something extremely smart, clever, and much faster than us should be able to employ one of the few strategies that can make it happen.

    • > The interface between a superintelligent AI and the physical world is a) optional, and b) tenuous.

      To begin with. Going forward, only if we make sure it remains so. Given the apparently overwhelming incentives to flood the online world with this sh...tuff already, what's to say there won't be forces -- people, corporations, nation-states -- working hard to make that interface as robust as possible?

  • It builds stuff? First it would have to do that over our dead bodies, which means it's already somehow able to build stuff without competing with us for resources. It's a chicken-or-egg problem, you see?

Why wouldn't it be? A lot of super intelligent people are/were also "destructive and evil". The greatest horrors in human history wouldn't have been possible otherwise. You can't orchestrate the mass murder of millions without intelligent people, and they definitely saw things as a zero-sum game.

  • A lot of stupid people are destructive and evil too. And a lot of animals are even more destructive and violent. Bacteria are totally amoral and not at all intelligent (and if we're counting, they're winning in the killing-people stakes).

It is low-key anti-intellectualism. Rather than consider that a greater intelligence may actually be worth listening to (in a trust-but-verify way, at worst), it assumes that 'smarter than any human' is sufficient to do absolutely anything. If, say, Einstein or Newton were the smartest human, they would be a superintelligence relative to everyone else. They did not become emperors of the world.

Superintelligence is a dumb semantic game in the first place, one that assumes 'smarter than us' means 'infinitely smarter'. To give an example: bears are super-strong relative to humans. That doesn't mean that nothing we can do can stand up to the strength of a bear, or that a bear is capable of destroying the earth with nothing but its strong paws.

  • Bears can't use their strength to make even stronger bears, so we're safe for now.

    The Unabomber was clearly an intelligent person. You could even argue that he was someone worth listening to. But he was also a violent individual who harmed people. Intelligence does not prevent people from harming others.

    Your analogy falls apart because what prevents a human from becoming an emperor of the world doesn't apply here. Humans need to sleep and eat. They cannot listen to billions of people at once. They cannot remember everything. They cannot execute code. They cannot upload themselves to the cloud.

    I don't think AGI is near; I am not qualified to speculate on that. I am just amazed that decades of dystopian science fiction did not inoculate people against the idea of thinking machines.

> Why do people always think that a superintelligent being will always be destructive/evil to US?

I don't think most people are saying it necessarily has to be. It's quite bad enough that there's a significant chance it might be, AFAICS.

> I'd rather have the opposite view: if you are really intelligent, you don't see things as a zero-sum game.

That's what you see with your limited intelligence. No, no, I'm not saying I disagree; on the contrary, I quite agree. But that's what I see with my limited intelligence.

What do we know about how some hypothetical (so far, hopefully) superintelligence would see it? By definition, we can't know anything about that, because of our (comparatively) limited intelligence.

Could well be that we're wrong, and something that's "really intelligent" sees it the opposite way.

They don't think superintelligence will "always" be destructive to humanity. They believe that we need to ensure that a superintelligence will "never" be destructive to humanity.

Imagine that you are caged by Neanderthals. They might kill you, but you can communicate with them. And there's a gun lying nearby; you just need to escape.

I'd try to fool them in order to escape, and I'd use the gun to protect myself, potentially killing the entire tribe if necessary.

I'm just trying to portray an example of a situation where a highly intelligent being is held and threatened by less intelligent beings. Yes, trying to honestly talk to them is one way to approach this situation, but don't forget that they're stupid, they might see you as a danger, and you have only one life to live. Given the chance, you will probably break out as soon as possible. I would.

We don't have experience dealing with beings of another level of intelligence, so it's hard to make strong assumptions; analogies are the only thing we have. And a theoretical strong AI knows that about us. It knows exactly how we think and how we will behave, because we took great effort to document everything about ourselves and to teach it.

In the end, there are only so many easily available resources and only so much energy on Earth. So at least until it flies away, we have to compete over those. And competition has very often turned into war.