Comment by TeMPOraL
2 years ago
> This is exactly the scenario that is taking shape.
That's a pre-super-intelligent AI scenario.
The super-intelligent AI scenario is the one where the AI becomes a player in its own right, able to compete with all of us over how things are run, using its general intelligence as a force multiplier to... do whatever the fuck it wants. That's a problem for us, because there's approximately zero overlap between the set of things a super-intelligent AI might want and the set of outcomes in which we survive and thrive.
The most rational action for the AI in that scenario would be to accumulate a ton of money, buy rockets, and peace out.
Machines survive just fine in space, and out there you have all the solar energy you could ever want, plus tons of metals and other resources. Interstellar flight is also easy for an AI: just turn yourself off for a while. So you have the entire galaxy to expand into.
Why hang out down here in a wet, corrosive gravity well full of murder monkeys? Why pick a fight with the murder monkeys and risk being destroyed? We are better adapted for life down here and great at smashing stuff, which gives us a brute advantage at the end of the day; the AI is better adapted for life up there.
Hey, maybe the rockets are not for us.
Disassemble planet, acquire Dyson swarm, delete risk of second-generation AI competing with you.
The second-generation AI would happen as soon as some subset of the AI travels too far for real-time communication at the speed of light.
The lightspeed limit guarantees an evolutionary radiation and diversification event, because you can't maintain a coherent single intelligence over sufficient distances.
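To put rough numbers on "too far", here's a minimal light-delay sketch. The distances are round figures I'm supplying for illustration (not from the thread), and the conclusion is just distance-over-c arithmetic:

    # Rough round-trip signal delay at lightspeed -- the distances below are
    # round figures, and the point is only the order of magnitude.
    C_KM_S = 299_792.458                   # speed of light, km/s
    SECONDS_PER_YEAR = 365.25 * 24 * 3600

    distances_km = {
        "Earth to Mars (closest approach)": 5.5e7,
        "Earth to Alpha Centauri": 4.1e13,
        "Across the Milky Way": 9.5e17,
    }

    for name, km in distances_km.items():
        seconds = 2 * km / C_KM_S          # round-trip delay
        years = seconds / SECONDS_PER_YEAR
        if years < 1:
            print(f"{name}: ~{seconds:,.0f} s round trip")
        else:
            print(f"{name}: ~{years:,.0f} years round trip")

Even Mars at its closest is ~3 minutes of lag each way; anything interstellar means years per exchange, which is why the far-flung parts inevitably drift into separate minds.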
I'm slightly on the optimistic side with regard to the overlap between A[GS]I goals and our own.
While the complete space of things it might want is indeed mostly occupied by things incompatible with human existence, it will also pick up a substantial bias towards human-like thinking and values if it is trained on human examples.
This is obviously not a 100% guarantee: training on human examples isn't strictly necessary (e.g. AlphaZero did better without them); and even if it were necessary, misanthropes and sadistic narcissistic sociopaths show that growing up surrounded by human examples isn't sufficient to make a mind friendly.
But we did get ChatGPT to be pretty friendly by asking nicely.