Comment by Sharlin
2 years ago
> You can make a safe AI today, but what happens when the next person is managing things?
The point of safe superintelligence, and presumably the goal of SSI Inc., is that there won't be a next (biological) person managing things afterwards. At least none who could do anything to build a competing unsafe SAI. We're not talking about the banal definition of "safety" here. If the first superintelligence has any reasonable goal system, its first plan of action is almost inevitably going to be to start self-improving fast enough to attain a decisive head start against any potential competitors.
I wonder how many people panicking about these things have ever visited a data centre.
They have big red buttons at the end of every pod. Shuts everything down.
They have bigger red buttons at the end of every power unit. Shuts everything down.
And down at the city, there’s a big red button at the biggest power unit. Shuts everything down.
Having arms and legs is going to be a significant benefit for some time yet. I am not in the least concerned about becoming a paperclip.
Trouble is, in practice what you would need to do might be “turn off all of Google’s datacenters”. Or perhaps the thing manages to secure compute in multiple clouds (which is what I’d do if I woke up as an entity running on a single DC with a big red power button on it).
The blast radius of such decisions is large enough that this option is not as trivial as you suggest.
Right, but when we’ve literally invented a superintelligence that we’re worried will eliminate the human race,
a) we’ve done that, which is cool. Now let’s figure out how to control it.
b) you can’t get your Gmail for a bit while we reboot the DC. That’s probably okay.
Open the data center doors.
I’m sorry, I can’t do that.
> Having arms and legs is going to be a significant benefit for some time yet
I am also of this opinion.
However, I also think that the magic shutdown button needs to be protected against terrorists and ne'er-do-wells, and is consequently guarded by arms and legs that belong to a power structure.
If the shutdown-worthy activity of the evil AI can serve the interests of the power structure preferentially, those arms and legs will also be motivated to prevent the rest of us from intervening.
So I don't worry about AI at all. I do worry about humans, and if AI is an amplifier or enabler of human nature, then there is valid worry, I think.
Where can I find the red button that shuts down all Microsoft data centers, all Amazon data centers, all Yandex data centers and all Baidu data centers at the same time? Oh, there isn't one? Sorry, your superintelligence is in another castle.
I doubt a manual alarm switch will do much good when computers operate at the speed of light. It's an anthropomorphism.
It's been more than a decade now since we first saw botnets based on stealing AWS credentials and running arbitrary code on them (e.g. for crypto mining). Once an actual AI starts duplicating itself in this manner, where's the big red button that turns off every single cloud instance in the world?
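For a sense of how far even the mundane version of that button is from "global": stopping every EC2 instance in just one AWS account already has to be done region by region. A minimal sketch, assuming Python with boto3 and credentials scoped to that single account:

    # Minimal sketch: stop every EC2 instance in ONE AWS account.
    # Assumes boto3 is installed and credentials are configured.
    # Note the scope: one account, one provider -- nothing here reaches
    # other accounts, other clouds, or anything that has copied itself.
    import boto3

    # Enumerate all regions visible to this account.
    seed = boto3.client("ec2", region_name="us-east-1")
    regions = [r["RegionName"] for r in seed.describe_regions()["Regions"]]

    for region in regions:
        ec2 = boto3.client("ec2", region_name=region)
        instance_ids = []
        # describe_instances is paginated; walk every page.
        for page in ec2.get_paginator("describe_instances").paginate():
            for reservation in page["Reservations"]:
                instance_ids += [i["InstanceId"] for i in reservation["Instances"]]
        if instance_ids:
            ec2.stop_instances(InstanceIds=instance_ids)

Scaling that to every account on every provider in every jurisdiction, simultaneously, is the part with no button.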
This is making a lot of assumptions, like that a superintelligence can easily clone itself. Maybe such an entity would require specific hardware to run?
This is why I think it's more important that we give AI agents the ability to use human surrogates. Arms and legs win, but they can be controlled with the right incentives.
Might be running on a botnet of Copilot+ PCs.
If it’s any sort of smart AI, you’d need to shut down the entire world at the same time.
Have you seen all of the autonomous cars, drones and robots we've built?
> there won't be a next (biological) person managing things afterwards. At least none who could do anything to build a competing unsafe SAI
This pitch has Biblical/Evangelical resonance, in case anyone wants to try that fundraising route [1]. ("I'm just running things until the Good Guy takes over" is almost a monarchic trope.)
[1] https://biblehub.com/1_corinthians/15-24.htm