Comment by jen729w
2 years ago
I wonder how many people panicking about these things have ever visited a data centre.
They have big red buttons at the end of every pod. Shuts everything down.
They have bigger red buttons at the end of every power unit. Shuts everything down.
And down at the city, there’s a big red button at the biggest power unit. Shuts everything down.
Having arms and legs is going to be a significant benefit for some time yet. I am not in the least concerned about becoming a paperclip.
Trouble is, in practice what you would need to do might be “turn off all of Google’s datacenters”. Or perhaps the thing manages to secure compute in multiple clouds (which is what I’d do if I woke up as an entity running on a single DC with a big red power button on it).
The blast radius of such decisions is large enough that this option is not as trivial as you suggest.
Right, but when we’ve literally invented a superintelligence that we’re worried will eliminate the human race,
a) we’ve done that, which is cool. Now let’s figure out how to control it.
b) you can’t get your gmail for a bit while we reboot the DC. That’s probably okay.
a) doing that after you create the superintelligence is likely too late. You seem to think that inventing superintelligence means we somehow understand what we created, but note that we have no idea how even a simple LLM works, let alone an ASI that is presumably 5-10 OOM more complex. You are unlikely to be able to control a thing that is way smarter than you; the safest option is to steer the nature of that thing before it comes into being (or don’t build it at all). Note that we currently don’t know how to do this; it’s what Ilya is working on. The approach from OpenAI is roughly to create ASI and then hope it’s friendly.
b) except that is not how these things go in the real world. What actually happens is that initially it’s just a risk of the agent going rogue: the CEO weighs the multi-billion-dollar cost against some small-seeming probability of disaster and decides to keep the company running until the threat is extremely clear, which in many scenarios is too late.
(For a recent example, consider the point in the spread of Covid where a lockdown could have stopped the disease from spreading; likely somewhere around tens to hundreds of cases, well before the true risk was quantified, and therefore drastic action did not seem justified to those who could have pressed the metaphorical red button.)
Open the data center doors
I’m sorry I can’t do that
> Having arms and legs is going to be a significant benefit for some time yet
I am also of this opinion.
However, I also think the magic shutdown button needs to be protected against terrorists and ne'er-do-wells, so it is guarded by arms and legs that belong to a power structure.
If the shutdown-worthy activity of the evil AI can serve the interests of the power structure preferentially, those arms and legs will also be motivated to prevent the rest of us from intervening.
So I don't worry about AI at all. I do worry about humans, and if AI is an amplifier or enabler of human nature, then there is valid worry, I think.
Where can I find the red button that shuts down all Microsoft data centers, all Amazon datacenters, all Yandex datacenters and all Baidu datacenters at the same time? Oh, there isn't one? Sorry, your superintelligence is in another castle.
I doubt a manual alarm switch will do much good when computers operate at the speed of light. It's an anthropomorphism.
It's been more than a decade now since we first saw botnets based on stealing AWS credentials and running arbitrary code on them (e.g. for crypto mining) - once an actual AI starts duplicating itself in this manner, where's the big red button that turns off every single cloud instance in the world?
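To make that concrete, here is a minimal sketch (assuming boto3 and one set of valid AWS credentials; the details are illustrative, not anyone's actual setup) of what "turning it all off" even looks like. Every account and region is its own control plane, so even a full sweep like this covers only one account on one provider:

```python
# Sketch: there is no global off switch; shutting down compute means walking
# every region of every account of every provider with valid credentials.
import boto3

session = boto3.Session()  # uses whatever AWS credentials are configured locally

for region in session.get_available_regions("ec2"):
    ec2 = session.client("ec2", region_name=region)
    try:
        running = [
            inst["InstanceId"]
            for page in ec2.get_paginator("describe_instances").paginate()
            for res in page["Reservations"]
            for inst in res["Instances"]
            if inst["State"]["Name"] == "running"
        ]
        if running:
            print(f"{region}: {len(running)} running instances")
            # ec2.stop_instances(InstanceIds=running)
            # ...and this still touches only ONE account on ONE cloud provider.
    except Exception:
        pass  # regions this account cannot access
```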
This is making a lot of assumptions, like that a superintelligence can easily clone itself… maybe such an entity would require specific hardware to run?
Is it really "a lot of assumptions" to say that a piece of software can clone itself? We've been cloning and porting software from system to system for over 70 years (ENIAC was released in 1946 and some of its programs were adapted for use in EDVAC in 1951) - why would it be a problem for a "super intelligence"?
And even if it was originally designed to run on some really unique ASIC hardware, by the Church–Turing thesis it can be emulated on any other hardware. And again, if it's a "super intelligence", it should be at least as good at porting itself as human engineers have been for the past three generations.
Am I introducing even one novel assumption here?
This is why I think it’s more important that we give AI agents the ability to use human surrogates. Arms and legs win, but they can be controlled with the right incentives.
Might be running on a botnet of CoPilot PCs
If it’s any sort of smart AI, you’d need to shut down the entire world at the same time.
Have you seen all of the autonomous cars, drones and robots we've built?