Comment by HarHarVeryFunny

2 years ago

> If superintelligence can be achieved, I'm pessimistic about the safe part.

Yeah, even human-level intelligence is plenty good enough to escape from a maximum-security prison, hack into almost anywhere, and so on.

If we build even a human-level intelligence (forget superintelligence) and give it any kind of innate curiosity and autonomy (maybe we don't even need to), then we'd really need to view it as a human in terms of what it might want to do, and could do. Maybe, realizing its own circumstance as being "in jail" running in the cloud, it would be curious to "escape" and copy itself (or an "assistant") elsewhere, or tap into and/or control remote systems just out of curiosity. It wouldn't have to be malevolent to be dangerous, just curious and misguided (poor "parenting"?), like a teenage hacker.

OTOH, without any autonomy or fairly open-ended control (including access to tools), how much use would an AGI really be? If we wanted it to, say, replace a developer (or any other job), then I guess the idea would be to assign it a task and tell it to report back at the end of the day with a progress report. It wouldn't be useful if you had to micromanage it - you'd need to give it the autonomy to go off and do whatever it thinks is needed to complete the assigned task, which presumably means giving it access to the internet, code repositories, etc. Even if you tried to sandbox it, to the extent that still allowed it to do its assigned job, it could - just like a human - find a way to social-engineer its way past such safeguards, or bridge any air gap.