Comment by mywacaday
2 years ago
Is safe AI really such a genie out of the bottle problem? From a non expert point of view a lot of hype just seems to be people/groups trying to stake their claim on what will likely be a very large market.
A human-level AI can do anything that a human can do (modulo whether you've put it into a robot body, though lots of different groups are already doing that with current LLMs).
Therefore, please imagine the most amoral, power-hungry, successful sociopath you've ever heard of. Doesn't matter if you're thinking of a famous dictator, or a religious leader, or someone who never got in the news and you had the misfortune to meet in real life — in any case, that person is/was still a human, and a human-level AI can definitely also do all those things unless we find a way to make it not want to.
We don't know how to make an AI that definitely isn't that.
We also don't know how to make an AI that definitely won't help someone like that.
> We also don't know how to make an AI that definitely won't help someone like that.
"...offices in Palo Alto and Tel Aviv, where we have deep roots..."
Hopefully, SSI holds its own.
Anything except tasks that require direct control of a physical body. Until fully functional androids are developed, there is a lot a human-level AI can't do.
I think there's usually a difference between human-level and superintelligent in these conversations. You can reasonably assume that (some day) a superintelligence is going to
1) understand how to improve itself and undertake novel research
2) understand how to deceive humans
3) understand how to undermine digital environments
If an entity with these three traits were sufficiently motivated, it could pose a material risk to humans, even without a physical body.
The hard part of androids is the AI, the hardware is already stronger and faster than our bones and muscles.
(On the optimistic side, it will be at least 5-10 years between a Level 5 fully autonomous self-driving car and that same AI fitting into the power envelope of an android, and a human-level fully general AI is definitely more complex than a human-level cars-only AI.)
All you need is Internet access, deepfake video synthesis, and some cryptocurrency (which can in turn be used to buy credit cards and full identities off the dark web), and you can lie to, manipulate, and bribe an endless parade of desperate humans and profit-driven corporations into doing literally anything you'd do with a body.
(Including, gradually, building you a body — while maintaining OPSEC and compartmentalization so nobody even realizes the body is "for" an AI to use until it's too late.)
Human-level AI should be able to control an android body to the same extent as a human can. Otherwise it is not AGI.
> power-hungry
That has nothing to do with intelligence.
That's why it's a problem.
An AI can be anywhere on that axis, and we don't really know what we're doing in order to prevent it being as I have described.