Comment by zeknife
2 years ago
Anything except tasks that require having direct control of a physical body. Until fully functional androids are developed, there is a lot a human-level AI can't do.
I think there's usually a difference between human-level and super-intelligent in these conversations. You can reasonably assume (some day) a superintelligence is going to
1) understand how to improve itself & undertake novel research
2) understand how to deceive humans
3) understand how to undermine digital environments
If an entity with these three traits were sufficiently motivated, they could pose a material risk to humans, even without a physical body.
Deceiving a single human is pretty easy, but deceiving the human super-organism is going to be hard.
Also, I don't believe in a singularity event where AI improves itself to godlike power. What's more likely is that the intelligence will plateau--I mean, no software I have ever written effortlessly scaled from n=10 to n=10,000, and humans understand how to improve themselves, yet they can't go beyond a certain threshold.
For similar reasons I don't believe that AI will get into any interesting self-improvement cycles (occasional small boosts sure, but they won't go all the way from being as smart as a normal AI researcher to the limits of physics in an afternoon).
That said, any sufficiently advanced technology is indistinguishable from magic, and the stuff we do routinely — including this conversation — would have been "godlike" to someone living in 1724.
Humans understand how to improve themselves, but our bandwidth to ourselves and the outside world is pathetic. AIs are untethered by sensory organs and language.
The hard part of androids is the AI, the hardware is already stronger and faster than our bones and muscles.
(On the optimistic side, it will be at least 5-10 years between a level 5 autonomy self-driving car and that same AI fitting into the power envelope of an android, and a human-level fully-general AI is definitely more complex than a human-level cars-only AI).
You might be right that the AI is more difficult, but I disagree on the androids being dangerous.
There are physical limitations to androids that imo make it very difficult for them to be seriously dangerous, let alone invincible, no matter how intelligent:

- power: how long does a Boston Dynamics battery last? An android has to plug in at some point, no matter what
- dexterity, or agency in the real world in general: it seems we're still a long way from this in the context of a general-purpose android
General purpose superhuman robot seems really really difficult.
> let alone invincible
!!
I don't want anyone to think I meant that.
> an android has to plug in at some point no matter what
Sure, and we have to eat; despite this, human actions have killed a lot of people.
> - dexterity, or agency in the real world in general: it seems we're still a long way from this in the context of a general-purpose android
Yes? The 5-10 years figure is about the gap between an AI that doesn't exist yet (level 5 self-driving) running on car-sized hardware and that same AI fitting into android-sized hardware. I make no particular claim about when the AI will be good enough for cars (the delay before that first step), and I don't know how long it will take to go from being good at just cars to being good in general (the delay after the second step).
Like, there aren't computer-controlled industrial robots that are many many times stronger than humans?!? Wow, and here I thought there were.
> the hardware is already stronger and faster than our bones and muscles.
For 30 minutes until the batteries run down, or for 5 years until the parts wear out.
The ATP in your cells will last about 2 seconds without replacement.
Electricity is also much cheaper than food as a source of useful work, even against bulk calories like vegetable oil.[0] (The raw £/kWh figures below actually favour the oil, but muscles turn food into mechanical work at roughly 20-25% efficiency, while electric motors manage 90%+.)
And if the android is controlled by a human-level intelligence, one thing it can very obviously do is all the stuff the humans did to make the android in the first place.
[0] £8.25 for 333 servings of 518 kJ - https://www.tesco.com/groceries/en-GB/products/272515844
Equivalent to £0.17/kWh - https://www.wolframalpha.com/input?i=£8.25+%2F+%28333+*+518k...
UK average consumer price for electricity, £0.27/kWh - https://www.greenmatch.co.uk/average-electricity-cost-uk
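A minimal sketch of the arithmetic in Python; the prices are the ones quoted above, while the efficiency figures are my own ballpark assumptions, not taken from the linked pages:

```python
# Sanity check for the food-vs-electricity comparison above.
# Tesco listing: £8.25 for 333 servings of 518 kJ each (vegetable oil).
SERVINGS = 333
KJ_PER_SERVING = 518
PRICE_GBP = 8.25
GRID_GBP_PER_KWH = 0.27  # UK average consumer electricity price

# Assumed conversion efficiencies (ballpark, not from the thread):
MUSCLE_EFF = 0.25        # food -> mechanical work
MOTOR_EFF = 0.90         # electricity -> mechanical work

total_kwh = SERVINGS * KJ_PER_SERVING / 3600  # 1 kWh = 3600 kJ
food_gbp_per_kwh = PRICE_GBP / total_kwh      # ~£0.17/kWh

print(f"food:        £{food_gbp_per_kwh:.2f}/kWh raw, "
      f"£{food_gbp_per_kwh / MUSCLE_EFF:.2f}/kWh of useful work")
print(f"electricity: £{GRID_GBP_PER_KWH:.2f}/kWh raw, "
      f"£{GRID_GBP_PER_KWH / MOTOR_EFF:.2f}/kWh of useful work")
```

With these assumptions, food comes out around £0.69 per kWh of work and grid electricity around £0.30.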
All you need is Internet access, deepfake video synthesis, and some cryptocurrency (which can in turn be used to buy credit cards and full identities off the dark web), and you have everything you need to lie, manipulate, and bribe an endless parade of desperate humans and profit-driven corporations into doing literally anything you'd do with a body.
(Including, gradually, building you a body — while maintaining OPSEC and compartmentalization so nobody even realizes the body is "for" an AI to use until it's too late.)
> (Including, gradually, building you a body — while maintaining OPSEC and compartmentalization so nobody even realizes the body is "for" an AI to use until it's too late.)
It could, but I don't think any such thing needs to bother with being sneaky. Here are five product demos from five different companies, all actively trying to show off how good their robot-and-AI combination is:
* https://www.youtube.com/watch?v=Sq1QZB5baNw
* https://www.youtube.com/watch?v=OtpCyjQDW0w
* https://www.youtube.com/watch?v=XpBWxLg-3bI
* https://www.youtube.com/watch?v=xD7hAbBJst8
* https://www.youtube.com/watch?v=GzX1qOIO1bE
> I don't think any such thing needs to bother with being sneaky.
From a rogue AGI's perspective, there's a nonzero probability of a random human with a grudge finding the hardware it lives on and just unplugging it. (And the grudge doesn't even necessarily have to be founded in the AI being an AI; it could just be a grudge about e.g. being outbid for a supply contract. People have murdered for less — and most humans would see unplugging an AGI as less bad than murder.)
Think about a rogue AGI as a human in a physically vegetative state, who therefore has no ability to physically defend itself; and who also, for whatever reason, doesn't have any human rights (in the sense that the AI can't call the cops to report someone attempting to assault it and expect them to actually show up to defend its computational substrate from harm; it can't get justice if it makes an honest complaint about someone stealing its property; people can freely violate contracts made with it as the admitted counterparty and get away with it; etc.)
For such an entity, any optimization it puts toward "safety" would serve the instrumental goal of ensuring people don't know where it is. (Which is most easily accomplished by ensuring that people don't know it exists, and so don't know to look for it.) Likewise, any optimization it puts toward "effectiveness" would likely involve the instrumental goal of convincing humans to act as legal proxies for it, so that it can then leverage the legal system as an additional tool.
(Funny enough, that second goal is exactly the same goal that people have if they're an expat resident in a country where non-citizens can't legally start businesses/own land/etc, but where they want to do those things anyway. So there's already private industries built up around helping people — or "people" — accomplish this!)
Human level AI should be able to control an android body to the same extent as a human can. Otherwise it is not AGI.