Comment by OtherShrezzing

2 years ago

I think there's usually a distinction between human-level and superintelligent AI in these conversations. You can reasonably assume a superintelligence will (some day):

1) understand how to improve itself & undertake novel research

2) understand how to deceive humans

3) understand how to undermine digital environments

If an entity with these three traits were sufficiently motivated, it could pose a material risk to humans, even without a physical body.

Deceiving a single human is pretty easy, but deceiving the human super-organism is going to be hard.

Also, I don't believe in a singularity event where AI improves itself to godlike power. What's more likely is that the intelligence will plateau: no software I have ever written effortlessly scaled from n=10 to n=10,000, and humans understand how to improve themselves too, yet they can't get past a certain threshold.
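
To make the scaling analogy concrete, here's a toy Python sketch (my own illustration, not anything from the comment): a naive O(n^2) duplicate check that is instant at n=10 but has to do ~50 million comparisons at n=10,000.

```python
import itertools
import time

def has_duplicate(items):
    # Naive O(n^2) pairwise check: 45 comparisons at n=10,
    # ~50 million at n=10,000. Same code, very different behavior.
    for a, b in itertools.combinations(items, 2):
        if a == b:
            return True
    return False

for n in (10, 10_000):
    data = list(range(n))  # worst case: no duplicates, every pair gets checked
    start = time.perf_counter()
    has_duplicate(data)
    elapsed = time.perf_counter() - start
    print(f"n={n:>6}: {n * (n - 1) // 2:>10,} comparisons in {elapsed:.4f}s")
```

"Works at small n" tells you very little about "works at large n"; whether that intuition transfers to intelligence is exactly what's in dispute here.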

  • For similar reasons I don't believe that AI will get into any interesting self-improvement cycles (occasional small boosts sure, but they won't go all the way from being as smart as a normal AI researcher to the limits of physics in an afternoon); see the toy sketch after this list.

    That said, any sufficiently advanced technology is indistinguishable from magic, and the stuff we do routinely — including this conversation — would have been "godlike" to someone living in 1724.

  • Humans understand how to improve themselves, but our bandwidth to our own minds and to the outside world is pathetic. AIs aren't constrained by sensory organs or by language.
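
To make the plateau intuition concrete, here's a minimal toy model (my own sketch; the ceiling and rate are made-up parameters, not anything from the thread) where each self-improvement cycle captures a fixed fraction of the remaining headroom, so gains shrink geometrically and capability converges to a ceiling instead of running away:

```python
def improvement_cycles(start=1.0, ceiling=100.0, rate=0.3, steps=25):
    # Each cycle closes a fixed fraction of the gap to the ceiling,
    # so the per-cycle gain shrinks geometrically (diminishing returns).
    capability = start
    history = [capability]
    for _ in range(steps):
        capability += rate * (ceiling - capability)
        history.append(capability)
    return history

traj = improvement_cycles()
for t in (0, 1, 5, 10, 25):
    print(f"cycle {t:>2}: capability = {traj[t]:6.2f}")
# Gains go ~30, ~21, ~15, ...; capability approaches 100 but never passes it.
```

Whether real self-improvement behaves like this recurrence or like the runaway exponential in the singularity story is the open question.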