
Comment by rstuart4133

2 years ago

AGI does look like an unsolved problem right now, and a hard one at that. But I think it is wrong to think that it needs an AGI to cause total havoc.

I think my dyslexic namesake Prof Stuart Russell got it right: humans won't need an AGI to dominate and kill each other. Mosquitoes have killed far more people than war. Ask yourself how long it will take us to develop a neural network as smart as a mosquito, because that's all it will take.

It seems so simple, as the beastie only has 200,000 neurons. Yet I've been programming for over four decades, and for most of them it was evident that neither I nor any of my contemporaries were remotely capable of emulating it. That's still true, of course. Never in my wildest dreams did it occur to me that the repeated application of simple operations could produce something I couldn't: a mosquito brain. Now that looks imminent.
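To give a sense of scale, here's a rough back-of-the-envelope in Python. The connectivity figure is a guess rather than a measured number, and artificial units are not biological neurons, but it shows why a 200,000-neuron brain no longer sounds out of reach:

```python
# Rough back-of-the-envelope: how big is a 200,000-neuron network?
# Purely illustrative; the synapses-per-neuron figure is an assumption,
# and artificial units are not biological neurons.

neurons = 200_000
synapses_per_neuron = 100          # hypothetical average connectivity

connections = neurons * synapses_per_neuron
print(f"~{connections:,} connections")
# ~20,000,000 -- orders of magnitude smaller than today's large models
```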

Now I don't know which to be more scared of: an AGI, or an artificial mosquito swarm run by Pol Pot.

Producing a mosquito brain is easy. Powering it with the Krebs cycle is much harder.

Yes, you can power these things with batteries, but they are going to be a lot bigger than real mosquitoes and have much shorter flight times.
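A quick energy-density comparison makes the point. The numbers below are approximate round figures, and metabolic efficiency is ignored, but the gap is large enough that the conclusion holds either way:

```python
# Very rough energy-density comparison behind the battery objection.
# Figures are approximate; losses in metabolism and motors are ignored.

li_ion_mj_per_kg = 0.7    # typical lithium-ion cell
fat_mj_per_kg    = 37.0   # metabolised fat, what a real insect burns

ratio = fat_mj_per_kg / li_ion_mj_per_kg
print(f"fat stores roughly {ratio:.0f}x more energy per kilogram than a battery")
# ~50x, which is why a battery-powered "mosquito" ends up bigger and shorter-lived
```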

But then, haven't we reached that point already with the development of nuclear weapons? I'm more scared of a lunatic (whether of North Korean, Russian, American, or any other nationality) being behind the "nuclear button" than an artificial mosquito swarm.

  • The problem is that strong AI is far more multipolar than nuclear technology, and the ways in which it might interact with other technologies to create emergent threats are very difficult to foresee.

    And to be clear, I'm not talking about superintelligence, I'm talking about the models we have today.