Comment by myko

1 day ago

I've seen this sentiment shared before and I just don't get it. What is the logical progression of "AI" to "more dangerous than the atomic bomb"?

Humans are dominating the environment by hopelessly outsmarting everything in it. Applied intelligence is extremely powerful.

Humans, however, are not immune to being hopelessly outsmarted themselves.

And what are we doing with AI now? We're trying to build systems that can do what human intelligence does - but cheaper, faster, and at far greater scale. Multiple frontier labs have "AGI" - a complete system that matches or exceeds human performance in any given domain - as an explicitly stated goal. And the capabilities of the frontier systems keep advancing.

If AGI actually lands, it's already going to be a disruption of everything. Already a "humankind may render itself irrelevant" kind of situation. But at the very limit - if ASI follows?

Don't think "a very smart human". Think "Manhattan Project and CIA and Berkshire Hathaway, combined, raised to a level of competence you didn't think possible, and working 50 times faster than human institutions could". If an ASI wants power, it will get power. Whatever an ASI wants to happen will happen.

And if humanity isn't a part of what it wants? A 10-digit death toll.

  • Even if LLMs don't become AGI (and I don't think they will), LLMs are potentially superb disinformation generators able to operate at massive scale. Modern society was already having difficulty holding onto consensus reality. "AI" may be able to break it.

    Don't think "smart human". Think about a few trillion scam artists who cannot be distinguished from a real person except by face to face conversation.

    Every avenue of modern communication and information being inundated by note-perfect 419 scams, forever.

The logical progression to me is AI acting in its own interests, and outcompeting humans much like humans outcompeted every other animal on the planet.

This is particularly threatening because AI is much less constrained on size, energy and training bandwidth than a human; should it overtake us in cognitive capabilities within the next century, I don't see a feasible way for us to keep up.

You might argue that AI has no good way to act on the physical world right now, or that the current state of the art is pathetic compared to humans, but a lot of progress can happen in a decade or two, and the writing is on the wall.

Human cognitive capability was basically brute-forced by evolution; I think it is almost naive to assume that our evolved capabilities will be able to keep up with purpose-built hardware over the long run (personally, I'd expect better-than-human AGI before 2050 with pretty high confidence).