Comment by rglover

18 hours ago

A shockingly rare question to be asked. As best as I can tell, the biggest threat to civilization isn't AI, it's our culture of "that's not my problem" leaving otherwise patchable holes in critical systems.

The biggest threat to civilization is, in fact, AI. It's a new problem, and one with an utterly ridiculous lethality at its limit. It makes the atomic bomb look benign.

"That's not my problem" is something humans have been dealing with since before they mastered the art of sharpening a stick.

  • Nah, it's global warming.

    Because fossil fuels are stupid useful and there's no way we're going to stop burning them. And then we get to the climate scenarios that aren't compatible with our current sophisticated civilization, even under the currently accepted climate science (which always seems to underestimate what actually happens).

    • Global warming just isn't harmful enough to pose a credible extinction risk.

      The damage is too limited and happens far too slowly. Even the unlikely upper-end projections aren't enough to upend human civilization - the harms at the top end are "worse than WW2", but WW2 sure didn't end humankind. At the same time, the ever-worsening economics of fossil fuel power put a bound on climate change even in a "no climate policy" world - and we're already doing better than that.

      It's like the COVID of global natural disasters. Harmful enough to be worth taking measures against. But just harmless enough that you could do absolutely nothing, and get away with it.

      The upper bound on AI risks is: total extinction of humankind.

      I fucking wish that climate change was the biggest threat as far as eye can see.

  • > "That's not my problem" is something humans have been dealing with since before they mastered the art of sharpening a stick.

    Yes, and that's why civilizations have kept rising and falling throughout history.

  • I've seen this sentiment shared before and I just don't get it. What is the logical progression from "AI" to "more dangerous than the atomic bomb"?

    • Humans are dominating the environment by hopelessly outsmarting everything in it. Applied intelligence is extremely powerful.

      Humans, however, are not immune to being hopelessly outsmarted themselves.

      And what are we doing with AI now? We're trying to build systems that can do what human intelligence does - but cheaper, faster and more scalable. Multiple frontier labs have "AGI" - a complete system that matches or exceeds human performance in any given domain - as an explicitly stated goal. And the capabilities of the frontier systems keep advancing.

      If AGI actually lands, it's already going to be a disruption of everything. Already a "humankind may render itself irrelevant" kind of situation. But at the very limit - if ASI follows?

      Don't think "a very smart human". Think "Manhattan Project and CIA and Berkshire Hathaway, combined, raised to a level of competence you didn't think possible, and working 50 times faster than human institutions could". If an ASI wants power, it will get power. Whatever an ASI wants to happen will happen.

      And if humanity isn't a part of what it wants? A 10-digit death toll.


    • The logical progression, to me, is AI acting in its own interests and outcompeting humans, much like humans outcompeted every other animal on the planet.

      This is particularly threatening because AI is much less constrained by size, energy, and training bandwidth than a human is; should it overtake us in cognitive capabilities within the next century, I don't see a feasible way for us to keep up.

      You might argue that AI has no good way to act on the physical world right now, or that the current state of the art is pathetic compared to humans, but a lot of progress can happen in a decade or two, and the writing is on the wall.

      Human cognitive capability was basically brute-forced by evolution; I think it is almost naive to assume that our evolved capabilities will be able to keep up with purpose-built hardware over the long run (personally, I'd expect better-than-human AGI before 2050 with pretty high confidence).