Comment by ACCount37

19 hours ago

AGI favors attackers initially. While it can be used defensively, to preemptively scan for vulns, harden exposed software more cheaply, and monitor networks for intrusion around the clock, how many companies are going to adopt that fast enough to counter cutting-edge AGI-enabled attackers probing every piece of their infra for vulns at scale?

It's like a very big, fat stack of zero-days leaking to the public all at once. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.

It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.

For humans, defending is much, much harder than attacking; I'd extrapolate that to AIs and AGIs.

The defender needs to get everything right; the attacker only needs to get one thing right.

  • Alternatively, one component of a superintelligence that makes it super might be a tiered mind capable of processing far more input streams simultaneously, getting around the core human inadequacy here: that we can only really focus on one thing at a time.

    The same way we can build "muscle memory" to delegate simple autonomous tasks, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences that vigilantly watch everything it needs to.

    • I automatically assume this to be the case, but I guess a lot of people don't. They imagine ASI as something like "an extremely smart human", not "an entire civilization's worth of intelligence, attention and effort".

      One of the most intuitive pathways to ASI is AGI eventually getting incredibly good at improving AGI. And a system like that would be able to craft and direct stripped-down AI subsystems.

  • But security advancements scale.

    On average, today's systems are much more secure than those from 2005, because the known vulns from those days got patched, and methodologies improved enough that they weren't replaced 1:1 by newer vulns.

    This is what allows defenders to keep up with attackers in the long term. My concern is that AGI is the kind of thing that may leave no "long term" at all.