Comment by Dumblydorr
17 hours ago
What would AGI actually mean for security? Does it heavily favor attackers or defenders? Even current LLMs may not help much in defense, but they could teach attackers a lot, right? And what if employees feed an LLM information during everyday use that attackers could then extract and study?
AGI favors attackers initially. While it can be used defensively, to preemptively scan for vulns, harden exposed software more cheaply, and monitor networks for intrusion around the clock, how many companies are going to start doing that fast enough to counter cutting-edge, AGI-enabled attackers probing every piece of their infra for vulns at scale?
It's like a very, very big, fat stack of zero-days leaking to the public. Sure, they'll all get fixed eventually, and everyone will update, eventually. But until that happens, the usual suspects are going to have a field day.
It may come to favor defense in the long term. But it's AGI. If that tech lands, the "long term" may not exist.
Defending is much, much harder than attacking for humans; I'd extrapolate that to AIs/AGIs.
The defender needs to get everything right; the attacker needs to get one thing right.
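To make that concrete with a toy model (the probability and component counts here are invented, purely illustrative): if a system has N independent components and each is configured correctly with probability p, the defense only holds if all N do, while the attacker needs a single miss.

    # Toy model of the attacker/defender asymmetry.
    # All numbers are invented, purely illustrative.
    p = 0.99  # chance any single component is configured correctly

    for n in (10, 100, 1000):
        defense_holds = p ** n             # every component must be right
        attacker_wins = 1 - defense_holds  # a single miss is enough
        print(f"{n:>4} components -> attacker succeeds with p = {attacker_wins:.3f}")

Even at 99% per component, a thousand-component attack surface is all but guaranteed to have a way in somewhere.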
Alternatively, one component of a superintelligence that makes it "super" might be a tiered mind capable of processing far more input streams simultaneously, getting around the core human inadequacy here: we can only really focus on one thing at a time.
The same way we can build "muscle memory" to delegate simple autonomous tasks, a superintelligence might be able to dynamically delegate to human-level (or greater) sub-intelligences that vigilantly watch everything it needs watched.
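As a loose software analogy only (my sketch, not a claim about how an AGI would actually be built), that delegation pattern looks like a supervisor fanning out one watcher per input stream, so no single focus of attention has to cover everything:

    import asyncio

    async def watcher(stream: str, events: asyncio.Queue) -> None:
        """Dedicated 'sub-intelligence' watching a single input stream."""
        while True:
            event = await events.get()
            if event.get("suspicious"):
                print(f"[{stream}] escalating to supervisor: {event}")

    async def supervisor(streams: dict[str, asyncio.Queue]) -> None:
        # Delegate one vigilant watcher per stream, all running concurrently.
        await asyncio.gather(*(watcher(name, q) for name, q in streams.items()))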
But security advancements scale.
On average, today's systems are much more secure than those from 2005, because the known vulns from those days got patched, and methodologies improved enough that they weren't replaced 1:1 by newer vulns.
This is what allows defenders to keep up with the attackers long term. My concern is that AGI is the kind of thing that may result in no "long term".
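To see why below-1:1 replacement lets defenders keep up, here's a toy difference equation (all numbers invented for illustration): if a fixed fraction of the known-vuln stock gets patched each year, the stock shrinks toward a stable floor instead of growing without bound.

    # Toy model: all numbers invented, purely to illustrate the dynamic.
    stock = 1000        # exploitable vulns across the ecosystem in 2005
    introduced = 80     # new vulns added per year
    patch_rate = 0.15   # fraction of the known stock fixed per year

    for year in range(2005, 2026):
        if year % 5 == 0:
            print(year, round(stock))
        stock += introduced - patch_rate * stock
    # Stock falls toward introduced / patch_rate (about 533), not upward.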
At the end of the day, AI at any level of capability is just automation: the machine doing something instead of a person.
Arguably that may change in the far distant future if we ever build something with significantly greater intelligence, or just capability, than a human, but today's AI is still struggling to draw clock faces, so it's not quite there yet...
The thing about automation is that it scales, which I'd say favors the attacker, at least at this stage of the arms race: they can launch thousands of hacking/vulnerability attacks against thousands of targets, looking for that one chink in the armor.
I suppose the defenders could do the exact same thing, though: use this kind of automation to find their own vulnerabilities before the bad guys do. Very few corporations would have the skills to do this themselves, so one could imagine a government group (part of DHS?) set up to probe the security of US companies, with opt-in from the companies, perhaps.
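A minimal sketch of that defender-side automation, assuming a hypothetical check_host probe (in reality it would wrap a real scanner or fuzzing harness) and an asset inventory:

    import asyncio

    async def check_host(host: str) -> list[str]:
        """Hypothetical probe; a stand-in for a real scanner integration."""
        await asyncio.sleep(0)  # placeholder for actual network I/O
        return []               # findings for this host

    async def scan_inventory(hosts: list[str], parallelism: int = 100) -> dict:
        sem = asyncio.Semaphore(parallelism)  # cap concurrent probes

        async def bounded(host: str) -> tuple[str, list[str]]:
            async with sem:
                return host, await check_host(host)

        results = await asyncio.gather(*(bounded(h) for h in hosts))
        return {h: findings for h, findings in results if findings}

    # e.g. asyncio.run(scan_inventory([f"10.0.{i // 256}.{i % 256}" for i in range(2000)]))

The point being: the same fan-out that lets attackers probe thousands of targets lets a defender sweep their own inventory on every release.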
My take on government APTs is that they're boutique shops: they do highly targeted attacks, develop their own zero-days (which they don't usually burn unless they have so many that one is expendable), and are willing to take their time to stay undetected.
Criminal organizations take a different approach, much like spammers: they can purchase or rent C2 and other software for mass exploitation (e.g. ransomware). This stuff is usually very professionally coded and highly effective.
Botnets, hosting in countries out of reach of Western authorities, etc. are all common tactics as well.
IMO AI favors attackers more than defenders, since it's cost-prohibitive for defenders to code-scan every version of every piece of software they routinely use for exploits, but not for attackers. Also, social-engineering exploits are time-consuming for humans, AI is quite good at automating them, and they take place outside your security perimeter, so you'll have no way of knowing.
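A back-of-envelope version of that cost asymmetry (every figure below is invented for illustration): the defender pays to re-scan their entire dependency surface on every release, while an attacker scans the same artifacts once and amortizes the result across every org that runs them.

    # Every figure below is invented, purely to illustrate the asymmetry.
    packages = 400          # dependencies a typical org runs
    releases_per_year = 12  # versions per package to re-scan
    cost_per_scan = 50.0    # dollars of compute/analyst time per deep scan

    defender_cost = packages * releases_per_year * cost_per_scan
    print(f"defender, per org, per year: ${defender_cost:,.0f}")  # $240,000

    # Attacker scans the same artifacts once, reuses the result against
    # every org running them (say 1,000 targets), and needs just one hit.
    print(f"attacker, per target: ${defender_cost / 1000:,.0f}")  # $240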
There's a report co-authored by Bruce Schneier that estimates GenAI tools have significantly increased the profitability of phishing [1]. They produce emails with higher click-through rates and reduce the cost of delivering them.
Groups that were too unprofitable to target before are now profitable.
[1] https://arxiv.org/abs/2412.00586
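As a rough expected-value sketch of the mechanism that report describes (rates and dollar values invented for illustration): a campaign is worth running only when expected revenue per email beats its cost, so cutting generation cost while holding click-through pushes previously unprofitable target groups above the line.

    # Rates and dollar values are invented, purely to illustrate the idea.
    def campaign_profit(n_emails: int, cost_per_email: float,
                        click_rate: float, revenue_per_click: float) -> float:
        return n_emails * (click_rate * revenue_per_click - cost_per_email)

    # Hand-written spear phishing: expensive per email.
    print(campaign_profit(1_000, cost_per_email=5.00,
                          click_rate=0.10, revenue_per_click=40))  # -1000.0

    # LLM-generated: near-zero cost, comparable click rate.
    print(campaign_profit(1_000, cost_per_email=0.05,
                          click_rate=0.10, revenue_per_click=40))  # 3950.0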