Comment by Animats
16 hours ago
We have a huge problem.
The US is at war. Much of the world is at war at the cyber attack level right now. The US, the EU, most of the Middle East, Israel, Russia... Major services have been attacked and have gone down for days at a time - Ubuntu, Github, Let's Encrypt, Stryker. Entire hospital systems have had to partially shut down.
Now, in the middle of this, AI has made attacks much faster to generate. Faster than the defensive side can respond. Zero-day attacks used to be rare. Now they're normal.
It's going to get worse before it gets better. Maybe much worse.
> before it gets better
How is it going to get better?
If we assume that there will be an AI that is perfect in terms of ability to find vulnerabilities, cheap to run and widely available to everyone, then anyone can run it on any piece of software before deploying it. All vulnerabilities get found before they can be exploited.
One of the big challenges with cybersecurity is that attackers only need to find one exploit, while defenders need to stop everything. When you have a large surface area and limited resources, it's much easier to be the side that only has to succeed once. AI eliminates the limited resources problem.
> If we assume that there will be an AI that is perfect in terms of ability to find vulnerabilities
...so if we assume a halting oracle?
I'd speculate that at this point Linux etc are probably having vulnerabilities discovered and patched faster than created.
It's not only Linux, though, and many projects don't have the funding to perpetually run something like Mythos.
Right now we are at a point in time when AI can find bugs for both attackers and defenders, but defenders have not yet found and fixed those bugs.
In time, most of the bugs AI can find will be fixed, and things will calm down. Some bugs will remain, but they will be too complex to find and weaponise (or will be exploited only rarely).
In short, attackers have the advantage for a brief window right now, but ultimately defenders will win. I guess this "fight" might be over before the end of the year.
1) Make it a law that companies have to vet their code for security holes before release, 2) Make it a law that companies have to apply operational security best practice on their software products/services, 3) Industry standard automation for improvements to patch lifecycle management, 4) Auditing for critical businesses and industries to ensure safety (both as a national security thing and general safety/reliability/privacy/etc)
Right now all that stuff is optional, so most companies don't do it, which makes more security holes and it takes longer to patch.
Basically make software development so legally risky that only multi-billion dollar corporations will ever engage in it.
Downplaying security now has real consequences for everyone.
Bulk rewrites of everything into Rust with AI assistance?
I am looking at the results of a mass vulnerability scan as I type this. In one case, half of the bugs are in fact errors in hand-written (binary) parsers. These really should not exist in any language - but in C it's particularly bad. Kaitai Struct or something similar would broadly have prevented these. Rust would help here, but less than a parser generator would (because a generator can automatically insert error checking for things beyond just out-of-bounds access).
However, half of the vulnerabilities are logic errors in terms of what I would call RBAC enforcement, incorrect access permissions, and so on. Rust won't help at all with any of these.
Rust is overly complex and difficult, Go is simpler and easier and has the memory protection people are obsessed with