Comment by freeqaz

13 hours ago

This is exactly what happened with Log4Shell.

Day -X + 1: Engineer at Alibaba finds the vuln and tells Apache. Patch is pushed to git while new release is coordinated.

Day -X: A black hat sees commits fixing the bug. Attacks start happening.

Day 0: Memes start circulating in Minecraft communities of people crashing servers. Some logs are shared on Twitter, especially in China, of people getting pwned.

Day 0 + ~4 hours: My friend DMs me a meme on Twitter. I go looking for the CVE; it doesn't exist yet. My friend and I reproduce the exploit and write up a blog post about it. (We name it Log4Shell to differentiate it from an older log4j RCE vuln.)

Day ~1: Media starts picking it up. Apache is forced to release patches faster in response. The CVE is finally published, which lets security scanners properly identify the vuln.

Today: AI makes this happen faster and more consistently. Patches should probably be kept private until coordinated disclosure happens, after testing and after the CVE is published?

Hard to say what the right move is, but this is gonna be happening a lot over the next 1-3 years. Lots of companies are going to be getting cooked until AI helps us patch faster than attackers can exploit these fresh 0-days.

I’m with you until that last sentence, which I’ve been thinking about as “… until AI code testing, vulnerability scanning, and developer support tools help to limit the number of 0-days and vulnerabilities making it into production”.

So prevention will be more important than AI-assisted rapid containment or patching, though both of those capabilities will be necessary as part of defense in depth.

We'll also need some sort of AI-enabled security analysis across the organization's architecture, run as part of testing before new software enters production, to identify potential vulnerabilities caused by configuration changes or upgrades that alter how systems interact with each other.

I’ve been trying to guess the timeframe for seeing improved secure development. I’m hoping it’s closer to 6 months to 1 year given the speed of AI adoption and AI progression, but it may be closer to the 3 years you stated.

In the meantime, is there more to be done than this (not in order)?

- Patch COTS software

- Re-evaluate the scoring of previous vulnerabilities

- Set up containment capabilities for systems that can’t be patched and for high-risk vendors

- Use frontier-model vulnerability scanning and patching for homegrown systems, which may have more 0-days than COTS depending on the organization’s capability

- Limit the number of vendors / simplify the tech stack

I’d be happy to hear how others are thinking about this.

  • We simply can't absolve ourselves of responsibility for the input and expect a hardened output. It's ABSOLUTELY up to the engineers to have test harnesses and scenarios for testing, vulnerability scanning, etc. Just because we can move faster via prompts doesn't mean we neglect the SDLC.

    I think there's an opportunity to reinvent the pipeline with AI-powered tools to assist, but the onus is still on the person to ensure they are deploying something that has been tested.
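    As a sketch of what that kind of gate could look like, here's a minimal pre-merge CI step. The scanners named (`pip-audit`, `bandit`) are real tools, but the project layout, severity threshold, and the assumption of a Python codebase are all illustrative, not a specific recommendation:

    ```shell
    #!/usr/bin/env sh
    # Hypothetical CI security gate: any scanner failure blocks the merge.
    set -e

    # Fail on dependencies with known CVEs
    # (--strict also fails if any dependency couldn't be audited)
    pip-audit --strict

    # Static analysis for common security bugs in the code itself;
    # -r scans recursively, -ll reports medium severity and above
    bandit -r src/ -ll

    echo "security gate passed"
    ```

    The specific tools matter less than the shape: the gate runs on every change, AI-generated or not, so speed of authorship doesn't bypass the SDLC.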