Comment by martinald

8 hours ago

This really is quite scary.

I suspect this year we are going to see a _lot_ more of this.

While it's good these bugs are being found and closed, the problem is twofold:

1) It takes time to get patches through distribution.

2) The vast majority of projects are not well equipped to handle complex security bugs in a "reasonable" time frame.

2 is a killer. There's so much abandonware out there, either as full apps/servers or libraries. These can't ever really be patched. Previously these weren't really worth spending effort on - they might have only a few thousand targets of questionable value.

Now you can spin up potentially thousands of exploits against thousands of long tail services. In aggregate this is millions of targets.

And even if that weren't the case, it's going to be difficult to patch systems quickly enough. Imagine an adversary that can drip feed zero days against targets.

Not really sure how this can be solved. I guess you'd hope the good guys can run some sort of mass-patching effort against software faster than the bad actors can exploit it.

But really, as the npm debacle showed, the industry is not in a good place when it comes to timely, secure software delivery, even without millions of potential new zero days flying around.

> 2 is a killer. There's so much abandonware out there, either as full apps/servers or libraries. These can't ever really be patched. Previously these weren't really worth spending effort on - they might have only a few thousand targets of questionable value.

It's worse than that. Before, the operator of a system could upgrade the distro's openssl version, restart the service, and it was pretty much done. Even if it was a 3rd-party vendor app, at the very least you could provide security updates for the shared libs.

Nowadays, when everything runs in containers, you have to make sure every single vendor you take containers from did that update.
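
A minimal sketch of that asymmetry (hypothetical vendor image names, and it assumes `openssl` and `docker` are on the PATH): on the host there is one shared OpenSSL to check and upgrade, while every vendor image has to be inspected, and fixed, separately.

```python
#!/usr/bin/env python3
"""Sketch of the patching asymmetry described above (illustrative only).

On a classic host, one distro openssl upgrade plus service restarts fixes
every dynamically linked consumer at once. With containers, each image
ships its own copy, so the check -- and the fix -- has to be repeated per
vendor image. The image names below are hypothetical."""
import subprocess

def host_openssl_version() -> str:
    # The single shared OpenSSL the distro maintains for the whole host.
    out = subprocess.run(["openssl", "version"],
                         capture_output=True, text=True)
    return out.stdout.strip()

def image_openssl_version(image: str) -> str:
    # Each container image bundles its own userland; only the image
    # publisher can actually ship the updated library inside it.
    out = subprocess.run(
        ["docker", "run", "--rm", "--entrypoint", "openssl", image, "version"],
        capture_output=True, text=True)
    return out.stdout.strip() or "no openssl binary found"

if __name__ == "__main__":
    print("host:", host_openssl_version())
    for image in ("vendor-a/app:latest", "vendor-b/api:latest"):  # hypothetical
        print(f"{image}: {image_openssl_version(image)}")
```

The point isn't the script itself: the second loop grows with the number of vendors, and each iteration depends on someone else rebuilding and republishing their image.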

It would help if regular userspace software wasn't written in languages that were primarily designed to write portable OS kernels.

Even if not all logic errors can be prevented, some of them keep happening because we keep using the wrong tools.

> the problem is twofold

No, the biggest problem at the root of all this is complexity. OpenSSL is a garbled mess. AI or not, such software should not be the security backbone of the internet.

People writing and maintaining software need to optimize for simplicity, readability, and maintainability. Whether they use an LLM to achieve that is secondary. The humans in the loop must understand what's going on.

  • > People writing and maintaining software need to optimize for simplicity, readability, and maintainability. Whether they use an LLM to achieve that is secondary. The humans in the loop must understand what's going on.

    In a perfect world, that is.

There’s a reason multiple projects popped up to replace OpenSSL after Heartbleed was discovered.

Let’s see them do this on projects with a better historical track record.

It's good these bugs are being found and closed. The problems have nothing to do with AI, unless I'm missing something.

  • If people can use AI to find bugs to close them, people can use AI to find bugs to exploit them. The scale has changed.

  • Picture the traumatized Mr. Incredible meme with the text "lowering the barrier means more exploits are found"