
Comment by NitpickLawyer

1 day ago

> LLMs are trained on poor code quality and as a result, output poor code quality.

This is an already outdated take. Modern LLMs train on synthetic data, and coding models specifically use generate -> verify loops. Recent tools like context7 also help guide LLMs toward modern libraries, even ones outside the training cut-off.

> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.

This is reminiscent of the "AI will never do x, because it doesn't do x now" takes of the gpt-3.5 era. Oh, look, it's so cute that it can output something that looks like python, but it will never get programming. And yet here we are.

There's nothing special about security: everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get "smarter" wapitis / metasploits, agentic autonomous scanners, and verifiers. Instead of SIEMs missing 80% of attacks [0] while inundating monitoring consoles with unwanted alerts, you'll get verified reports where a codex/claude/jules actually tests and provides a PoC for each report it makes.

I think we've seen this "oh, but it can't do this so it's useless" argument plenty of times in the past 2 years. And each and every time we got newer, better versions. Security is nothing special.

[0] - https://www.darkreading.com/cybersecurity-operations/siems-m...

I agree with most of your argument but I do think security is somewhat special.

You can vibe code an entire mess and it'll still "work". We've seen this already. As good as LLMs are, they still write overly verbose, sloppy, and often inefficient code. But if it works, most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.

Honestly I think the security world is primed for its most productive years.

  • > most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.

    I agree. But what I'm trying to say is that we'll soon have automated agents that look for vulnerabilities, in agentic flows, ready to be plugged into ci/cd pipelines.

    > Honestly I think the security world is primed for its most productive years.

    In the short term, I agree. In the long run I think a lot of it will be automated. Smart fuzzers, agentic vuln scanning, etc. My intuition is that we'll soon see "GAN"-like pipelines with red vs. blue agents trained in parallel.

    • "Looking for vulnerabilities" is not really a core part of creating secure software. That part of the infosec trashfi^Windustry is all about already deployed software.

      You can only get somewhere close to creating secure software by constructing something that is secure by design. Think narrow-interface sandboxes and encoding visibility scopes into types, not "scan for known bad things".

    • If the solution to all problems with attaching gpu farms to our workflows is to attach more gpu farms to our workflows, I can't see how this isn't just an elaborate scam.
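For what "encoding visibility scopes into types" can mean in practice, here's a minimal Python sketch: a wrapper type that keeps a secret out of logs and string formatting by construction, rather than relying on a scanner to catch leaks after deployment. The `Secret` class and `reveal` method are invented names for illustration, not from any particular library.

```python
class Secret:
    """Holds a sensitive value; repr/str never expose it."""

    def __init__(self, value: str) -> None:
        self._value = value

    def __repr__(self) -> str:
        # Any accidental logging or f-string interpolation hits this,
        # so the raw value can't leak through ordinary formatting.
        return "Secret(<redacted>)"

    __str__ = __repr__

    def reveal(self) -> str:
        # The only path to the raw value is this explicit call,
        # which is easy to grep for in code review.
        return self._value


token = Secret("hunter2")
print(f"loaded credential: {token}")  # prints "loaded credential: Secret(<redacted>)"
assert token.reveal() == "hunter2"
```

The point is the shape of the guarantee: misuse is ruled out at the interface, instead of being hunted for afterwards.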