Comment by ecb_penguin
1 day ago
> One of the more exciting aspects of LLM-aided development for me is the potential for high quality software
There is no evidence to suggest this is true.
LLMs are trained on poor code quality and as a result, output poor code quality.
In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
LLMs are great, but the potential for high quality software is not one of the selling points.
> LLMs are trained on poor code quality and as a result, output poor code quality.
This is an already outdated take. Modern LLMs use synthetic data, and coding specifically uses generate -> verify loops. Recent tools like context7 also help guide the LLMs towards using modern libs, even ones outside the training cut-off.
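To make "generate -> verify" concrete, here's a minimal sketch of the loop. This is my own illustration, not any particular product's implementation; `llm_complete` is a hypothetical stand-in for whatever model client you use:

    import subprocess

    def llm_complete(prompt: str) -> str:
        raise NotImplementedError  # plug in your model client here

    def generate_and_verify(task: str, max_rounds: int = 5) -> str | None:
        feedback = ""
        for _ in range(max_rounds):
            code = llm_complete(f"Write solution.py for: {task}\n{feedback}")
            with open("solution.py", "w") as f:
                f.write(code)
            result = subprocess.run(["pytest", "tests/"],
                                    capture_output=True, text=True)
            if result.returncode == 0:
                return code  # verified: the test suite passes
            # feed the failure output back and let the model retry
            feedback = "Tests failed, fix this:\n" + result.stdout[-2000:]
        return None  # could not produce verified code within budget

The output is gated on an external check rather than on the model's own say-so, which is the whole point of the verify step.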
> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
This is reminiscent of the "AI will never do x, because it doesn't do x now" takes of the GPT-3.5 era. Oh, look, it's so cute that it can output something that looks like Python, but it will never get programming. And yet here we are.
There's nothing special about security. Everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get smarter Wapitis / Metasploits, agentic autonomous scanners, and verifiers. Instead of SIEMs missing 80% of attacks [0] while also inundating monitoring consoles with unwanted alerts, you'll get verified reports where a Codex/Claude/Jules actually tests and provides a PoC for everything it reports.
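Roughly the shape I have in mind, as a hedged sketch (every `agent.*` call is a hypothetical stand-in for an agent tool call, not a real API):

    # Verify-before-report scanner: a finding is only surfaced if the
    # agent reproduces it with a working PoC -- that's what kills the
    # SIEM-style alert spam.
    def scan_and_report(target_url: str, agent) -> list[dict]:
        reports = []
        for finding in agent.enumerate_candidates(target_url):  # e.g. suspected SQLi
            poc = agent.write_poc(finding)          # draft an exploit attempt
            if agent.execute_poc(poc, target_url):  # keep it only if it actually fires
                reports.append({"finding": finding,
                                "poc": poc,         # reproducible evidence
                                "severity": agent.triage(finding)})
            # unverified candidates are dropped, not forwarded as alerts
        return reports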
I think we've seen this "oh, but it can't do this, so it's useless" plenty of times in the past two years. And each and every time we got newer, better versions. Security is nothing special.
[0] - https://www.darkreading.com/cybersecurity-operations/siems-m...
I agree with most of your argument, but I do think security is somewhat special.
You can vibe-code an entire mess and it'll still "work". We've seen this already. As good as LLMs are, they still write overly verbose, sloppy, and often inefficient code. But if it works, most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.
Honestly I think the security world is primed for its most productive years.
> most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.
I agree. But what I'm trying to say is that we'll soon have automated agents that hunt for vulnerabilities in agentic flows, ready to be plugged into CI/CD pipelines.
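Sketched under the same assumptions as above (`scan_and_report` and `agent` are the hypothetical pieces from the earlier sketch), the CI/CD hook could be as simple as:

    import sys

    # Run the agentic scanner against the preview deployment and block
    # the merge on any verified finding.
    def ci_security_gate(preview_url: str, agent) -> int:
        verified = scan_and_report(preview_url, agent)
        for r in verified:
            print(f"[{r['severity']}] {r['finding']} -- PoC attached")
        return 1 if verified else 0  # nonzero exit fails the pipeline stage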
> Honestly I think the security world is primed for its most productive years.
In the short term, I agree. In the long run I think a lot of it will be automated: smart fuzzers, agentic vuln scanning, etc. My intuition is that we'll soon see "GAN"-like pipelines with red vs. blue agents trained in parallel.
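A toy sketch of what one round of that could look like (every name here is a hypothetical stand-in; the point is the adversarial training signal, not a real API):

    # "GAN"-like round: a red agent attacks, a blue agent patches, and
    # each trains on the rounds it lost.
    def red_blue_round(system, red_agent, blue_agent):
        exploit = red_agent.propose_attack(system)
        if system.run(exploit).compromised:
            red_agent.reinforce(exploit)               # attack landed
            patch = blue_agent.propose_patch(system, exploit)
            system = system.apply(patch)
            blue_agent.learn_from(exploit, patch)      # blue learns from the breach
        else:
            blue_agent.reinforce(system.defenses)      # defense held
            red_agent.learn_from(exploit, failed=True)
        return system  # each round hardens the system and both agents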