Comment by jumploops
1 day ago
One of the more exciting aspects of LLM-aided development for me is the potential for high quality software from much smaller teams.
Historically, engineering teams have had to balance their backlog and technical debt, limiting which new features/functionality were even possible (in a reasonable timeframe).
If you squint at the existing landscape (Claude Code, o3, codex, etc.) you can start to envision a new quality bar for software.
Not only will software used by millions get better, but the world of software for 10s or 1000s of users can now actually be _good_, with much less effort.
Sure we’ll still have the railroad tycoons[0] of the old world, but the new world is so so vast!
If Sturgeon’s Law holds (and I see no reason it wouldn’t), we won’t get better software; we’ll get more shit, faster.
10% of a large pie is more than 10% of a small pie (:
The same applies to the crap 90% ;)
> One of the more exciting aspects of LLM-aided development for me is the potential for high quality software
There is no evidence to suggest this is true.
LLMs are trained on poor-quality code and, as a result, output poor-quality code.
In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
LLMs are great, but the potential for high quality software is not one of the selling points.
> LLMs are trained on poor-quality code and, as a result, output poor-quality code.
This take is already outdated. Modern LLMs use synthetic data, and coding specifically uses generate -> verify loops. Recent stuff like context7 also helps guide LLMs toward using modern libs, even if they are outside the training cut-off.
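
Not claiming this is exactly what any lab's pipeline looks like, but the generate -> verify idea is easy to sketch: sample candidate solutions, run them against tests, and only keep what passes. A toy Python sketch (every name here is invented for illustration):

    import os
    import subprocess
    import sys
    import tempfile

    def verify(candidate: str, test_code: str, timeout: int = 30) -> bool:
        """Run candidate + tests in a fresh interpreter; pass == exit code 0."""
        with tempfile.NamedTemporaryFile("w", suffix=".py", delete=False) as f:
            f.write(candidate + "\n\n" + test_code)
            path = f.name
        try:
            result = subprocess.run([sys.executable, path],
                                    capture_output=True, timeout=timeout)
            return result.returncode == 0
        except subprocess.TimeoutExpired:
            return False
        finally:
            os.unlink(path)

    def generate_verified_sample(call_model, task_prompt, test_code, attempts=4):
        """Sample candidates until one passes its tests; only verified
        (prompt, solution) pairs are kept as synthetic training data."""
        for _ in range(attempts):
            candidate = call_model(task_prompt)
            if verify(candidate, test_code):
                return candidate
        return None  # tasks that never verify get discarded

    # Toy usage with a stub "model" so the loop runs end to end.
    if __name__ == "__main__":
        stub_model = lambda prompt: "def add(a, b):\n    return a + b"
        print(generate_verified_sample(stub_model, "Write add(a, b).",
                                       "assert add(2, 3) == 5"))

Scale that over a huge number of tasks and the training mix skews toward code that at least runs and passes its tests, which is the whole point.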
> In fact, the "S" in LLM stands for security, which LLMs always consider when generating code.
This is reminiscent of the "AI will never do x, because it doesn't do x now" takes of the gpt-3.5 era. Oh, look, it's so cute that it can output something that looks like python, but it will never get programming. And yet here we are.
There's nothing special about security. Everything that works for coding / devops / agentic loops will work for security as well. If anything, the absolute bottom line will rise with LLM-assisted stacks. We'll get "smarter" Wapitis / Metasploits, agentic autonomous scanners, and verifiers. Instead of SIEMs missing 80% of attacks [0] while also inundating monitoring consoles with unwanted alerts, you'll get verified reports where a codex/claude/jules will actually test and provide a PoC for each report it makes.
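
To make the "verified report" part concrete, here's a hypothetical shape of that triage loop (not any real tool's API; `Finding`, `triage`, and `reproduce` are all invented for illustration): a candidate finding only reaches the console if it can be reproduced, with the evidence attached.

    from dataclasses import dataclass
    from typing import Callable, List, Optional

    @dataclass
    class Finding:
        target: str
        description: str
        poc: Optional[str] = None   # evidence attached only if reproduction succeeds
        verified: bool = False

    def triage(raw_findings: List[Finding],
               reproduce: Callable[[Finding], Optional[str]]) -> List[Finding]:
        """Try to reproduce each candidate finding; forward only the ones
        that come back with a working proof-of-concept."""
        confirmed = []
        for finding in raw_findings:
            poc = reproduce(finding)   # e.g. replay the request in a sandbox
            if poc is not None:
                finding.poc = poc
                finding.verified = True
                confirmed.append(finding)
        return confirmed  # fewer, higher-confidence alerts instead of a flooded console

The design choice is just inverting the default: verify first, alert second, instead of alerting on everything and leaving verification to a human.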
I think we've seen this "oh, but it can't do this, so it's useless" argument plenty of times in the past 2 years. And each and every time we got newer, better versions. Security is nothing special.
[0] - https://www.darkreading.com/cybersecurity-operations/siems-m...
I agree with most of your argument but I do think security is somewhat special.
You can vibe code an entire mess and it'll still "work". We've seen this already. As good as LLMs are, they still write overly verbose, sloppy, and often inefficient code. But if it works, most people won't care - and won't notice the security flaws that are going to be rife in such large, and frankly mostly unread, codebases.
Honestly I think the security world is primed for its most productive years.