Comment by IceDane
20 hours ago
It's staggering to me how many times I've heard this argument that LLMs are just the next level of abstraction. Some people are even comparing them to compilers.
As much as I use AI, even for coding, I really don't buy that argument. LLMs are too chaotic to be compilers: the path from prompt to code has far too many branches, and even small requests start to accumulate bad patterns.
It's fun to imagine sufficiently advanced AI enabling this in areas where we're okay with things going wrong, but that seems like a very limited domain: fun and games, not serious software that needs to be as correct as possible.
I can see vibe coding building very simple systems, and it will likely get better for one-off throwaways where edge cases don't matter because we have a one-off need to turn input X into output Y. But for systems where correctness matters, long-term support must be provided, and ease of adding new functionality is a serious consideration, we seem as far from prompt-as-code as we are from AGI.
> Some people are even comparing them to compilers.
A lot of people are using them as such, too: all the people talking about "my fleets of agents working on 4 different projects" aren't reviewing that output. They say they are, but they aren't, any more than I review the LLVM IR. It makes me feel like I'm in some fantasy land: I watch Opus 4.7 consistently get things backwards at the margins, mess up, and introduce bugs. We wouldn't accept a compiler that did any of this, at this scale or this rate, lol.
Right? People have put decades of work into making compilers extremely reliable; they didn't magically start out that way.
It's awful, and seeing even engineers I respected become so AI-pilled that they're shipping slop without review has made me lose respect for them. It also makes me wonder: what am I missing? Am I holding it wrong? Am I too focused on irrelevant details?
So far, my conclusion is that while LLMs can be a productivity boost, you have to direct them carefully. They don't really care about friction and bad abstractions in your codebase, and they will happily keep piling more cards onto the crooked house of cards they've generated.
Just like before AI, you need a cycle of building and refactoring on repeat, with careful reviews. Otherwise you'll end up with something that even an LLM will have a hard time working in.