Comment by asadjb
7 hours ago
Unfortunately I have started to feel that using AI to code, even with a well designed spec, ends up with code that, in the author's words, looks like
> [Agents write] units of changes that look good in isolation.
I have only been using agents for coding end-to-end for a few months now, but I think I've started to realise why the output doesn't feel that great to me.
Like you said, "it's my job" to create a well designed code base.
Without writing the code myself, however, without feeling the rough edges of the abstractions I've written, without getting a sense of how things should change to make the code better architected, I just don't know how to make it better.
I've always worked in smaller increments, creating the small piece I know I need and then building on top of that. That process highlights the rough edges, the inconsistent abstractions, and that leads to a better codebase.
AI (it seems) decides on a direction and then writes 100s of LOC at once. It doesn't need to build abstractions because it can write the same piece of code a thousand times without caring.
I write one function at a time, and as soon as I try to use it in a different context I realise a better abstraction. The AI just writes another function with 90% similar code.
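To make that concrete, here's a hypothetical sketch (the function names and data are made up, not from any real agent output): the agent writes a second near-identical function for a new context, where reusing the first one yourself would have surfaced the shared abstraction.

```python
# What an agent tends to produce: two functions, ~90% identical code.
def format_user_report(user):
    lines = [f"Report: {user['name']}"]
    for key, value in sorted(user.items()):
        if key != "name":
            lines.append(f"  {key}: {value}")
    return "\n".join(lines)

def format_product_report(product):  # same logic, different dict
    lines = [f"Report: {product['name']}"]
    for key, value in sorted(product.items()):
        if key != "name":
            lines.append(f"  {key}: {value}")
    return "\n".join(lines)

# The abstraction you notice the moment you try to reuse the first
# function in a second context: any record with a "name" key works.
def format_report(record):
    lines = [f"Report: {record['name']}"]
    lines += [f"  {k}: {v}" for k, v in sorted(record.items()) if k != "name"]
    return "\n".join(lines)
```

The single `format_report` does the job of both, and deletes more code than it adds.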
The old classic mantra is "work smarter, not harder". LLMs are perfect for "work harder". They can produce lines of code in bulk. They can help you brute force a problem space with more lines of code.
We expect the spec writing and prompt management to cover the "work smarter" bases, but part of the work smarter "loop" is hitting those points where "work harder" is about to happen, where you know you could solve a problem with 100s or 1000s of lines of code, pausing for a bit, and finding the smarter path/the shortcut/the better abstraction.
I've yet to see an "agentic loop" that works half as well as my well trained "work smarter loop" and my very human reaction to those points in time of "yeah, I simply don't want to work harder here and I don't think I need hundreds more lines of code to handle this thing, there has to be something smarter I can do".
In my opinion, the "best" PRs delete as much code as they add, or more. Even in the cleanest LLM-created PRs, I've never seen an LLM propose a true removal that wasn't just a "this code wasn't working according to the tests, so I deleted the tests and the code" level mistake.
There used to be a saying that "the best programmers are lazy" - I think the opposite is now true.
I don't see why you can't use your approach with AI: writing one function at a time, making it work in context, and then moving on. Sure, you can't tell it to do all that in one step, but personally I really like not dealing with the boilerplate stuff and focusing more on the context and how to use my existing functions in different places.