Comment by felipeerias

15 hours ago

This misunderstands what LLM-based tools mean for complex software projects. Nobody expects to be able to ask them to write a whole kernel or a web engine.

Coding agents in particular can be very helpful for senior engineers as a way to carry out investigations, double-check assumptions, or automate the creation of some parts of the code.

One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.

The mid-term impact of this transition is hard to anticipate. We will probably get a wide range of cases, from hyper-productive small teams displacing larger but slower ones, to AI-enhanced developers in organisations with uneven adoption quietly enjoying a lot more free time while keeping the same productivity as before.

But how is the senior engineer supposed to get any work done if they need to babysit the agent and accept/reject its actions every two minutes? Genuine question. Letting that thing do "whatever" usually means getting an insane, multi-thousand-line pull request that will need to be discarded and redone anyway.

Related: how do you get that thing to stop writing comments? If asked not to, it will instead put that energy into docstrings, debug logs and whatnot, poisoning the code for any further "AI" processing.

Stuff like (this is an impression, not an actual output):

    // Simplify process by removing redundant operations
    int sz = 100;
    // Optimized algorithm, removed mapping:
    return lookup(load(sz));
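And if you do ban comments outright, the same energy goes into docstrings and debug logs instead. Again an impression, not actual output, reusing the same hypothetical lookup/load as above:

    import logging

    def lookup_size():
        """Look up the size.

        Loads the size, then looks up the loaded size.
        """
        logging.debug("Entering lookup_size")
        sz = 100  # default size
        logging.debug("sz set to %d", sz)
        return lookup(load(sz))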

Most of the stuff in those comments is actively misleading.

And then there's the horrible tendency to write new code rather than read the docs, whether in-project or on the web...

For writing ffmpeg invocations or single-screen bash scripts, a great tool! For writing programs? Actively harmful.

  • Yeah, for me the three main issues are:

    - Overly defensive programming. In Python that means try/except everywhere without catching specific exceptions, hasattr checks, and, when replacing an approach with a new one, adding a whole “backward compatibility” layer in case we need to keep the old approach. That leads to obfuscated errors, silent failures, and bad values triggering old code paths (see the sketch after this list).
    - Plain editing things it is not supposed to. That is, “change A into B” and it does “ok, I do B, but I also removed C and D because they had nothing to do with A” or “I also changed C into E, which doesn’t cover all the edge cases, but I liked it better”.
    - Re-implementing logic instead of reusing what's already there.
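To make the first point concrete, here is a minimal sketch of that defensive style. It is a hypothetical illustration, not actual agent output; load_user, store, fetch and get_legacy are all made-up names:

    # Hypothetical illustration of the over-defensive pattern, not real output.
    def load_user(store, user_id):
        try:
            record = store.fetch(user_id)  # the "new approach"
        except Exception:  # bare except: swallows real bugs, not just misses
            record = None
        # unasked-for "backward compatibility" shim for the old approach
        if record is None and hasattr(store, "get_legacy"):
            record = store.get_legacy(user_id)
        # silent fallback: a bad value flows onward instead of failing fast
        return record or {}

The bare except and the silent {} fallback are exactly what turns an obvious crash into a bad value surfacing three layers downstream.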

>One key point is to use their initial output as a draft, as a starting point that still needs to be checked and iterated, often through pair programming with the same tool.

This matches my experience. It's not useful for producing something that you wouldn't have been able to produce yourself, because you still need to verify the output itself, not just its behavior when executed.

I'd peg this as the most fundamental difference in use between LLMs and deterministic compilers/transpilers/codegens.