Comment by bambax

21 hours ago

> Unlike their human counterparts who would escalate a requirements gap to product when necessary, coding assistants are notorious for burying those requirement gaps within hundreds of lines of code

This is the kind of argument that seems true on the surface, but isn't really. An LLM will do what you ask it to do! If you tell it to ask questions, poke holes in your requirements, and not jump to code, it will do exactly that, and usually better than a human.

If you then ask it to refactor some code, identify redundancies, or put this or that functionality into a reusable library, it will also do that.

Those critiques of coding assistants are really critiques of "pure vibe coders" who don't know anything and just try to output yet another useless PDF parsing library before they move on to other things.

I hear your pushback, but I think that's his point:

Even seasoned coders using plan mode are funneled towards "get the code out" when experience shows that the final code is a tiny part of the overall picture.

The entire experience should be reorganized so that the code is almost an afterthought, and the requirements, specs, edge cases, tests, etc. are the primary part.

  • It has always been the businessman's dream to write requirements and have coding become mindless work, but requirements and specs can never cover every small detail. Code itself is the spec; business people just don't want to write it. If you handle all edge cases and limitations in the spec, and then do the same in the code, you are just writing code twice.

    This also completely ignores the fact that PMs and business teams are generating specs with AI too, so it's slop covered by more slop, with no actual specific details until you reach the code level.

It will not, in fact, always do what you ask, because it lacks any understanding, though the chat interface and the prolix nature of LLMs do a good job of hiding that.

It’s like in Anthropic’s own experiment. People who used AI to do their work for them did worse than the control group. But people who used AI to help them understand the problem, brainstorm ideas, and work on their solution did better.

The way you approach using AI matters a lot, and it is a skill that can be learned.

It's not just about asking questions, it's about asking the right questions. Can AI push back and decline a completely stupid request? PMs and business people don't really know the limitations of the software and almost always think adding more features is better. With AI you will be shipping 90% of the features that were never needed, adding to the bloat and making the product go off the rails quicker.