Comment by jacobr1
5 days ago
LLMs start to bog down with it at a certain point too. For a couple of my side projects, I decided to let things rip and not worry about code structure at all. After a certain point, some of the changes I wanted to make started either failing or racking up large bills. It would try to make a change, run tests, realize it broke something somewhere else, try to fix that, and cause another issue. Then it would undo the original change, fix the new issue, maybe attempt a partial refactor, fail, revert that, decide to make the tests pass by removing the tests(!), and then keep some broken version of the fix. And it would run a few similar cycles of that on repeat as well.
Deliberately telling it how to rethink the structure (refactor first, then separate out components) fixed things.
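(For concreteness, and paraphrasing from memory rather than quoting my exact prompt, the instruction was along the lines of: "Before changing any behavior, describe the current structure. Refactor to isolate the component you're touching, running the tests after each step. Only then make the functional change. Never make tests pass by deleting them.")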
IF LLMs stayed at the current level, I would expect LLM-aided coders to learn how to analyze and address situations like this. However, I do expect models to get better at A) avoiding these kinds of situations through better design up front or reflection when making changes, and B) identifying more systematic patterns and reasoning about the right way to structure things; basically ambiently detecting "code smells."
You can already see improvements both from newer models and from the prompt engineering done by the agentic tools.