Comment by K0balt

19 days ago

That’s where it really shines. I have a backlog of small projects (1–2 kLOC state machines, sensors, loggers), and instead of spending 2–3 days I can usually knock them out in half a day. So they get done. On these projects it’s effectively an infinite improvement, because I simply wouldn’t have done them otherwise; I couldn’t justify the cost.

But on bigger stuff it bogs down, and sometimes I feel like I’m going nowhere. It gets done eventually, though, and I end up with better-structured, better-documented code. Not because the LLM would produce that if left to its own devices, but because enforcing it is the best way to get performance out of LLM assistance in code.

The difference now is twofold. First, things like documentation are now *effortless*. Second, the good advice you learned about meticulously writing maintainable code no longer slows you down; now it speeds you up.

I’ve developed a similar sense about maintainability becoming more important with LLMs. I have no hard data. Just feels that way.

Can you elaborate a little bit on how you get the LLM to produce maintainable code? Any tricks other than better prompting?

  • Just explicitly prioritize separation of concerns, with strict API modularity between components: break everything into single-concern chunks with good APIs. It’s less about reuse and more about containment, documentation, and testability. Also invest more time in ensuring that your data structures are a mirror of the solution space. That pays huge dividends in better code.

    These things have always been true, but now they also enable AI development. So instead of accumulating technical debt for expedience’s sake, we get paid an efficiency subsidy in productivity for doing it right (or rather, for herding the gerbils to do it right).
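
    A minimal sketch of what "single-concern chunks with strict APIs" can look like in practice. All names here (`Sensor`, `Logger`, `Thresholder`, `run_once`) are hypothetical illustrations, not anything from the comment; the point is that each piece owns one concern and the pieces only meet through narrow interfaces:

    ```python
    # Hypothetical illustration: each module has one concern, and the
    # state machine only ever sees narrow interfaces, never internals.
    from dataclasses import dataclass
    from typing import Protocol


    class Sensor(Protocol):
        """Containment boundary: anything with a read() is a Sensor."""
        def read(self) -> float: ...


    class Logger(Protocol):
        """Containment boundary: anything with a log() is a Logger."""
        def log(self, msg: str) -> None: ...


    @dataclass
    class Thresholder:
        """Single concern: decide ALARM/OK from one reading.
        The data structure (threshold + current state) mirrors the
        solution space directly, nothing more."""
        threshold: float
        state: str = "OK"

        def step(self, reading: float) -> str:
            self.state = "ALARM" if reading > self.threshold else "OK"
            return self.state


    def run_once(sensor: Sensor, logger: Logger, fsm: Thresholder) -> str:
        """Wiring lives in one place; swapping a real sensor or logger
        in changes nothing above this line."""
        state = fsm.step(sensor.read())
        logger.log(f"reading -> {state}")
        return state


    # Trivial stand-ins, useful for testing each chunk in isolation.
    class FakeSensor:
        def read(self) -> float:
            return 42.0


    class ListLogger:
        def __init__(self) -> None:
            self.lines: list[str] = []

        def log(self, msg: str) -> None:
            self.lines.append(msg)
    ```

    Because each chunk is testable on its own, the LLM can be pointed at one module at a time without dragging the whole project into context.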