Comment by InsideOutSanta
19 days ago
My approach:
1. Have the LLM write code based on a clear prompt with limited scope
2. Look at the diff and fix everything it got wrong
That's it. I don't gain a lot in velocity, maybe 10-20%, but I've seen the code, and I know it's good.
Same. Small units of work: iterate on it till it's right, commit it, push it, then do the next increment. It's how I've always worked, except now I sometimes let someone else figure out the exact API calls (I'm still learning React, but Claude helps get the basics in place for me). If the AI just keeps screwing up, I'll grab the wheel and do it myself. It sometimes helps me get things going, but it hasn't been a huge increase in productivity. Then again, I'm not paying the bill, so whatever.
So is the 10-20% gain in velocity worth the money and the added process complexity? I'm assuming you're measuring your own velocity, not your team's, since the team's would include time to review and deploy, etc.