Comment by wvenable

4 hours ago

I've been using it to do big refactors or large changes that I would previously have simply avoided because the benefits didn't outweigh the costs of doing it. I think half the problem people have is just using AI for the wrong stuff.

I don't see why it wouldn't help with reviewing, testing, or refining code either. One of the advantages I find is that an LLM "thinks" differently from me, so it'll find issues that I don't notice or maybe don't even know about. I've certainly had it develop entire test harnesses to ensure pre- and post-refactoring results are the same.
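To give a flavor of that kind of harness, here's a rough sketch (the module and function names -- pricing, old_pricing, compute_discount -- are made up for illustration, not from my actual codebase):

    import random

    import pytest

    # Hypothetical setup: keep the pre-refactor implementation importable
    # (e.g. copied into old_pricing) so old and new can be compared directly.
    from old_pricing import compute_discount as before_refactor
    from pricing import compute_discount as after_refactor

    def test_refactor_preserves_behavior():
        rng = random.Random(42)  # fixed seed so any failure is reproducible
        for _ in range(10_000):
            subtotal = rng.uniform(0.0, 1_000.0)
            quantity = rng.randint(1, 100)
            # approx() tolerates float rounding differences between versions
            assert after_refactor(subtotal, quantity) == pytest.approx(
                before_refactor(subtotal, quantity)
            )

Run something like that before and after merging the refactor; any behavioral drift shows up as a failing case you can replay from the seed.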

That said, I have "held it wrong" and had it do the fun stuff instead, and that felt bad. So I just changed how I use it.

I read a lot of AI generated code these days. It makes really bad mistakes (even when the nature of the change is a refactor). I've tried out a few different tools and methodologies, but I haven't escaped the need to babysit the "agent." If I stepped aside, it would create more work for me and others on the backend of our workflow.

I read with awe anecdotes of teams that push through AI-driven changes as fast as possible. Surely their AIs are no more capable than the ones I'm familiar with.

  • I read all the code and it sometimes makes mistakes -- but I wouldn't call them really bad. And often merely pointing one out will get a correction. Sometimes it's funny. It's not perfect, but nothing is. I have noticed that the quality seems to be improving.

    I still think whether you see sustained value depends a lot on your workflow -- on what you choose to do or decide yourself and what you let it do or decide.

    I agree with you that this idea of just pushing out AI code -- especially code written from scratch by an AI -- sounds like a disaster waiting to happen. But honestly, a lot of organizations let a lot of crappy code into their codebases long before AI came along. Those organizations are just doing the same now at scale. AI didn't change the quality, it just changed the quantity.