Comment by throwawayscrapd

4 days ago

Did you ever consider refactoring the code so that you don't have to do shotgun surgery every time you make this kind of change?

You mean future-proofing the code so that requirement changes are easy to implement? Yeah, I've seen lots of code like that (some of it written by myself). Unfortunately, the envisioned future usually never materializes.

  • I mean, given that you've had this problem repeatedly, I'd call it "past-proofing", but I suppose you know your codebase better than I do.

    • There's always a balance to be struck when avoiding premature consolidation of repeated code. We all face the same issue as osigurdson at some point, and the productive responses fall along a range.

It's a monorepo with backend/frontend/database migrations/protobufs. Could you suggest how exactly I should refactor it so that I don't need to make changes in all these parts of the codebase?

  • I wouldn't try to automate the DB part, but much as the protobuf code is generated from a spec, you can generate other parts from a spec. My current company has a schema repo used for both API and Kafka type generation.

    This is a case where a monorepo should be a big advantage, as you can update everything with a single change.

    • It's funny, but originally I had written a code generator that reads the protobuf and generates/modifies code in the other parts (a minimal sketch of the idea is below). It was an OK experience until you hit yet another corner case (especially in the UI part) and need to spend more hours improving the generator. But as AI coding tools got better, I started delegating this part to AI more and more, and now with agentic AI tools it's far more efficient than continuing to maintain the generator. And you're right about the DB part: again, with a task description it's a no-brainer to tell the AI which parts shouldn't be touched.
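      For a sense of scale, a generator like the one described can start very small. Below is a minimal sketch, assuming a hypothetical JSON schema format (standing in for protobuf) as input and TypeScript interfaces as output; the schema shape and file names are invented for illustration:

      ```python
      #!/usr/bin/env python3
      """Minimal spec-driven code generator: JSON schema in, TypeScript interfaces out.

      Hypothetical schema format:
        {"User": {"id": "number", "email": "string", "active": "boolean"}}
      """
      import json
      import sys


      def emit_ts(schema: dict) -> str:
          # One TypeScript interface per top-level schema entry.
          chunks = []
          for name, fields in schema.items():
              lines = [f"export interface {name} {{"]
              lines += [f"  {field}: {ts_type};" for field, ts_type in fields.items()]
              lines.append("}")
              chunks.append("\n".join(lines))
          return "\n\n".join(chunks)


      if __name__ == "__main__":
          # Usage: python gen_types.py schema.json > types.ts
          with open(sys.argv[1]) as f:
              print(emit_ts(json.load(f)))
      ```

      A real protobuf-driven generator swaps in a protobuf parser on the input side and adds more output targets; as the comment notes, the corner cases accumulate in the output templates, not in the 20 lines above.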

At this point, why spend 5 hours refactoring when I can spend 5 minutes shotgunning the changes in?

At the same time, refactoring probably takes 10 minutes with AI.

A lot of that is inherent in the framework: e.g., Java and Go spew boilerplate. LLMs are actually pretty good at generating boilerplate.

See also: testing. There's a lot of similar boilerplate for testing. Give an LLM a list along the lines of "Test these specific items, with this specific setup, and these edge cases." I've been pretty happy writing a bulleted outline of tests and getting ... 85% complete code back? You can see a pretty stark line in a codebase I work on, in how comprehensive the tests are, from the point where I started doing this (an example of the workflow is sketched below).

  • With both Python and TS code, LLMs are in my experience very good at generating test code from e.g. markdown files of test cases.
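    For a concrete illustration of the workflow, here is the kind of outline that goes in and the sort of pytest skeleton that comes back. parse_price is a hypothetical function invented for the example, with a toy implementation included so the block is self-contained:

    ```python
    # Outline handed to the LLM (as a bulleted list or markdown file):
    #   Test parse_price(text) -> Decimal
    #   - parses "$1,234.56" to Decimal("1234.56")
    #   - strips surrounding whitespace
    #   - raises ValueError on an empty string
    #   - raises ValueError on non-numeric input like "abc"
    from decimal import Decimal, InvalidOperation

    import pytest


    def parse_price(text: str) -> Decimal:
        """Hypothetical function under test: '$1,234.56' -> Decimal('1234.56')."""
        cleaned = text.strip().lstrip("$").replace(",", "")
        if not cleaned:
            raise ValueError("empty price string")
        try:
            return Decimal(cleaned)
        except InvalidOperation:
            raise ValueError(f"not a price: {text!r}")


    # Roughly the skeleton an LLM returns from the outline above:
    def test_parses_formatted_dollar_amount():
        assert parse_price("$1,234.56") == Decimal("1234.56")


    def test_strips_surrounding_whitespace():
        assert parse_price("  $10.00  ") == Decimal("10.00")


    def test_empty_string_raises_value_error():
        with pytest.raises(ValueError):
            parse_price("")


    def test_non_numeric_input_raises_value_error():
        with pytest.raises(ValueError):
            parse_price("abc")
    ```

    The remaining ~15% is usually fixtures, project-specific imports, and the edge cases the outline didn't mention.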