
Comment by onion2k

19 hours ago

> if you don't plan perfectly, you'll have to start over from scratch if anything goes wrong

This is my experience too, but it's pushed me to make much smaller plans and to commit things to a feature branch far more atomically so I can revert a step to the previous commit, or bin the entire feature by going back to main. I do this far more now than I ever did when I was writing the code by hand.
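The revert-a-step / bin-the-whole-feature workflow described above can be sketched with plain git (branch and file names here are illustrative, not from the comment):

```shell
# Work on an isolated feature branch with small, atomic commits.
git checkout -b feature/report-export

# ... make one small, self-contained change, then commit it with reasoning ...
git add -A
git commit -m "Add CSV export skeleton

Reasoning: start with the narrowest interface so each later step
can be reverted independently."

# A step went wrong? Drop just the last commit, keeping earlier ones:
git reset --hard HEAD~1

# The whole feature is a dead end? Bin the branch and return to main:
git checkout main
git branch -D feature/report-export
```

Because every commit is one step, `git reset --hard HEAD~1` is a precise undo rather than a catastrophe, and deleting the branch discards the experiment without touching main.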

This is how developers should work regardless of how the code is being developed. I think this is a small but very real way AI has actually made me a better developer (unless I stop doing it when I don't use AI... not tried that yet.)

I do this too. Relatively small changes, atomic commits with extensive reasoning in the message (it keeps important context around). This is a best practice anyway, but it used to take an excruciating amount of effort. Now it’s easy!

Except that I’m still struggling to get the LLM to understand the audience and context of its utterances. Very often, after a correction, it will focus heavily on the correction itself, producing weird-sounding or confusing statements in commit messages and comments.

  • > Very often, after a correction, it will focus a lot on the correction itself making for weird-sounding/confusing statements in commit messages and comments.

    I've experienced that too. Usually when I request a correction, I add something like "Include only production-level comments (not change notes)". Recently I also added a special instruction for this to CLAUDE.md.
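    A sketch of what such a standing instruction might look like in CLAUDE.md (the wording is illustrative, not a documented convention):

    ```markdown
    ## Commit messages and code comments

    - Write comments and commit messages for a reader who never saw this
      conversation: describe what the code does now, not the correction
      that led here.
    - Never reference "the previous version", "as requested", or review
      feedback in committed text.
    ```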

We're learning the lessons of Agile all over again.

  • We're learning how to be an engineer all over again.

    The author's process is super close to what we were taught in engineering 101 forty years ago.

    • It's after we come down from the Vibe coding high that we realize we still need to ship working, high-quality code. The lessons are the same, but our muscle memory has to be re-oriented. How do we create estimates when AI is involved? In what ways do we redefine the information flow between Product and Engineering?

    • I'm currently having Claude help me reverse engineer the wire protocol of a moderately expensive hardware device, where I have very little data about how it works. You better believe "we" do it by the book. Large, detailed plan md file laying out exactly what it will do, what it will try, what it will not try, guardrails, and so on. And a "knowledge base" md file that documents everything discovered about how the device works. Facts only. The knowledge base md file is 10x the size of the code at this point, and when I ask it to try something, I ask Claude to prove to me that our past findings support the plan.

      Claude is like an intern coder-bro, eager to start crushin' it. But, you definitely can bring Claude "down to earth," have it follow actual engineering best practices, and ask it to prove to you that each step is the correct one. It requires careful, documented guardrails, and on top of it, I occasionally prompt it to show me with evidence how the previous N actions conformed to the written plan and didn't deviate.

      If I were to anthropomorphize Claude, I'd say it doesn't "like" working this way--the responses I get from Claude seem to indicate impatience and a desire to "move forward and let's try it." Obviously an LLM can't be impatient and want to move fast, but its training data seem to be biased towards that.
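      The plan-file/knowledge-base split described above might look something like this (the structure, commands, and byte values are invented for illustration, not taken from the commenter's actual files):

      ```markdown
      # plan.md — probe read command 0x1A
      ## Will do
      - Replay the captured 6-byte handshake, then send 0x1A with length 0.
      ## Will not do
      - Any write command; device firmware state must not change.
      ## Guardrails
      - Abort if the device responds with anything not already recorded
        in knowledge-base.md, and log the raw bytes for review.

      # knowledge-base.md — facts only
      - Frames start with 0xAA 0x55; byte 3 is the payload length. (capture, 2025-01-04)
      - Command 0x10 returns firmware version "2.31". (verified twice)
      ```

      Keeping the "will not do" list and the facts file separate is what lets you demand, at each step, that the agent justify a proposed action against recorded findings rather than its own enthusiasm.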


    • I always feel like I'm in a fever dream when I hear about AI workflows. A lot of it is stuff I've read in software engineering books and articles.

LLMs are really eager to start coding (as interns are eager to start working), so the sentence “don’t implement yet” has to be used very often at the beginning of any project.

  • Most LLM apps have a 'plan' or 'ask' mode for that.

    • I find that even then I often need to be clear that I'm just asking a question and don't want them running off to solve the larger problem.

Developers should work by wasting lots of time making the wrong thing?

I bet if they did a time-and-motion study on this approach they'd find the classic:

"Thinks they're more productive, AI has actually made them less productive"

But lots of lovely dopamine from this false progress that gets thrown away!

  • > Developers should work by wasting lots of time making the wrong thing?

    Yes? I can't even count how many times I worked on something my company deemed was valuable only for it to be deprecated or thrown away soon after. Or, how many times I solved a problem but apparently misunderstood the specs slightly and had to redo it. Or how many times we've had to refactor our code because scope increased. In fact, the very existence of the concepts of refactoring and tech debt proves that devs often spend a lot of time making the "wrong" thing.

    Is it a waste? No, it solved the problem as understood at the time. And we learned stuff along the way.

  • > Developers should work by wasting lots of time making the wrong thing?

    Yes. In fact, that's not emphatic enough: HELL YES!

    More specifically, developers should experiment. They should test their hypotheses. They should try out ideas by designing a solution and creating a proof of concept, then throw that away and build a proper version based on what they learned.

    If your approach to building something is to implement the first idea you have and move on, then you are going to waste far more time later refactoring architecture that paints you into corners, reimplementing things that didn't work for future use cases, fixing edge cases that you hadn't considered, and paying off a mountain of tech debt.

    I'd actually go so far as to say that if you aren't experimenting and throwing away solutions that don't quite work, then you're only amassing tech debt and not really building anything that will last. If it does last, it's through luck rather than skill.

    Also, this has nothing to do with AI. Developers should be working this way even if they handcraft their artisanal code carefully in vi.

    • >> Developers should work by wasting lots of time making the wrong thing?

      > Yes. In fact, that's not emphatic enough: HELL YES!

      You do realize there is prior research and there are well-tested solutions for a lot of things. Instead of wasting time making the wrong thing, it is faster to do some research to see whether the problem has already been solved. Experimentation is fine only after checking that the problem space is truly novel, or that there's not enough information around.

      It is faster to iterate in your mental space and in front of a whiteboard than in code.

    • I've been doing this a long time, I've never had to do that, and I've delivered multiple successful products used by millions of users. Some of them were used for years after we stopped doing any maintenance at all, with no bugs, problems, or crashes.

      There are only a few software architecture patterns because there are only a few ways to solve code architecture problems.

      If you're getting your initial design so wrong that you have to start again from scratch midway through, that shows a lack of experience, not insight.

      You wouldn't know this, but I'm also a bit of an expert at refactoring, having saved several projects that had built up so much technical debt the original contractors ran away. I've regularly rewritten thousands, if not tens of thousands, of lines into hundreds of lines of code.

      So it's especially galling to be told not only that somehow all code problems are unique (they almost never are), but my code is building technical debt (it's not, I solve that stuff).

      Most problems are solved, and you should be using other people's solutions to solve the problems you face.