Comment by dboon

2 days ago

AI programming, for me, is just a few simple rules:

1. True vibe coding (one-shot, non-trivial, push to master) does not work. Do not try it.

2. Break your task into verifiable chunks. Work with Claude to this end.

3. Put the entire plan into a Markdown file; it should be as concise as possible. You need: a summary of the task; the individual problems to solve; references to files and symbols in the source code; and a work list, separated by verification points. Seriously, less is more.

4. Then, just loop: Start a new session. Ask it to implement the next phase. Read the code, ask for tweaks. Commit when you're happy.

Seriously, that's it. Anything more than that is roleplaying. Anything less is not engineering. Keep a list in the Markdown file of amendments; if it keeps messing the same thing up, add one line to the list.
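
To make rule 3 concrete, a plan file might look something like this (a minimal sketch; the task, phase names, and file references are all hypothetical, not from any real repo):

```markdown
# Task: Add rate limiting to the API gateway

Summary: reject requests over N/minute per client with a 429.

Problems to solve:
- Where to keep per-client counters (see `gateway/middleware.py`)
- Clock handling for the sliding window

## Phase 1: Counter logic
- Implement `RateLimiter` in `gateway/ratelimit.py`
- VERIFY: unit tests in `tests/test_ratelimit.py` pass

## Phase 2: Middleware hook
- Wire `RateLimiter` into `gateway/middleware.py`
- VERIFY: a curl loop against the dev server returns 429 after N requests

Amendments:
- Stdlib only; do not add new dependencies.
```

Each `VERIFY:` line is one of the verification points from rule 2, and the `Amendments` list is where the one-line corrections accumulate.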

To hammer home the most important pieces:

- Less is more. LLMs are at their best with a fresh context window. Keep one file, somewhere between 500 and 750 words (checking a recent one, I have 555 words / 4,276 characters). If that's not sufficient, the task is too big.

- Verifiable chunks. Every chunk must be verifiable. There is no other way. Verification could be unit tests, print statements, or watching a tmux session. But it must be verifiable.
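
As one concrete illustration of a verification point: a phase that produces a pure function can end with a tiny runnable check. The function below is a hypothetical stand-in (a sliding-window rate-limit check), not anything from a real codebase; the point is that "done" is defined by something executable.

```python
# Hypothetical verification point for a phase that added a
# sliding-window rate limiter. What matters is not this code,
# but that the phase ends with a check you can actually run.

def allow(timestamps, now, limit, window):
    """Return True if a request at `now` stays within `limit`
    requests over the trailing `window` seconds."""
    recent = [t for t in timestamps if now - t < window]
    return len(recent) < limit

# The verification itself: runnable via pytest or plain `python`.
assert allow([], now=10.0, limit=3, window=60) is True
assert allow([1.0, 2.0, 3.0], now=10.0, limit=3, window=60) is False
assert allow([1.0, 2.0, 3.0], now=100.0, limit=3, window=60) is True
print("phase verified")
```

If the asserts pass, the phase is done and you commit; if not, the failure is the next thing you hand back to the model.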

100% concur with this, as the owner of multiple 20k+ LOC repos with between 10% and 30% unmodified AI code in production.

If you treat it like a rubber duck, it's magic.

If you think the rubber duck is going to think for you, you shouldn't even start.

  • > 10-30% unmodified AI code in production

    That is an interesting metric but I think it is not that important.

    I would be careful with (AI-generated) code that no one at the team understands well. If that kind of code is put into production, it might become a source of dragging technical debt that no one is able to properly address.

    In my opinion, putting AI-generated code into production is okay, as long as it has been reviewed and there is a human who understands it well and can debug and fix it if needed.

    Or, alternatively, if it is throwaway code that does not need to be understood well, and no one cares about its quality or maintainability because it will not need to be maintained in the first place.

    • We understand it perfectly, which is why it went to production

      I didn’t say it wasn’t reviewed, I said it was unmodified

> it should be as concise as possible

What’s more concise than code? From my experience, by the time I’ve gotten an English-with-code description accurate enough for an agent, I could have done it myself. Typing isn’t the hard part.

LLMs/agents have many other uses, but if you’re not offloading your thinking, you’re not really going any faster by letting them write code from a prompt.

  • I find it quite interesting; there seems to be a set of AI enthusiasts who heavily offload thinking onto the LLM. There has to be a difference in how they function, because as soon as I drift into letting the LLM think for me, my productivity plummets.

  • > What’s more concise than code?

    The word "Tetris" is significantly more concise than the source code for Tetris.

    "Create a Tetris clone" is a valid four-word prompt. It's not going to produce a perfect one-shot, but it'll get you 90% of the way there.

    > I could have done it myself. Typing isn’t a hard part.

    No, but it is slow. Claude can put together Tetris in 5 minutes. I most definitely cannot.

    • A traditional programming language still wins there. "git clone $TETRIS_CLONE_REPO" is fewer words, gets you 100% of the way, and only takes seconds to produce the result.

      But the topic at hand is about novel problems. Can you describe your novel solution to an LLM in a natural language with less effort than a programming language that is already designed for describing novel solutions as clearly and concisely as possible?