Comment by dc_giant

8 hours ago

Mind sharing the instructions you give Claude to go for minimal code changes etc?

I regularly prompt and re-prompt the clanker with esoteric terms like "subtractive changes" and "create by removing", and with more common phrases like "make the change easy, then make the easy change", "yagni", "vertical slices", and "WET code is desirable".

It mostly works. CC's plan mode creates a plan by cleaning up first, then defining narrow, integrated steps. Mentioning "subtractive" and "yagni" appears to be a reliable enough way for an LLM to choose a minimal path.

To my mind these instructions remain incantations and I feel like an alchemist of old.

  • Was just listening to the Lenny’s Podcast interview with Simon Willison, who mentioned another such incantation: red/green TDD. The model knows what this means and it just does it, with a nice bump in code quality apparently.

    I’m trying out another, which I call the principle of path independence: the code should reflect only the current requirements, not the order in which functionality was added — in other words, if you were to rebuild the system from scratch tomorrow, the code should look broadly similar to its current state. It sort of works, even though this isn’t a real term from its training data.
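For anyone unfamiliar with the red/green TDD incantation mentioned above, a minimal sketch of the loop (the `slugify` function here is a made-up example, not anything from the podcast):

```python
import re

# Red: write a failing test first, for a function that doesn't exist yet.
def test_slugify():
    assert slugify("Hello, World!") == "hello-world"

# Green: write just enough implementation to make that test pass.
def slugify(text: str) -> str:
    # Lowercase, keep alphanumeric runs, join them with hyphens.
    words = re.findall(r"[a-z0-9]+", text.lower())
    return "-".join(words)

test_slugify()  # passes now; the next cycle starts with a new failing test
```

The point of feeding this term to the model is that it then writes the test before the code, rather than rationalizing tests after the fact.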

I often say to Claude "you're doing X when I want Y, how can I get you to follow the Y path without fail" and Claude will respond with "Edit my claude.md to include the following" which I then ask Claude to do.
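The kind of snippet it suggests tends to look like this (wording here is illustrative, not Claude's actual output):

```markdown
## Code change policy
- Prefer subtractive changes: delete or simplify before adding.
- YAGNI: implement only what the current task requires.
- Make the change easy, then make the easy change.
```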

  • Not sure this is a great idea. The model only internalized what it was trained on, and writing prompts/context for itself isn't part of that. I try to keep my context as clean as possible; today's models mostly seem smart/aligned enough to be steered by a couple of keywords.