Comment by neomantra
8 hours ago
Thanks for sharing this — I appreciate your motivation in the README.
One suggestion, and something I have been trying to do myself, is to include a PROMPTS.md file. Since your purpose is sharing and educating, it helps others see what approaches an experienced developer is taking, even if you are still figuring it out.
One can use a Claude hook to maintain this deterministically. I instruct in AGENTS.md that agents can read it but not write it. It's also been helpful when jumping between LLMs, to give them some background on what you've been doing.
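Something like this, as a rough sketch (it assumes Claude Code's `UserPromptSubmit` hook event and that the hook receives JSON on stdin with `prompt`, `cwd`, and `session_id` fields; the script path and the settings wiring shown in the docstring are just illustrative, not a prescribed setup):

```python
#!/usr/bin/env python3
"""UserPromptSubmit hook sketch: append each prompt to PROMPTS.md.

Wired up in .claude/settings.json, roughly:
  "hooks": { "UserPromptSubmit": [ { "hooks": [
      { "type": "command", "command": "python3 .claude/hooks/log_prompt.py" } ] } ] }
"""
import json
import sys
from datetime import datetime
from pathlib import Path


def main() -> None:
    # Claude Code passes hook input as JSON on stdin; the exact field
    # names ("prompt", "cwd", "session_id") are assumptions here.
    payload = json.load(sys.stdin)
    prompt = payload.get("prompt", "").strip()
    if not prompt:
        return

    log = Path(payload.get("cwd", ".")) / "PROMPTS.md"
    stamp = datetime.now().isoformat(timespec="seconds")
    with log.open("a", encoding="utf-8") as f:
        f.write(f"\n## {stamp} (session {payload.get('session_id', '?')})\n\n{prompt}\n")


if __name__ == "__main__":
    main()
```

Because it appends on every submitted prompt, the file stays a faithful, append-only record regardless of which model or session produced the work.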
In this case, instead of a prompt I wrote a specification, but later I had to steer the models for hours. So the prompt is really the sum of all those interactions: incredibly hard to reconstruct into something meaningful.
I've only just started using it, but the Ralph Wiggum / ralph loop plugin seems like it could be useful here.
If the spec and/or tests are sufficiently detailed, maybe you can step back and let it churn until it satisfies the spec.
Isn't the "steering" in the form of prompts? You note "Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development." You are a master of this, let others see how you cook, not just taste the sauce!
I only say this because it seems one of your motivations is education, and I'm also noting it for others to consider. Much appreciation either way; thanks for sharing what you did.
This steering is the main "source code" of the program that you wrote, isn't it? Why throw it away? It's like deleting the .c once you have obtained the .exe.
It's more noise than signal: it's disorganized and hard to glean value from (speaking from experience).
Doesn't Claude Code allow you to just dump entire conversations, with everything that happened in them?
All sessions are located in the `~/.claude/projects/foldername` subdirectory.
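If you want to turn those session dumps into something like a PROMPTS.md after the fact, here is a rough sketch. It assumes the sessions are JSONL files where each user turn is a record with `type == "user"` and a nested `message` object whose `content` is either a string or a list of content blocks; those field names are inferred from what the files look like, not a documented format, and `myproject-folder` is a placeholder.

```python
#!/usr/bin/env python3
"""Sketch: pull the user-side prompts out of Claude Code session logs."""
import json
from pathlib import Path


def extract_prompts(project_dir: Path) -> list[str]:
    prompts = []
    for session_file in sorted(project_dir.glob("*.jsonl")):
        for line in session_file.read_text(encoding="utf-8").splitlines():
            try:
                record = json.loads(line)
            except json.JSONDecodeError:
                continue
            if record.get("type") != "user":
                continue
            content = record.get("message", {}).get("content", "")
            # Content may be a plain string or a list of content blocks.
            if isinstance(content, list):
                content = "\n".join(
                    block.get("text", "")
                    for block in content
                    if isinstance(block, dict) and block.get("type") == "text"
                )
            if content.strip():
                prompts.append(content.strip())
    return prompts


if __name__ == "__main__":
    # "myproject-folder" stands in for the encoded project directory name.
    for p in extract_prompts(Path.home() / ".claude" / "projects" / "myproject-folder"):
        print(p, "\n---")
```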