Comment by antirez
23 days ago
In this case, instead of a prompt I wrote a specification, but later I had to steer the models for hours. So the prompt is really the sum of all those interactions: incredibly hard to reconstruct into something meaningful.
This steering is the main "source code" of the program you wrote, isn't it? Why throw it away? It's like deleting the .c file once you have obtained the .exe.
It's more noise than signal, because it's disorganized and hard to glean value from (speaking from experience).
I wasn’t exactly suggesting this. The source code (including SVG, DOCX, or HTML+JS for document work) is the primary ground truth, which the LLM modifies. Humans might modify it too. This ground truth is then rendered (compiled, visualized) into the end product.
The PROMPTS.md is communication metadata. Indeed, if you fed in the same series of prompts again from scratch, the resulting ground truths might not make sense, because of the stochastic nature of LLMs.
Maybe “ground truth” isn’t exactly the right word, but it is the consistent, determinate basis that formed from past work and will evolve with future work.
> because of the stochastic nature of LLMs.
But is this "stochastic nature" inherent to the LLM? Can't you make the outputs deterministic by specifying a version of the weights and a seed for the random number generator?
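The idea can be illustrated with any seeded pseudo-random sampler: fix the "weights" and the seed, and the sampling becomes fully reproducible. A toy sketch in Python (the tiny vocabulary stands in for an LLM's token distribution; this is an illustration, not an actual LLM):

```python
import random

def sample_tokens(seed: int, n: int = 5) -> list[str]:
    """Toy stand-in for LLM sampling: a seeded RNG makes the
    'stochastic' choices fully reproducible."""
    rng = random.Random(seed)  # instance RNG, independent of global state
    vocab = ["the", "cat", "sat", "on", "mat"]  # stand-in for model weights
    return [rng.choice(vocab) for _ in range(n)]

# Same "weights" (vocab) + same seed => identical output on every run.
print(sample_tokens(seed=42) == sample_tokens(seed=42))  # True
```

In practice, though, hosted LLM APIs typically do not guarantee bit-for-bit determinism even with a fixed seed, because of floating-point non-associativity and batching effects on GPUs, so pinned weights plus a seed narrows but does not fully close the reproducibility gap.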
Your vibe coding log (i.e. your source code) may start like this:
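For instance (a hypothetical sketch; the model identifier, seed, and prompts are all illustrative):

```
model: example-model-2025-01-15    # pinned weights version
seed: 1234                         # RNG seed for sampling
Build a single-file HTML+JS todo list with local storage.
Refactor: extract the rendering code into a render() function.
```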
Notice that the first two lines may be added automatically by the system and you don't need to write or even see them.
I've only just started using it, but the ralph wiggum / ralph loop plugin seems like it could be useful here.
If the spec and/or tests are sufficiently detailed, maybe you can step back and let it churn until it satisfies the spec.
Isn't the "steering" in the form of prompts? You note, "Even if the code was generated using AI, my help in steering towards the right design, implementation choices, and correctness has been vital during the development." You are a master of this: let others see how you cook, not just taste the sauce!
I only say this because education seems to be one of your motivations. I'm also noting it for others to consider. Much appreciation either way, and thanks for sharing what you did.
Doesn’t Claude Code let you just dump entire conversations, with everything that happened in them?
All sessions are located in the `~/.claude/projects/foldername` subdirectory.
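Those session files are, as far as I can tell, JSONL transcripts (one JSON object per line). The exact schema is an assumption here and may differ across Claude Code versions, but a sketch for pulling out just the human-written prompts might look like this:

```python
import json

def user_prompts(jsonl_text: str) -> list[str]:
    """Extract human-written prompts from a session transcript.
    Assumes entries of the form {"type": "user", "message": {"content": ...}},
    where content is either a string or a list of content blocks."""
    prompts = []
    for line in jsonl_text.splitlines():
        if not line.strip():
            continue
        entry = json.loads(line)
        if entry.get("type") == "user":
            content = entry.get("message", {}).get("content")
            if isinstance(content, str):      # plain-text prompt
                prompts.append(content)
            elif isinstance(content, list):   # structured content blocks
                prompts.extend(b.get("text", "") for b in content
                               if isinstance(b, dict) and b.get("type") == "text")
    return prompts

# Tiny fabricated transcript, just to show the shape:
sample = "\n".join([
    json.dumps({"type": "user", "message": {"content": "Write a TODO app"}}),
    json.dumps({"type": "assistant", "message": {"content": "Sure..."}}),
])
print(user_prompts(sample))  # -> ['Write a TODO app']
```

To run it over real sessions you would iterate the `*.jsonl` files under `~/.claude/projects/` and feed each file's text in; validate the schema against your own transcripts first.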
Doesn't it lose prompts prior to the latest compaction?
aider keeps a log of this, which is incredibly useful.