Comment by prodigycorp
5 days ago
LLMs are generally bad at writing non-noisy prompts and instructions. It's better to have them write instructions post hoc. For instance, I paste this prompt at the end of most conversations:
> If there’s a nugget of knowledge learned at any point in this conversation (not limited to the most recent exchange), please tersely update AGENTS.md so future agents can access it. If nothing durable was learned, no changes are needed. Do not add memories just to add memories.
>
> Update AGENTS.md **only** if you learned a durable, generalizable lesson about how to work in this repo (e.g., a principle, process, debugging heuristic, or coding convention). Do **not** add bug- or component-specific notes (for example, “set .foo color in bar.css”) unless they reflect a broader rule.
>
> If the lesson cannot be stated without referencing a specific selector or file, skip the memory and make no changes. Keep it to **one short bullet** under an appropriate existing section, or add a new short section only if absolutely necessary.
It rarely adds rules, but when it does, the additions change behavior for the better. This works very well.
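If you end up pasting that prompt a lot, a tiny wrapper saves the retyping. A minimal sketch (the helper name and the shortened prompt text here are illustrative, not the exact wording quoted above):

```python
# Hypothetical convenience wrapper: append the memory-update instruction to
# the last message of a session so it gets pasted consistently. The function
# name and the abbreviated prompt text are illustrative only.

MEMORY_PROMPT = (
    "If there's a nugget of knowledge learned at any point in this "
    "conversation, tersely update AGENTS.md so future agents can access it. "
    "If nothing durable was learned, no changes are needed."
)


def with_memory_prompt(message: str) -> str:
    """Return the final user message with the memory-update prompt appended."""
    return f"{message.rstrip()}\n\n{MEMORY_PROMPT}"


if __name__ == "__main__":
    print(with_memory_prompt("Thanks, the flaky test is fixed."))
```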
Another common mistake is letting AGENTS.md grow too long. The file should stay short; if it's more than 200 lines, you're certainly doing it wrong.
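One way to keep yourself honest about that cap is a small check in CI or a pre-commit hook. A rough sketch, assuming the file sits at the repo root and using 200 as the limit:

```python
# Rough guard for the 200-line rule of thumb: run in CI or a pre-commit hook
# and fail if AGENTS.md has grown too long. The path and the limit are
# assumptions to adjust per repo.
import sys
from pathlib import Path

LIMIT = 200
path = Path("AGENTS.md")

if path.exists():
    line_count = len(path.read_text(encoding="utf-8").splitlines())
    if line_count > LIMIT:
        print(f"AGENTS.md is {line_count} lines; keep it under {LIMIT}.")
        sys.exit(1)
print("AGENTS.md length OK.")
```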
> If nothing durable was learned, no changes are needed.
Off topic, but oh my god, if you don't do this, it will always do the thing you conditionally requested it to do. Not sure what to call this, but it's my one big annoyance with LLMs.
It's like going to a sub shop and asking for just a tiny bit of extra mayo and they heap it on.
LLMs generally seem to be trained with the assumption that if you mention it, you want it.
I don't think the instruction-following benchmarks test for this much, and I don't know how you'd measure it well.
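One crude way you could probe it (purely a sketch; the harness and the pass criterion are made up for illustration): give the model paired tasks where the conditional clause does and doesn't apply, and count how often it acts anyway when it shouldn't.

```python
# Illustrative-only harness for "conditional over-compliance": how often a
# model performs a conditionally requested action when the condition does
# not actually hold. `ask_model` and `acted` are placeholders you would wire
# to a real API and a real check; nothing here is an existing benchmark.
from typing import Callable


def over_compliance_rate(
    ask_model: Callable[[str], str],
    cases: list[tuple[str, bool]],   # (prompt, condition_actually_holds)
    acted: Callable[[str], bool],    # did the reply perform the action anyway?
) -> float:
    spurious = total = 0
    for prompt, condition_holds in cases:
        if condition_holds:
            continue  # only the "should abstain" cases are counted
        total += 1
        if acted(ask_model(prompt)):
            spurious += 1
    return spurious / total if total else 0.0
```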