Comment by Aurornis

2 days ago

> Making a prompt library useful requires iteration. Every time the LLM is slightly off target, ask yourself, "What could've been clarified?" Then, add that answer back into the prompt library.

I'm far from an LLM power user, but this is the single highest ROI practice I've been using.

You have to actually observe what the LLM is trying to do each time. Simply smashing enter over and over again or setting it to auto-accept everything will just burn tokens. Instead, see where it gets stuck and add a short note to CLAUDE.md or equivalent. Break it out into sub-files to open for different types of work if the context file gets large.
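
A rough sketch of the sub-file idea (the file names and command here are made up, just to show the shape):

```
# CLAUDE.md (keep this short; details live in sub-files)

Before starting, open the notes file that matches the task:

- Database / migration work -> docs/agent/db.md
- Frontend components       -> docs/agent/frontend.md
- Release and deploy        -> docs/agent/release.md

Always run `make test` before reporting a task as done.
```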

Letting the LLM churn and experiment for every single task will make your token quota evaporate before your eyes. Updating the context file constantly is some extra work for you, but it pays off.

My primary use case for LLMs is exploring code bases and giving me summaries of which files to open, tracing execution paths through functions, and handing me the info I need. It also helps a lot to add some instructions for how to deliver useful results for specific types of questions.

I'm with you on that, but I have to say I've been doing that aggressively, and it's pretty easy for Claude Code at least to ignore the prompts, commands, Markdown files, README, architecture docs, etc.

I feel like I spend quite a bit of time telling the thing to look at information it already knows. And I'm talking about cases where I HAVE actually created the various documents and prompts for it to use.

As a specific example, it regularly just doesn't reference CLAUDE.md, and it seems pretty random as to when it decides to drop that out of context. That includes right at session start, when it should have it fresh.

  • > and it's pretty easy for Claude Code at least to ignore the prompts, commands, Markdown files, README, architecture docs, etc.

    I would agree with that!

    I've been experimenting with having Claude re-write those documents itself. It can take simple directives and turn them into hierarchical Markdown lists with multiple bullet points. It's annoying and overly verbose for humans to read, but the repetition and structure seem to help the LLM.

    I also interrupt it and tell it to refer back to CLAUDE.md if it gets too off track.

    Like I said, though, I'm not really an LLM power user. I'd be interested to hear tips from others with more time on these tools.

  • > it seems pretty random as to when it decides to drop that out of context

    Overcoming this kind of nondeterministic behavior around creating/following/modifying instructions is the biggest thing I wish I could solve with my LLM workflows. It seems like you might be able to do this through a system of Claude Code hooks, but I've struggled with finding a good UX for maintaining a growing and ever-changing collection of hooks.

    Are there any tools or harnesses that attempt to address this and allow you to "force" inject dynamic rules as context?
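
    To make the hooks idea concrete, here's the kind of thing I mean (the rules file path is hypothetical). Claude Code's UserPromptSubmit hook runs on every prompt, and whatever the hook command prints to stdout gets added to the context, so a .claude/settings.json entry like this re-injects a rules file each turn:

    ```
    {
      "hooks": {
        "UserPromptSubmit": [
          {
            "hooks": [
              { "type": "command", "command": "cat .claude/rules/current.md" }
            ]
          }
        ]
      }
    }
    ```

    The command could just as easily be a script that decides which rules apply to the current work, which is closer to "dynamic rules" than a static CLAUDE.md, but maintaining that script is exactly the UX problem I haven't cracked.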

  • Agreed here. A key theme, which isn’t terribly explicit in this post, is that your codebase is your context.

    I’ve found that when my agent flies off the rails, it’s due to an underlying weakness in the construction of my program. The organization of the codebase doesn’t implicitly encode the “map”. Writing a prompt library helps to overcome this weakness, but I’ve found that the most enduring guidance comes from updating the codebase itself to be more discoverable.

    • > my agent flies off the rails

      Speaking of which: I've had it delete the entire project, .git included, out of "shame", so my Claude doesn't get permission to run rm anymore.

      Codex has fewer levers but it's deleted my entire project twice now.

      (Play with fire, you're gonna get burnt.)
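
      (Concretely, "doesn't get permission" is just a deny rule in Claude Code's settings, something like the sketch below in .claude/settings.json. It only catches commands that start with rm, so treat it as a guardrail, not a guarantee.)

      ```
      {
        "permissions": {
          "deny": [
            "Bash(rm:*)"
          ]
        }
      }
      ```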


  • Because, in my experience/conspiracy theory, the model providers are trying to make the models function better without needing these kinds of workarounds. So there's a disconnect: folks keep adding more explicit instructions, while the models are being trained to effectively ignore them under the guise of relying on their innate intuition/better learning/mixture of experts.

> Every time the LLM is slightly off target, ask yourself, "What could've been clarified?"

Better than that, ask the LLM. Better still, have the LLM ask itself. You do still have to make sure it doesn't go off the rails, but the LLM itself wrote this to help answer the question:

### Pattern 10: Student Pattern (Fresh Eyes)

*Concept:* Have a sub-agent read documentation/code/prompts "as a newcomer" to find gaps, contradictions, and confusion points that experts miss.

*Why it works:* Developers write with implicit knowledge they don't realize is missing. A "student" perspective catches assumptions, undefined terms, and inconsistencies.

*Example prompt:*

```
Task: "Student Pattern Review

Pretend you are a NEW AI agent who has never seen this codebase. Read these docs as if encountering them for the first time:

1. CLAUDE.md
2. SUB_AGENT_QUICK_START.md

Then answer from a fresh perspective:

## Confusion Points
- What was confusing or unclear on first read?
- What terms are used without explanation?

## Contradictions
- Where do docs disagree with each other?
- What's inconsistent?

## Missing Information
- What would a new agent need to know that isn't covered?

## Recommendations
- Concrete edits to improve clarity

Be honest and critical. Include file:line references."
```

*Use cases:* Before finalizing new documentation; evaluating prompts for future agents.