
Comment by akdev1l

1 day ago

> Carefully test your markdown scripts interactively first

How does it help?

You run it once, but the thing is not deterministic, so the next time it could shoot you in the foot.

In practice, after using this for real-world test suites and evaluations, the results with Claude Code are remarkably consistent if you do it sensibly. That's because you can still write the deterministic parts as a traditional `./run_tests.sh` bash script (or `run_tests.py`, etc.).

So you're using the appropriate tool for each part of the task at hand, embedded within both traditional scripts and markdown scripts.

Examples:

- A bash script summarizes text files from a path, in a loop.
- A markdown script runs `./test/run_tests.py` and summarizes the results.
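
To make the split concrete, here is a minimal sketch. The prompt wording, the `./docs` path, and the assumption that `claude-run` feeds piped stdin to the markdown script's prompt are mine, not from the original post; the point is only that the deterministic loop stays in bash while the fuzzy step is delegated to a small markdown script.

```bash
#!/usr/bin/env bash
# Hypothetical split: the deterministic loop lives in bash, while the fuzzy
# summarization is delegated to a small, single-purpose markdown script.
set -euo pipefail

# summarize.md is illustrative only; its contents and the stdin convention
# of claude-run are assumptions, not documented behaviour.
cat > summarize.md <<'EOF'
Summarize the piped input in three bullet points, preserving file paths verbatim.
EOF

for f in ./docs/*.txt; do
  echo "== $f =="
  claude-run --haiku summarize.md < "$f"
done
```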

Tools like Claude Code combined with executable scripts and pipes open up a genuinely new way of doing tasks that are traditionally hard with scripting languages alone. I expect we will see a mix of both approaches, with each used based on its strengths, as we're seeing with application development too.

It is a new world and we're all figuring this out.

[Edit for style]

  • I mean, in such a case it is equivalent to something like `do-something | llm "summarize the thing"`

    Personally, I see “prompt scripting” as strictly worse than code.

    You cannot even modify one part of the prompt and be sure that there won't be random side effects.

    And from what I've seen, these prompts can (and do tend to) grow into hundreds of lines as they become more specific and people try to “patch” the edge cases.

    It ends up being like code but strictly worse.

    • One of the advantages of using executable Markdown files with pipe support is that it allows you to create composable building blocks that can be chained together.

      So you can build individual prompt-based scripts (`format.md`, `summarize.md`, etc.) that are each small, simple, and focused on a single task. Then you can chain those prompt scripts together with regular command-line tools and bash scripts.

      I find that approach quite powerful, and it helps overcome the need for massive prompts. They can also be invoked from within Claude Code in interactive mode.
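
      A minimal sketch of that kind of chaining (the script names come from the point above; the assumption that each prompt script reads stdin and writes plain text to stdout is mine):

      ```bash
      # Pipe deterministic output through small, single-purpose prompt scripts.
      # Assumes claude-run reads stdin and writes its result to stdout.
      ./test/run_tests.py 2>&1 \
        | claude-run summarize.md \
        | claude-run format.md \
        > report.md
      ```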

Is it possible to pin a model + seed for deterministic output?

  • Even if the LLM theoretically supported this, it's a big leap of faith to assume that all models on all their CPUs are always perfectly synced up, and that there are never any silently slipstreamed fixes because someone figured out how to get the model to emit bad words or blueprints for a neutron bomb, etc.

    • Most of the cloud providers give you a choice of two ways of referring to models: either a specific dated model id (like the example above), or a shorter alias that generally points to the latest release of that model and is more likely to change over time.

      We add some additional flags, `--opus`, `--sonnet`, and `--haiku`, as shortcuts to abstract this away even further if you just want to use the latest model releases.

      Example: run the latest Haiku via the Vercel AI Gateway, with unified billing and cross-cloud fallback between providers:

      `claude-run --haiku --vercel task.md`
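
      By contrast, pinning a specific dated release might look something like this. The `--model` flag is my assumption about `claude-run`, not a documented option; the dated id follows Anthropic's published naming.

      ```bash
      # Hypothetical: assumes claude-run accepts a --model flag for a dated model id.
      claude-run --model claude-3-5-haiku-20241022 task.md
      ```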

      AWS Bedrock, at least, appears to be pretty steady when you pin a model now, according to our own evals anyway. Earlier on there was some performance degradation at peak load, etc.

The question is: how reliable does it need to be? Of course we want guaranteed 100% uptime, but the human body is nowhere near that, what with sleeping, nominally, for 8 hours a day. That's roughly 67% uptime.

Anyway, it succeeds often enough that some of us just wear steel-toed boots.