Comment by fny

11 hours ago

Secret: "compile" that orchestration prompt. Determinism is solved by turning prompts into code that can, in turn, run agents, run code, or both.

Everyone misses this pattern with skills: you can just drop code alongside a SKILL.md to guarantee certain behaviors, but for some reason everyone's addicted to writing prompts. You don't even need to build a CLI. A simple skill.py with tasks does it. You can even have helpers that call `claude -p`!
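A minimal sketch of what this could look like. The file name, task names, and record shapes here are hypothetical; the only real dependencies are the Python standard library and, for the one nondeterministic step, the `claude` CLI being on your PATH (`claude -p` runs a one-shot prompt and prints the response):

```python
# skill.py -- hypothetical tasks dropped alongside a SKILL.md so the agent
# can call guaranteed-deterministic code instead of re-deriving steps from prose.
import json
import subprocess


def normalize_records(records):
    """Deterministic task: dedupe records by 'id' (last wins), sorted by id."""
    seen = {}
    for r in records:
        seen[r["id"]] = r
    return [seen[k] for k in sorted(seen)]


def summarize(text: str) -> str:
    """Helper for the one step that still needs a model: shells out to
    `claude -p` (assumes the Claude CLI is installed and on PATH)."""
    result = subprocess.run(
        ["claude", "-p", f"Summarize in one sentence:\n{text}"],
        capture_output=True, text=True, check=True,
    )
    return result.stdout.strip()


if __name__ == "__main__":
    print(json.dumps(normalize_records([{"id": 2}, {"id": 1}, {"id": 2}])))
```

The point is the split: everything that can be deterministic lives in plain functions the agent invokes, and the model is only consulted inside narrow helpers like `summarize`.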

Exactly this; I tend to work this way. I built an ingestion pipeline this way to pull concepts out of a novel using Qwen and push them into FalkorDB.

Could you elaborate on what "compiling the orchestration prompt" means?

  • When you get some abstraction working, you concretize it in something deterministic, sort of "caching" that bit of knowledge (i.e., writing a function, class, or library). From then on, the nondeterministic path has a deterministic piece to lean on as it explores the problem space. Rinse and repeat, and eventually you have a mostly deterministic system. Leave flexibility in the spaces where you actually need nondeterminism.

  • Rather than telling the LLM "loop through these files", tell it "write a script to loop through these files", then hard-code that script somewhere.

    • Eventually the models will know, from natural language alone, that they need to do this to get the thing done.

  • A guess, but I think they mean taking the orchestration prompt and prompting yet another LLM to turn that prompt into code?
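To make the "write a script to loop through these files" example concrete, this is the kind of throwaway the model would emit once and you would then hard-code. The function name and the choice of `.md` files are illustrative assumptions, not from the thread:

```python
# Instead of re-prompting "loop through these files" every run, keep the
# script the model wrote: a deterministic walk over the file tree.
from pathlib import Path


def collect_markdown(root: str) -> list[str]:
    """Return sorted paths of every .md file under root, recursively."""
    return sorted(str(p) for p in Path(root).rglob("*.md"))


if __name__ == "__main__":
    for path in collect_markdown("."):
        print(path)
```

Once this lives in the repo, the orchestration prompt shrinks to "run collect_markdown, then do X with each file", and the looping itself is no longer left to the model.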