When you get some abstraction working, you concretize it into something deterministic, or sort of “cache” that bit of knowledge (i.e. write a function, class, library, whatever). In the future, the nondeterministic path has a deterministic piece to lean on as it explores the problem space. Rinse, repeat, and eventually you have a mostly deterministic system. Leave flexibility in the spaces where you actually need that nondeterminism.
Rather than telling the LLM "loop through these files", tell it "write a script to loop through these files", then hard-code that script somewhere rather than generating it each time.
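A minimal sketch of that pattern in Python, with hypothetical names: the file-walking scaffolding is what the LLM would generate once and you would then commit as-is, while the per-file step (stubbed here as `analyze`) is the slot where the nondeterministic piece, e.g. an LLM call, stays flexible.

```python
from pathlib import Path

def analyze(text: str) -> str:
    # Placeholder for the remaining nondeterministic piece
    # (e.g. an LLM call); stubbed so the sketch runs.
    return f"{len(text.splitlines())} lines"

def process_files(root: str, pattern: str = "*.py") -> dict[str, str]:
    # Deterministic scaffolding: generated by the LLM once,
    # then hard-coded so it never has to be regenerated.
    results = {}
    for path in sorted(Path(root).rglob(pattern)):
        results[str(path)] = analyze(path.read_text(encoding="utf-8"))
    return results
```

The point is that only `analyze` needs a model in the loop; the loop itself is now ordinary reviewable code.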
A guess, but I think they mean: take the orchestration prompt and prompt yet another LLM to turn that prompt into code?