Comment by tomlis
10 hours ago
The deterministic-mixed-with-LLM approach has been great for me so far. I've been getting a lot of the gains the "do it all with AI" people have been preaching, but with far fewer pitfalls. It's sometimes not as fluid as what you see with the full-LLM-agent setups, but that's perfectly acceptable to me, and I handle those issues on a case-by-case basis.
I'd argue that the moment one cares about accuracy and blast radius, one naturally wants to reduce the error compounding that comes from chaining non-deterministic LLM calls, and it becomes very natural to defer to well-tested deterministic tools.
"Do one thing and do it well" building blocks, with the LLM acting as a translation layer that adds reasoning and routing capabilities. Doesn't matter if it's one agent or an orchestrated swarm of agents.
https://alexhans.github.io/posts/series/evals/error-compound...
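To make the "LLM as translation/routing layer over deterministic tools" idea concrete, here's a minimal Python sketch. Everything in it is illustrative: `call_llm` is a hypothetical stand-in for whatever model client you use, and the two tools are just toy examples of well-tested deterministic building blocks.

```python
import json
import os

def count_rows(path: str) -> str:
    """Deterministic building block: count data rows in a CSV file."""
    with open(path) as f:
        return str(sum(1 for _ in f) - 1)

def disk_usage(path: str) -> str:
    """Deterministic building block: size of a file in bytes."""
    return str(os.path.getsize(path))

TOOLS = {"count_rows": count_rows, "disk_usage": disk_usage}

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; wire up whichever client you actually use.
    Expected to return JSON like {"tool": "count_rows", "args": {"path": "data.csv"}}."""
    raise NotImplementedError

def route(user_request: str) -> str:
    # The LLM only translates intent into a tool choice plus arguments;
    # the work itself runs in deterministic, well-tested code.
    decision = json.loads(call_llm(
        f"Pick one tool and its arguments as JSON. Tools: {list(TOOLS)}. "
        f"Request: {user_request}"
    ))
    tool = TOOLS[decision["tool"]]     # unknown tool -> KeyError, fail fast
    return tool(**decision["args"])    # deterministic execution, bounded blast radius
```

The point is that the non-determinism is confined to one small routing decision, so an error there doesn't compound through the rest of the run.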
Yeah. One of the patterns I've fallen into looks a bit like this:
1. I have some new task I need/want to do.
2. For whatever reason, it's not something I want to do myself if I can avoid it.
3. Have the agent do it the first few times.
4. After those first few iterations, think about whether the variability in the number of steps needed to complete the task is small enough to put into a small script or service. If it is, either write the code myself or ask the agent to draft it based on its own observations of how it did the task those first few times. If it's not, just keep having the agent do it.
5. A good chunk of the time, most of the task has low variability in what it needs to do except for just one portion. In that case, use deterministic code for all areas of the program except the high-variability area (see the sketch after this list).
There's probably a better word than "variability" for what I'm talking about, but I think you get the idea. Spend a lot of tokens upfront so the tokens used later can be minimized when possible.
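Here's a rough sketch of what step 5 tends to look like for me, using a made-up log-summarising task; `call_llm` and all the function names are hypothetical, not anything specific from above.

```python
import csv

def load_rows(path: str) -> list[dict]:
    """Deterministic: parse the input the same way every run."""
    with open(path, newline="") as f:
        return list(csv.DictReader(f))

def filter_errors(rows: list[dict]) -> list[dict]:
    """Deterministic: simple, testable selection logic."""
    return [r for r in rows if r.get("level") == "ERROR"]

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; plug in whichever client/agent you already use."""
    raise NotImplementedError

def summarise(rows: list[dict]) -> str:
    """The one high-variability portion: free-form summarisation stays with the LLM."""
    return call_llm("Summarise these error log entries:\n" +
                    "\n".join(str(r) for r in rows))

def run(path: str) -> str:
    rows = filter_errors(load_rows(path))   # cheap, repeatable, easy to test
    return summarise(rows)                  # tokens spent only where they're needed
```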
EDIT: Formatting.
Yeah, the idea is clear. You're "integrating early" and "failing fast", and once you've understood enough about the problem you can design and optimize the right custom tool to make it more accurate, consistent, and cost-effective.
To be fair, it's a micro version of the way to approach projects rapidly: instead of trying to design too much upfront, identify the real value-producing goals and the risks in the middle that you can foresee, then get hands-on in a time-boxed manner to de-risk those individual points or understand what's not possible. Then you can actually come up with the right explanations for the design.