Comment by ai_fry_ur_brain
7 hours ago
The key is to not run LLMs in loops. This trend of agentic frameworks is silly, and mostly exists to make LLM companies more revenue. An LLM on its own is of limited use, but it becomes much more useful and reliable with one-shot tooling.
I have a suite of tools I've built for myself on top of the OpenRouter API for very specific tasks. Press button and the LLM does (one) useful thing, not press button and let the LLM run tool calls in a loop for 5 minutes and hope it does things in the correct order.
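A minimal sketch of what such a one-shot tool might look like, using only the standard library. The OpenRouter endpoint is its documented OpenAI-compatible chat-completions URL; the `summarize` task and the model name are made-up examples, not anything from the comment above.

```python
import json
import urllib.request

OPENROUTER_URL = "https://openrouter.ai/api/v1/chat/completions"

def build_request(api_key: str, model: str, system: str, user: str) -> urllib.request.Request:
    """Build a single-shot chat-completion request: one prompt in, one answer
    out, no tool-call loop."""
    payload = {
        "model": model,
        "messages": [
            {"role": "system", "content": system},
            {"role": "user", "content": user},
        ],
    }
    return urllib.request.Request(
        OPENROUTER_URL,
        data=json.dumps(payload).encode(),
        headers={
            "Authorization": f"Bearer {api_key}",
            "Content-Type": "application/json",
        },
    )

def summarize(api_key: str, text: str, model: str = "anthropic/claude-sonnet-4") -> str:
    """Press button, LLM does one useful thing: summarize the given text.
    (Model name is a placeholder; pick whatever OpenRouter model you use.)"""
    req = build_request(api_key, model, "Summarize the user's text in one sentence.", text)
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["message"]["content"]
```

Splitting request-building from the network call keeps the payload logic testable without an API key.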
If multiple tools need to be called to do a useful thing, I chain those together deterministically in my code. This is much more reliable, as I can check the output of A before proceeding to task B or C, and it's also more time- and token-efficient. Agentic loops are a huge scam.
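The deterministic chaining described above might look something like this. The `llm_call` below is a stub standing in for a real single-shot completion request, and the ticket/reply tasks are hypothetical; the point is that ordinary code, not the model, decides whether step B runs.

```python
def llm_call(prompt: str) -> str:
    # Placeholder for a real one-shot completion request (e.g. via OpenRouter).
    return f"RESULT: {prompt.upper()}"

def step_a(ticket: str) -> str:
    """Task A: summarize a ticket, then validate the output deterministically."""
    out = llm_call(f"summarize ticket {ticket}")
    # Check the output of A before proceeding, instead of letting a loop flail.
    if not out.startswith("RESULT:"):
        raise ValueError(f"step A produced unusable output: {out!r}")
    return out

def step_b(summary: str) -> str:
    """Task B: only ever called with a summary that passed step A's check."""
    return llm_call(f"draft reply for {summary}")

def pipeline(ticket: str) -> str:
    summary = step_a(ticket)   # task A, validated in plain code
    return step_b(summary)     # task B runs only if A's output was usable
```

The ordering and error handling live in the host program, so a bad intermediate result fails fast rather than burning five minutes of tool calls.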
Often I find LLMs doing multiple steps to achieve some goal (e.g. performing certain operations against JIRA or GitLab), and if the LLM's work seems useful, I instruct it to create a tool that achieves the task more directly and revise the skill data to make use of the tool.
Granted, I've let it mostly vibecode those tools, so they might be garbage. I should perhaps have it do a refactoring round to make the tools more composable.
You are completely wrong, but one might get that impression from not using SOTA models in the Sonnet ballpark.
I think both preceding comments are a bit too strongly worded. I'm experimenting as well with pairing deterministic programming with LLM use in a similar fashion, and I find that it lets you squeeze more out of smaller models than LLM-only agentic loops do. It's also beyond question for me that the large SOTA models can do far more in LLM-only agentic loops with less hassle and pre-work. If you discount the hassle of actually running them, that is. So I guess it depends a bit on what your objective is.