Comment by jumploops

7 hours ago

This is neat, and matches an observation I saw with early Claude Code usage:

Sonnet would often call tools quickly to gather more context, whereas Opus would spend more time reasoning and trying to solve a problem with the context it had.

This led to lots of duplicated functions and slower development, though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less.

My takeaway was that “dumber” (i.e. smaller) models might be better as an agentic harness, or at least feasibly cheaper/faster to run for a large swath of problems.

I haven’t found Gemini to be particularly good at long-horizon tool calling, though. It might be interesting to distill traces from real Codex or Claude Code sessions, where there are long chains of tool calls between each user query.

Personally, I’d love a slightly larger model that runs easily on, e.g., a 32GB M2 MBP, but with tool-calling RL as the primary focus.

Some of the open weight models are getting close (Kimi, Qwen), but the quantization required to fit them on smaller machines seems to drop performance substantially.

The key is to not run LLMs in loops. This trend of agentic frameworks is silly, and mostly exists to make LLM companies more revenue. An LLM is mostly useless but is much more useful and reliable with one shot tooling.

I have a suite of tools I've built for myself on top of the OpenRouter API for very specific tasks. Press a button and the LLM does (one) useful thing, not press a button and let the LLM run tool calls in a loop for five minutes and hope it does things in the correct order.

If multiple tools need to be called to do a useful thing, I will chain those together deterministically in my code. This is much more reliable, as I can check the output of A before proceeding to task B or C, and it's also more time- and token-efficient. Agentic loops are a huge scam.
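A minimal sketch of the deterministic chaining described above. The tool functions and the validity check are hypothetical illustrations (not the commenter's actual code), and `call_llm` is stubbed out so the control flow is runnable; in practice it would wrap a single one-shot request to the OpenRouter API.

```python
def call_llm(prompt: str) -> str:
    """Stub for a single one-shot LLM call (would hit the OpenRouter API)."""
    return f"summary of: {prompt}"


def step_a(ticket: str) -> str:
    """Task A: one LLM call with a fixed, specific prompt."""
    return call_llm(f"Summarize this ticket: {ticket}")


def looks_valid(output: str) -> bool:
    """Deterministic check on A's output before any further LLM calls."""
    return output.startswith("summary of:") and len(output) < 2000


def pipeline(ticket: str) -> str:
    """Chain A -> B in plain code, gating on A's output instead of
    letting the model decide the call order inside a loop."""
    a = step_a(ticket)
    if not looks_valid(a):
        # Fail fast here rather than feeding garbage into task B.
        raise ValueError("step A produced unusable output")
    return call_llm(f"Draft a reply based on: {a}")  # task B
```

The point of the sketch is that the branch between "proceed to B" and "abort" lives in ordinary code you can test, not in the model's judgment mid-loop.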

  • Often I find LLMs doing multiple steps to achieve some goals (e.g. do certain operations against JIRA or Gitlab), and if the LLM work seems useful, I instruct it to create a tool to achieve the task more directly and revise skill data to make use of the tool.

    Granted, I've let it mostly vibecode those tools, so they might be garbage. I should perhaps have it do a refactoring round to make more composable tools.

  • You are completely wrong, but one might get that impression from not using SOTA models in the Sonnet ballpark.

    • I think both preceding comments are a bit too strongly worded. I’m also experimenting with pairing deterministic programming with LLM use in a similar fashion, and I find it lets you squeeze more out of smaller models than LLM-only agentic loops do. It's also beyond question for me that the large SOTA models can do far more in LLM-only agentic loops with less hassle and pre-work — if you discount the hassle of actually running them, that is. So I guess it depends a bit on what your objective is.

> and matches an observation I saw with early Claude Code

> though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less

> My takeaway was that

> haven’t found Gemini to be

For the love of all that's holy, folks, please stop investing your time filling in the gaps that the Slop Corporations are leaving wide open in their "tooling". Why should you strain yourself in an attempt to "make it work" one way or another? Google, MS, Meta, OpenAI etc. are all now subtly pushing to call their tooling "Intelligence" (not even Artificial Intelligence), so why is it not intelligent? Why does it not work? 1T+ in investments, and still we're expected to think up the best magic chants and configurations to make the slop generators produce half-valid output? All while some of the tech leaders are openly threatening to subdue us in their weird visions of "civilisation"? We have a better use for our superior brains; let's not denigrate ourselves into being helpless helpers to the magic oracle (if at least it were some magic oracle!)