Show HN: Needle: We Distilled Gemini Tool Calling into a 26M Model


Hey HN, Henry here from Cactus. We open-sourced Needle, a 26M parameter function-calling (tool use) model. It runs at 6000 tok/s prefill and 1200 tok/s decode on consumer devices.

We were frustrated by how little effort goes into building agentic models that run on budget phones, so we dug into the problem and landed on an observation: agentic experiences are built on tool calling, and massive models are overkill for it. Tool calling is fundamentally retrieval-and-assembly (match the query to a tool name, extract argument values, emit JSON), not reasoning. Cross-attention is the right primitive for this, and FFN parameters are wasted at this scale.

Simple Attention Networks: the entire model is just attention and gating, with no MLPs anywhere. Needle is an experimental run at single-shot function calling for consumer devices (phones, watches, glasses, ...).

Training:

- Pretrained on 200B tokens across 16 TPU v6e (27 hours)

- Post-trained on 2B tokens of synthesized function-calling data (45 minutes)

- Dataset synthesized via Gemini with 15 tool categories (timers, messaging, navigation, smart home, etc.)

You can test it right now and fine-tune it on your Mac/PC:

Hugging Face: https://huggingface.co/Cactus-Compute/needle

GitHub: https://github.com/cactus-compute/needle
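
For readers skimming the thread, here is a minimal sketch of the single-shot contract the examples below use, assuming the model takes a JSON list of tool schemas plus a user query and emits a JSON array of tool calls. `run_needle` is a hypothetical stand-in (returning a canned response) for whatever inference call the repo actually exposes; see the README for the real API.

  import json

  # Hypothetical stand-in for the actual Needle inference call; returns a
  # canned response so the input/output contract is visible end to end.
  def run_needle(tools: str, query: str) -> str:
      return '[{"name":"get_weather","arguments":{"location":"San Francisco"}}]'

  tools = json.dumps([
      {"name": "get_weather", "parameters": {"location": "string"}},
      {"name": "send_email",
       "parameters": {"to": "string", "subject": "string", "body": "string"}},
  ])

  # Single-shot function calling: one query in, a JSON array of tool calls out.
  calls = json.loads(run_needle(tools=tools, query="What is the weather in San Francisco"))
  for call in calls:
      print(call["name"], call["arguments"])  # get_weather {'location': 'San Francisco'}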

Do you have any examples or data on the discriminatory power of the model for tool use?

The examples are things like "What is the weather in San Francisco", where you are only passed a tool like

  tools='[{"name":"get_weather","parameters":{"location":"string"}}]',

I had a thing[1] over 10 years ago that could handle this kind of problem using SPARQL and knowledge graphs.

My question is how effective it is at handling ambiguity.

Can I send it something like a text message "lets catch up at coffee tomorrow 10:00" and a command like "save this" and have it choose an "add appointment" action from hundreds (or even tens) of possible tools?

[1] https://github.com/nlothian/Acuitra/wiki/About

  • Thanks to a Hugging Face space linked below, I tested it and I'm not impressed. Prompt: i need to contact my boss i will be late. Result: 20mins [{"name":"set_timer","arguments":{"time_human":"20 minutes"}}]. It didn't use the email tool and I tried 2-3 different ways of asking it.

    • Query: context: { "boss_email": "bigboss69420@corporatepersonhood.net", "upcoming_meetings": [{ with: "bigboss69420@corporatepersonhood.net", "time": "11:00" }] } user: i need to contact my boss i will be late, could you tell him I'll be 15 minutes late?

      Output: [{"name":"send_email","arguments":{"to":"bigboss69420@corporatepersonhood.net","subject":"upcoming_meetings","body":"I'll be 15 minutes late"}},{"name":"send_email","arguments":{"to":"bigboss69420@corporatepersonhood.net","subject":"time","body":"I'll be 15 minutes late"}},{"name":"send_email","arguments":{"to":"bigboss69420@corporatepersonhood.net","subject":"time","body":"I'll be 15 minutes late"}}]

      Context definitely helps. But yeah the quality of it doesn't seem to be too high. To be fair it makes you realise that not only is parameter extraction required, but also content generation (email body). Also debouncing the 3 tool calls.

      Maybe under very specific circumstances/very tight harness this sort of model would be useful?

    • works for me:

      input: i need to contact my boss i will be late. output: [{"name":"send_email","arguments":{"to":"boss@company.com","subject":"Running late","body":"I will be late for the meeting."}}]

      it did have the send_email tool on the left hand side though


Hmm.. this might make it feasible to build something like a command line program where you can optionally just specify the arguments in natural language. Although I know people will object to including an extra 14 MB and the computation for "parsing" and it could be pretty bad if everyone started doing that.

But it's really interesting to me that that may be possible now. You can include a fine-tuned model that understands how to use your program.

E.g. `> toolcli what can you do` runs `toolcli --help summary`, `toolcli add tom to teamfutz group` = `toolcli --gadd teamfutz tom`
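
A minimal sketch of that idea, with a hypothetical `toolcli` whose existing flags are described to the model as tools and a stand-in `run_model` for the actual on-device inference call:

  import json, subprocess, sys

  # Hypothetical: existing toolcli subcommands/flags described as tools.
  TOOLS = json.dumps([
      {"name": "help_summary", "parameters": {}},
      {"name": "group_add", "parameters": {"group": "string", "user": "string"}},
  ])

  def run_model(tools: str, query: str) -> str:
      # Stand-in for the tiny on-device model; canned response for illustration.
      return '[{"name":"group_add","arguments":{"group":"teamfutz","user":"tom"}}]'

  query = " ".join(sys.argv[1:])  # e.g. "add tom to teamfutz group"
  for call in json.loads(run_model(TOOLS, query)):
      if call["name"] == "group_add":
          args = call["arguments"]
          subprocess.run(["toolcli", "--gadd", args["group"], args["user"]])  # toolcli is hypothetical
      elif call["name"] == "help_summary":
          subprocess.run(["toolcli", "--help", "summary"])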

Are you worried about Google's response to this? Google reportedly reacts to distillation attempts "with real-time proactive defenses that can degrade student model performance". So if they detected you, they could have intentionally fed you a dumber but plausible variant of Gemini: https://cloud.google.com/blog/topics/threat-intelligence/dis...

But also, this model is small and just focused on tool use. In terms of token usage, you're probably nowhere near the people who are trying to distill the entire model.

Suggestion: publish a live demo of the "needle playground". It's small enough that it should be pretty cheap to run this on a little VPS somewhere!

>Experiments at Cactus showed that MLPs can be completely dropped from transformer networks, as long as the model relies on external knowledge source.

Heh, what a coincidence: just today one of my students presented research results that also confirmed this. He removed the MLPs from Qwen and the model could still do transformation tasks on its input, but it lost knowledge.

This is neat, and matches an observation I saw with early Claude Code usage:

Sonnet would often call tools quickly to gather more context, whereas Opus would spend more time reasoning and trying to solve a problem with the context it had.

This led to lots of duplicated functions and slower development, though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less.

My takeaway was that “dumber” (i.e. smaller) models might be better as an agentic harness, or at least feasibly cheaper/faster to run for a large swath of problems.

I haven’t found Gemini to be particularly good at long horizon tool calling though. It might be interesting to distill traces from real Codex or Claude code sessions, where there’s long chains of tool calls between each user query.

Personally, I’d love a slightly larger model that runs easily on an e.g. 32GB M2 MBP, but with tool calling RL as the primary focus.

Some of the open weight models are getting close (Kimi, Qwen), but the quantization required to fit them on smaller machines seems to drop performance substantially.

  • The key is to not run LLMs in loops. This trend of agentic frameworks is silly, and mostly exists to make LLM companies more revenue. An LLM on its own is mostly useless, but it becomes much more useful and reliable with one-shot tooling.

    I have a suite of tools I've built for myself on top of the OpenRouter API for very specific tasks. Press a button and the LLM does (one) useful thing, not press a button and let the LLM run tool calls in a loop for 5 minutes and hope it does things in the correct order.

    If multiple tools need to be called to do a useful thing, I will chain those together deterministically in my code. This is much more reliable, as I can check the output of A before proceeding to task B or C, and it's also more time and token efficient. Agentic loops are a huge scam.
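
    A minimal sketch of that deterministic-chaining pattern, with hypothetical tool functions and a stubbed one-shot model call (not the commenter's actual code): each step's output is checked in ordinary code before the next tool runs.

      # Hypothetical tools; the LLM is only used for the one step that needs it.
      def web_search(query: str) -> list[str]:
          return ["https://example.com/article"]

      def summarize_with_llm(url: str) -> str:
          # One-shot model call (e.g. via the OpenRouter API); stubbed here.
          return "Short summary of " + url

      def send_email(to: str, body: str) -> bool:
          return True

      # Deterministic chain: check A's output before proceeding to B, then C.
      results = web_search("needle 26M tool calling")
      if results:
          summary = summarize_with_llm(results[0])
          if summary:
              send_email("me@example.com", summary)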

  • > and matches an observation I saw with early Claude Code

    > though the new models (GPT-5.5 and Opus 4.6) seem to suffer from this less

    > My takeaway was that

    > haven’t found Gemini to be

    For the love of all that's holy, folks please stop investing your time to fill in the gaps that the Slop Corporations are leaving wide open in their "tooling". Why should you strain yourself in an attempt to "make it work" one way or another? Google, MS, Meta, OpenAI etc. are all now subtly pushing to call their tooling "Intelligence" (not even Artificial Intelligence), so why is it not intelligent? Why does it not work? 1T+ investments and still we should think of best magic chants and configurations to make the slop generators produce half-valid output? All while some of the tech leaders are openly threatening to subdue us in their weird visions of "civilisation" ? We have a better use for our superior brains, let's not denigrate ourselves into being helpless helpers to the magic oracle (if at least it was some magic oracle!)

That M versus B is way too subtle. 0.026B is my suggestion

How could you use this for composability? I.e. chaining together multiple tools. For example web_search → summarize_url → send_email

  • Looks possible. E.g.

    Query: get the weather for san francisco and email the result to test@test.com

    Result: [{"name":"get_weather","arguments":{"location":"san francisco"}},{"name":"send_email","arguments":{"to":"test@test.com","subject":"San Francisco","body":"Please find the weather attached."}}]

I'm so excited for this, nice work!

Gemma4 edge models were promised to be great for agentic use, but have been really disappointing in all my tests. They fail at the most basic tool use scenarios.

Have you run any tool-use benchmarks for Needle, or do you plan to? Would be great if you could add results to the repo if so.

Lovely to see the push for tiny models.

I have been building for small (20B or less) models for quite a while. Highly focused/constrained agents, many of them running together in some kind of task orchestration mode to achieve what feels like one "agent".

I build (privacy first) desktop apps this way and I want to get into mobile apps with similar ideas but tiny models.

  • Commercial or FOSS? I've been researching the mobile side and it's very exciting!

    • Most of my own products are GPLv3 licensed. There are a few with MIT but I may switch to GPLv3. I want to make money with hosting though.

      Desktop apps are with Tauri, so they are also web apps if/when I sell hosting.

A lot of agent workflows really are just tool selection + argument extraction + structured output. How does this behave once workflows become multi-step and state starts accumulating across calls?

Dumb questions, from someone not in the field...

What is a distilled model?

Why doesn't Google do this (to make their models smaller)?

Seems like you could make a competitor to Gemini?

  • There are two answers already and neither is entirely adequate.

    In normal LLM training, you take a set of documents and have the model learn to predict the next token, then use some private RLHF/RLVR etc. data so it learns to produce good chat outputs.

    In distillation, you take a set of prompts you are interested in, and record the big LLM's outputs, then train your small model to produce the same output as the big LLM.

    This has a few advantages - you can get performance much more quickly on your documents/prompts of interest, with a much cheaper training budget, and you don't have to worry about acquiring very expensive RLHF/RLVR training data.

    A lot of the very good Chinese LLMs got very good very quickly through distillation from frontier models, which is why Anthropic/Google/OpenAI are blocking it so aggressively.

    • For completeness' sake I'll add a bit more.

      The concept of distillation is not new in ML, and there are nuances to it. Traditionally you would have access to the bigger model, and for LLMs specifically you can train the small model on the entire distribution of output logits at the same time. So this would train the small model to output scores for each token in a similar fashion to the large model. There's "more to learn" from the entire distribution, rather than just from the chosen token.

      But since you don't have access to this from the API providers, the next best thing is to use the outputs themselves and train on those. That's more like a "poor man's distillation". It's still good, and as you mentioned worked fairly well for models catching up. But a lab that develops both the big model and the small model could make it better. (or you could choose to distill from an existing open model).
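
      A minimal PyTorch sketch of the difference, with random tensors standing in for real model outputs: full distillation matches the teacher's whole next-token distribution (soft targets via KL divergence), while API-only distillation can only train on the tokens the teacher actually emitted (hard targets).

        import torch
        import torch.nn.functional as F

        vocab, batch = 32000, 8
        student_logits = torch.randn(batch, vocab, requires_grad=True)
        teacher_logits = torch.randn(batch, vocab)      # needs white-box access
        sampled_tokens = teacher_logits.argmax(dim=-1)  # roughly what an API exposes

        # Full distillation: match the teacher's entire distribution (soft targets).
        kd_loss = F.kl_div(F.log_softmax(student_logits, dim=-1),
                           F.softmax(teacher_logits, dim=-1),
                           reduction="batchmean")

        # "Poor man's" distillation: train on the emitted tokens only (hard targets).
        hard_loss = F.cross_entropy(student_logits, sampled_tokens)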

  • No question is stupid!

    1. Distilled means taking the intelligence of a big model and compacting it into a tiny model.

    2. Google already does so with FunctionGemma, but Needle argues that better performance can be achieved with a 10x smaller model using our techniques.

  • Model distillation is lossy compression of big model to produce a smaller model.

    Smaller model requires less space on disk, less video memory, and less compute (cheaper hardware).

    The downside is that the distilled model performs worse on the same benchmarks than the original model.

This is pretty much exactly what I want for Home Assistant. I yell out, "Computer! Lights!" and it toggles the lamp in the room on or off. (I mean I can do that now, I think, but probably with a much larger model.)

I haven't played with it yet, but does it ever return anything other than a tool call? What are the failure modes? What if it doesn't understand the request? Does it ever say it can't find a tool? Does it get confused if there are two similar (but different) tools? Can it chain tools together (e.g. one tool to look up an address and another to get directions to the address)?

I mean, I plan on downloading the model later tonight and finding out for myself, but since I'm stuck at work right now, I figured I'd ask anyway...

Can it summarize text it fetches?

Come to think of it, this could be a nice model to have as the first pass in a more complex agent system, where Needle hands off the results of a tool call to a larger model.

I will definitely play around with this!
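
One way that first-pass arrangement could look, as a minimal sketch with stand-ins for both models (none of these calls reflect an actual API):

  import json

  def needle_route(tools: str, query: str) -> list[dict]:
      # Stand-in for the small on-device tool-calling model.
      return json.loads('[{"name":"fetch_url","arguments":{"url":"https://example.com"}}]')

  def fetch_url(url: str) -> str:
      return "<html>...long page text...</html>"

  def summarize_with_large_model(text: str) -> str:
      # Stand-in for a bigger (possibly remote) model that handles generation.
      return "One-paragraph summary."

  tools = json.dumps([{"name": "fetch_url", "parameters": {"url": "string"}}])
  for call in needle_route(tools, "summarize https://example.com for me"):
      if call["name"] == "fetch_url":
          page = fetch_url(**call["arguments"])
          print(summarize_with_large_model(page))  # hand off to the larger model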

Of all the models that do tool calls, the only thing I'm confused about is why you picked the worst? Or maybe they're only bad at agentic work and fine for one-shot tool calls?

  • Gemini is pretty solid for 1-shot tool calls and affordable as well.

    • My general understanding of the consensus on most models these days is that people consider Google models to be some of the worst at tool calling, so it's certainly an interesting choice. Did you do any evals on this?

    • Hi, I'd love to know where you got that impression on 1-shot tool calling; was there a concrete evaluation carried out? I'm pretty new to this and was a bit lost when trying to compare models on different capabilities.

Can this be a Siri-like core? Set me a timer, tell me what's the weather, etc. Here is the transcribed text and an available list of tools for the model to call, then voice the output.
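
That pipeline could plausibly look like the sketch below: openai-whisper for transcription, a stand-in for the Needle call, and pyttsx3 for the spoken reply. The `needle_route` function and the tool schema are assumptions for illustration, not the project's actual API.

  import json
  import whisper  # pip install openai-whisper
  import pyttsx3  # pip install pyttsx3

  def needle_route(tools: str, query: str) -> list[dict]:
      # Stand-in for the on-device tool-calling model.
      return json.loads('[{"name":"set_timer","arguments":{"time_human":"10 minutes"}}]')

  tools = json.dumps([{"name": "set_timer", "parameters": {"time_human": "string"}}])

  # Transcribe a recorded voice command, route it to a tool, speak the result.
  text = whisper.load_model("tiny").transcribe("command.wav")["text"]
  call = needle_route(tools, text)[0]

  engine = pyttsx3.init()
  engine.say(f"Okay, setting a timer for {call['arguments']['time_human']}.")
  engine.runAndWait()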

I don't really understand what this is for... there is a lot of ML-researcher talk on the GH page about the model architecture, but how should I use it?

Is it a replacement for Kimi 2.7, Claude Haiku, Gemini Flash 3.1 lite, a conversational LLM for the situations where it's mostly tool-calling like coding and conversational AI?

  • It is for building agentic capabilities into very small devices like phones, glasses, watches and more. Does that make sense?

    • I'm having trouble understanding why someone would want that? Like, what are the product use-cases of such a thing? I understand why people want that for coding agents--although the jury is still very much out on whether those are terribly useful--but I cannot fathom what someone might want an agent to do on a cell phone? Is there some user-facing activity on a phone that's similar to coding with a tight, objectively measurable feedback loop (analogous to dev/compile/test)?

      EDIT: more of you cretins have downvoted than have replied.. so.. show your cards.


No FFN is blowing my mind. This is pretty much "Attention Is ACTUALLY All You Need". Reminds me of BERT Q&A which would return indices into the input context, but even that had a FFN. Really exciting work.

  • I guess this had always been bugging me. I get why you need activations/non-linearities, but do you really need the FFN in Transformers? People say that without it you can't do "knowledge/fact" lookups, but you still have the Value part of the attention, and if your question is "what is the capital of france" the LLM could presumably extract "paris" from the value vector during attention computation instead of needing the FFN for that. Deleting the FFN is probably way worse in terms of scaling laws or storing information, but is it an actual architectural dead end (in the way that deleting the activation layers clearly would be, since it'd collapse everything to a linear function)?

    • > if your question is "what is the capital of france" the LLM could presumably extract out "paris" from the value vector during attention computation instead of needing the FFN for that.

      But how do you get 'Paris' into the value vector in that case? The value vector is just the result of a matrix multiplication, and without a nonlinearity it can't perform a data-dependent transformation. Attention still acts as a nonlinear mixer of previous values, but your new output is still limited to a convex combination of previous values.
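
      In symbols, the limitation being described: standard attention output is a softmax-weighted, hence convex, combination of the value vectors (this is just the usual attention formula, nothing Needle-specific).

        o_i = \sum_j \alpha_{ij} v_j, \qquad
        \alpha_{ij} = \frac{\exp(q_i^\top k_j / \sqrt{d})}{\sum_{j'} \exp(q_i^\top k_{j'} / \sqrt{d})},
        \qquad \alpha_{ij} \ge 0, \quad \sum_j \alpha_{ij} = 1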


Is the idea here to add function calling to models that don't have it, or even improve function calling (qwen quirks)?

Nice catch. Using an agent for simple tasks is inefficient and wasteful; Needle really resolves this. Looking forward to future upgrades!

Query: set a timer for 1 hour

Result: [{"name":"set_timer","arguments":{"time_human":"1 hour"}}]

Query: in 1 hour set a timer for 1 hour

Result: [{"name":"set_timer","arguments":{"time_human":"1 hour"}}]

I'd expect either a chain load or just a 2-hour timer. Further attempts humorously give two separate 1-hour timers.

This is very cool. I'm going to try to carve out some time to try building this into my MOO system (https://codeberg.org/timbran/moor / https://timbran.org/moor.html) as an alternative command-parser front end.

I assume this would only be useful as the second stage after a model like Whisper, as it can't understand speech where you'd want it, like on a phone or small device?

What is the use case for this?

  • Something like this together with MCP can replace APIs for 3rd-party integrations. You just give it instructions to "post a message in Slack" and provide it the Slack MCP tools, and it figures out the rest on its own. No need to read up on the Slack API docs or worry about breaking changes.

I source old, defective high-end radios with timeless designs from brands like Grundig or Braun, and replace the original hardware with a Raspberry Pi while using the original audio parts to build custom smart speakers. Reliable hotword detection and voice command recognition have been a persistent challenge over the years, but Whisper and other small models have helped enormously. At the moment I have Ollama running on my server with Qwen 9B, which works fine, but a 26M model that could be deployed on the Pi itself would be amazing.

FYI, distilling Gemini is explicitly against the ToS:

"You may not use the Services to develop models that compete with the Services (e.g., Gemini API or Google AI Studio). You also may not attempt to reverse engineer, extract or replicate any component of the Services, including the underlying data or models (e.g., parameter weights)."

  • Yeah I think Google should shove that somewhere. They effectively distilled all the internet's knowledge into these models...without asking & without permission

  • Thanks, Needle doesn’t compete with those tools though and the distillation process did not access the weights.

  • FYI, Gemini was developed using stolen copyrighted works without author consent. The double standard is striking.

  • This is being downvoted but it's worth noting if only for the "be careful" aspect.

    That said, we need more people distilling models IMO, just be ready for a C&D and a ban