Accelerating Gemma 4: faster inference with multi-token prediction drafters

18 hours ago (blog.google)

Speculative decoding is an amazingly clever invention; it almost seems too good to be true (faster inference with zero quality degradation relative to the main model). The core idea: if you can get a smaller model to generate a short run of draft next tokens that have a reasonable likelihood of being correct, it's fast to check that they are actually correct with the main model, because you can run the checks in parallel. And if you think about it, a lot of next tokens are pretty obvious in certain situations (e.g. it doesn't take a frontier model to guess the likely next token in "United States of...", and a lot of code is boilerplate and easy to predict from previous code sections).

I always encourage folks who are interested in LLM internals to read up on speculative decoding (both the basic version and the more advanced MTP), and if you have time, to try implementing your own version (writing the core without a coding agent, to begin with!)
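
To give a feel for the shape of it, here is a minimal greedy-verification sketch. Everything in it is illustrative: the two logits functions are hypothetical stand-ins for model forward passes that return next-token logits for every position.

    import numpy as np

    def speculative_step(target_logits_fn, draft_logits_fn, ctx, k=4):
        # Draft k tokens autoregressively with the cheap model.
        seq = list(ctx)
        for _ in range(k):
            seq.append(int(np.argmax(draft_logits_fn(seq)[-1])))

        # One forward pass of the big model over ctx + drafts yields
        # next-token logits at *every* position in parallel.
        logits = target_logits_fn(seq)

        # Keep the longest draft prefix the big model agrees with.
        out = list(ctx)
        for i in range(k):
            expected = int(np.argmax(logits[len(ctx) - 1 + i]))
            out.append(expected)  # on mismatch, this is the big model's fix
            if seq[len(ctx) + i] != expected:
                break
        else:
            out.append(int(np.argmax(logits[-1])))  # all accepted: bonus token
        return out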

  • > it's fast to check that they are actually correct with the main model because you can run the checks in parallel.

    Can you give an intuition as to why it's faster? I would have thought that regardless of how many you run in parallel, the successful check still has to execute the full model over the full sequence, so you'd need exactly the same time? Or is it a process of elimination, so it terminates early once it eliminates the non-viable choices? (In which case, how do you guarantee the correct output was speculatively generated at all, to be the last survivor?)

    • The small draft model proposes a sequence of tokens d1 d2 d3.

      The big target model calculates

      P(d1 | ctx)

      P(d2 | ctx, d1)

      P(d3 | ctx, d1, d2)

      in parallel, where ctx is the existing context. If we were just greedy decoding it would be simple: just stop when the draft model doesn't predict the most likely token as judged by the target model. At that point, append the correct token from the target model and kick off both models again in parallel.

      In practice we aren’t using greedy decoding. We are sampling and we need to match the target model’s distribution. To do this, we accept tokens from the draft model probabilistically, which is possible because we have the logits of both the draft model and the target at that point. The ratio of their softmax probabilities is used for this.

      You are right that actually accepting tokens has to happen sequentially but that’s a heck of a lot faster than a forward pass.
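
      A sketch of that acceptance rule (the classic accept-with-probability min(1, p/q), resample-from-the-residual-on-rejection scheme); p_target and p_draft are the two softmax vectors at one position, and the function name is just illustrative:

        import numpy as np

        def accept_or_resample(draft_token, p_target, p_draft, rng):
            p, q = p_target[draft_token], p_draft[draft_token]
            if rng.random() < min(1.0, p / q):
                return draft_token, True  # accepted
            # Rejected: resample from the normalized residual max(0, p - q),
            # which keeps the overall output distribution exactly the target's.
            residual = np.maximum(p_target - p_draft, 0.0)
            residual /= residual.sum()
            return int(rng.choice(len(residual), p=residual)), False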

      1 reply →

    • AIUI you run the checks of several predicted tokens in lockstep, and the computation for each token is served by the same data loaded from memory. In normal execution, each token would depend on the previous one, precluding the parallelization and causing much more per-token memory traffic.

      So this is a case of trading off idle compute capacity that's waiting for the bottleneck (memory access).

    • An obscure fact about the transformer architecture is that it more or less computes the most likely next token for every single token in the context window at once. This is because the KV cache values needed to predict the next token are needed for every token, and the attention modules do nearly all the work, so once you've computed the KVs, running them through the final layers to get the target probabilities is nearly free.

      The reason it's designed this way is a bit subtle but it has the advantage during training that you can use a single block of 10 tokens to generate 9 training examples in parallel, so it's highly efficient. This efficiency is basically the main benefit of transformers - the algorithm parallelizes really well and that's what allowed the scale up to large language models as opposed to the previous reality of just language models.

      The blog post does discuss why MTP is faster but it's maybe a bit hard to understand if you haven't studied LLM internals. During inference the hardware has arithmetic units idling because they spend so much time waiting for the weight matrices to get moved closer to the processors. Because data movement and computation can be overlapped, if you can reuse the same loaded data for multiple calculations at once you're winning - it's free latency-wise because you're just exploiting previously idle resources (it's not free in terms of energy).

      Speculative decoding and MTP exploit this to run the model in parallel on several tokens at once. Say your context window contains "The United". The KV cache has been populated by the main model for this set of tokens. The draft model is given "The United" and predicts " States of America" in one forward pass (this part, predicting multiple tokens at once with a single pass, is the MTP part). Then the main model is given the KV cache from last time along with " States of America". In its own forward pass it can then compute in parallel the completions of "The United", "The United States", "The United States of", and "The United States of America" (the last one might be an EOS token indicating it wants to stop talking). That's the speculative decoding part.

      Now you decode the main model at each position (look at the token probabilities and pick one according to some decoding strategy). It's possible the main model didn't pick " States" at all, or picked " States" but then diverged, e.g. if it wants to say "The United States is a country". So you just keep the tokens that match and toss everything starting from the first one that didn't. Repeat.

      The parallelism comes almost for free because the same weight matrices can be reused multiple times before they're swapped out for the next.

  • So we've basically taken the concept of branch prediction from CPUs and applied it to LLMs?

    • The concept of predicting future elements in a series is not specific to CS. It's older than computers.

    • Well, the TPUs they're running on don't have branch prediction, so that had to end up somewhere in the stack.

  • Naively it seems odd that running multiple checks in parallel is faster than just running the autoregressive model multiple times in series. It’s the same amount of compute right?

    But I think the key is that in the standard autoregressive case we get memory bandwidth bound, so there are tons of idle compute resources. And so checking multiple tokens is cheap because we can batch and thus reuse the read weights for multiple tokens.

    The verification step is similar to a prefill with a small batch size. The difference is what we do with the generated logits.

    • That's correct, and yes - not less compute total on the main model (actually slightly more, since checking failed draft tokens costs you compute), but faster, because inference is memory-bandwidth bound. And like you, I also think of it as a "mini prefill" (but on top of the existing KV cache, of course); the code is very similar to prefill if you implement a simple toy version yourself.

      Most of the complexity in implementing a simple toy version comes from having to get the KV cache back into a good state for the next cycle (e.g. if only the first half of your draft tokens were correct).
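
      That cleanup is mostly truncation. A minimal sketch, assuming the cache is a per-layer list of (K, V) tensors in a [batch, heads, seq, head_dim] layout (a common but not universal convention):

        def rollback_kv(kv_cache, n_kept):
            # Drop cache entries past the accepted prefix so the next
            # draft/verify cycle starts from a consistent state.
            return [(k[:, :, :n_kept, :], v[:, :, :n_kept, :])
                    for k, v in kv_cache]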

    • > But I think the key is that in the standard autoregressive case we get memory bandwidth bound, so there are tons of idle compute resources.

      Right, this is the same way batching works. It's "free" until we exhaust available compute resources, at which point decode throughput becomes compute bound. (This is a good place to be, because scaling out compute is a lot easier than adding fast VRAM.) This is why MTP is mostly useful when you have one or few users, which means compute is abundant. When you're running large batches you're better off using that compute to grow your batch size.

      Of course, batch size is usually limited by things like bulky KV caches. So perhaps MTP has some residual use in that setting. But if you're sharing cached context in a subagent swarm, or running a model like the recent DeepSeek V4 with its tiny KV cache, you can go a lot further in processing a larger batch.
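
      Back-of-envelope for the bandwidth argument, with purely illustrative numbers:

        # A 26B-param model at 4-bit streams ~13 GB of weights per decode
        # step. At ~1 TB/s that's ~13 ms per step whether the step scores
        # 1 token or 8 draft tokens, since the weight reads dominate.
        weights_gb = 26e9 * 0.5 / 1e9   # 4-bit ~= 0.5 bytes/param
        bandwidth_gbs = 1000            # hypothetical ~1 TB/s accelerator
        print(f"~{weights_gb / bandwidth_gbs * 1e3:.0f} ms per decode step")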

      3 replies →

I don't see it talked about much, but Gemma (and Gemini) use enormously fewer tokens per output than other models, while still staying within arm's reach of top benchmark performance.

It's not uncommon to see a Gemma vs Qwen comparison where Qwen does a bit better but spent 22 minutes on the task, while Gemma aligned the buttons wrong but only spent 4 minutes on the same prompt. So taken at face value, Gemma is now underperforming leading open models by 5-10%, but doing it in 1/10th the time.

  • Anecdotally, the $15/month basic Gemini plan allows coding all day. I'm not hitting the limits or needing to upgrade to $100/month plans like other people are doing with Claude or Codex.

    Caveat: Gemini has been dumbed down a few times over the last year. Rate limits tightened up too. So it might not be this good in the future.

    • In the past I've usually found that Gemini (Pro, Flash) would get stuck on a problem and then seemingly start some kind of random search, trying this and that, just burning through tokens. When this happened I'd switch (in Antigravity) to Claude Sonnet 4.6, and it would cut right to the chase and find the problem quickly. But the other day I was out of Claude tokens, so I went back to Gemini 3.1 Pro and asked about a Verilog simulation problem that Claude had been stuck on - and it figured it out in a few minutes.

      2 replies →

    • Where are you using it? Is Gemini CLI at a usable state? It was a frustrating, miserable experience last time I gave it a shot.

      Antigravity seems significantly better in comparison, but with lower usage limits. If I run out, I usually don't bother switching to Gemini CLI.

      16 replies →

    • I got really burned by that quality reduction. I subscribed to the AI Pro level and was using it quite a bit, but I stopped because I had to be super attentive to the output; it would make simple mistakes. It was really a shame, because for a while there Gemini was the best, and the AI Pro level allowed you enough usage to use it throughout the day as long as you weren't hammering it.

    • Just a heads up that you cannot opt out of training on any of their "personal" plans (including Ultra) last time I checked. Both Claude and ChatGPT allow you to opt out of training on their paid plans.

      It would be nice if this was a bit more obvious and clear too.

    • I find Gemini to be quite good / acceptable at code review, design, and design review, but it's notably far behind Claude Code for implementation.

      Are you having better results?

      Codex is fast and decent, but I REALLY have to stay on top of it. The amount of times it makes executive design decisions on the fly to completely break everything is way too high.

      1 reply →

    • No, $15/month is not enough for all day; please don't share wrong info. The 3.1 Pro CLI sometimes waits 20-30 minutes thinking, and it's by far the worst compared to the others. It mostly runs out after a few hours of work, whereas OpenAI gives you six times that in 24 hours; Gemini resets once a day. It is literally lazy and very often does half the work. I'm a power user of all the top models from the top 3 AI companies, and only Gemini 3.1 waits this long and is this slow. Even Gemini Pro 3 and Pro 2.5 were not like this at all.

      3 replies →

    • I don't know if people know this, but using it all day (say 8h) costs between 0.7 and about 14 kg of CO2 in the US, depending on which region's grid power they use (or, if they run off generators, the gCO2e/kWh might be very different from these bounds). With 225 working days per year (assuming no night or weekend use), in the worst region that's 50% of the CO2 the average European emits in a year, just for this assist function; in the best region (a few counties currently running on 100% hydropower) it makes no difference, of course, because the energy is running down the hill whether you use it or not. Maybe it could otherwise have been exported or stored, but there's only so much interconnect and storage.

      Edit: and this $15 subscription (again assuming 225×8h use per year, divided by 12 months) uses the equivalent of about 150€/month worth of electricity at the rate I'd pay at home. That sounds close to the cost price (ignoring capex on the servers and model training) that Google would be able to negotiate with electricity providers. I'd be interested in how this works out for them, if someone knows.

      17 replies →

  • On Dwarkesh's podcast, Dylan Patel from SemiAnalysis said that Google can currently afford to have larger models than competitors because of access to much more compute, TPUs, etc.

    That could explain the token usage difference, because larger models usually use fewer tokens for the same unit of intelligence.

  • Gemini models, even if not so good at coding, are also competitive with GPT-5.5 and Claude Opus 4.7 in a lot of tasks while having considerably fewer parameters.

  • Claude is very fashionable right now, but I've never had any problems or felt the need to switch.

    Maybe after Google I/O, more people will catch on to how good it is.

  • This is true, we have the numbers to back it up on https://gertlabs.com/rankings?mode=oneshot_coding (check out the efficiency chart too)

    GPT 5.5/5.4 are the smartest models, but at great cost in token / code bloat. Qwen 3.6 Max strikes a good balance. But Gemma 4 26B writes some really efficient code, with great results considering the model size. Things do start falling apart at higher context lengths.

  • I think you can see this in one of two ways: you could also consider it a miracle that the Qwen models are able to perform so well when trained on inefficient wrapper code data.

  • One of the consequences of Gemma's speed is that you can run it on a GPU that's technically too small for it. I've run it on my 4070, and while the output wasn't blazingly fast, it was usable. (Though I haven't used it for anything complex yet. I'm sure that will be different.)

  • It won't really do much if you try to code with it. I plugged it into Xcode and it failed to change a variable.

MTP support is being added to llama.cpp, at least for the Qwen models (https://github.com/ggml-org/llama.cpp/pull/20533), and I'd imagine Gemma 4 support will come soon.

The performance uplift on local/self-hosted models in both quality and speed has been amazing in the last few months.

  • There is a newer PR which will probably be merged soon: https://github.com/ggml-org/llama.cpp/pull/22673

  • A few days ago I switched again from Qwen3.6 to Gemma 4 - for personal use I've experienced better average performance with the 26B version of the latter than the 27B of the former.

    For someone who's been running local models for a long while, these are very very exciting times.

    • Oh, that's fascinating. 3.6 27B is pretty damned good, but slow in wall-clock time on my DGX Spark-alike. It generates huge reams of thinking before it gets to the (usually correct!) answer, so wall-clock time is rough even at ~20 tk/s.

      I'm surprised the 26B-A4B is better? It should be faster too, interesting. I'm excited to try 31B with MTP, because MTP-2 is what makes 27B bearable on the GB10.

      What are you using it for? Agent-based coding, or something else?

    • I've been thinking about doing more of this too. What spec machine are you running? And are you using long-running autonomous agents or more of the IDE/co-pilot style of collaboration?

  • I have a dumb performance question.

    Why, when asking a model to change text in a minor way, are we not asking it to generate the operational transformations necessary to modify the text, and then just executing the OT on the existing text, vs. reproducing every token? Maybe tools are doing that more than I realize?

    • The only thing a model can output is tokens; to achieve this, a tool that converts tokens into operational transformations is required. For example, I have an ast-grep skill: it instructs the model to generate ast-grep rules and runs ast-grep to perform the file modifications.

      3 replies →

    • The simple answer is: because it is not necessary to achieve the same final output. Most LLMs today are trained as autoregressive token predictors. They fundamentally can't work any other way. But we know how to train them really well and they have many applications beyond editing text. Diffusion LLMs exist too, which work a bit closer to what you describe, but they are not yet at the same level of intelligence since training methods are not that mature and they are generally less flexible as well.

      4 replies →

    • I've seen Claude use sed to edit files on other hosts instead of copying the file back and forth to edit it. Not quite full blown OT but it's going in that direction.

  • I don’t exactly know where MTP inference fits within the inference stack, but does someone know whether it’s possible to implement it for the MLX universe?

  • Thanks for the link, it took qwen3.6-27B-q8 w/ 256k context on my RTX A6000 from ~20 t/s to 55 t/s. Prefill is mysteriously slower, however; but prefill is still so much faster than decode that I think I'm bottlenecked on output most of the time.

  • How does this get added in practice?

    • According to the linked PR, the original model does come with MTP, which is another "head" (= output path) in the same model that (supposedly) runs faster.

      The current implementation ignores that head, but the PR lets the tool recognize it, plus does the proper integration (run the MTP head, then verify the result against the slower main path, I believe).

      1 reply →

Google is singlehandedly carrying western open source models. Gemma 4 31B is fantastic.

However, it is a little painful to try to fit the best possible version into 24GB of VRAM with vision plus this drafter soon. My build doesn't support any more GPUs, and I believe I would want another 4090 (overpriced) for best performance, or otherwise to just replace it altogether.

  • You could keep the multimodal projector (understanding of audio, images & PDFs) in system RAM with `--no-mmproj-offload` in llama.cpp. Of course, it is then not GPU-accelerated, but you save its VRAM.

  • Qwen is still better than Gemma, though. Also, you can tune it more for different tasks, which means you can prioritize thinking and accuracy versus inference speed.

Watching the computer write text sort of reminds me of using a modem to call a BBS in the old days. This seems like going from 300 baud to 1200 - a significant improvement, but still pretty slow, and someday we will wonder how we put up with it.

  • This is something I've been thinking about for a while...the current state of things really does feel kind of like the dialup era, wondering what the "broadband" era could look like. Watching tokens stream in is reminiscent of watching a jpeg load a few rows of pixels at a time, and the various different loading and connecting animations that applications implemented before things got fast enough to make them less relevant.

    Some of the work in that direction like Cerebras or Taalas have been doing is an interesting glimpse of what might be possible. In the meantime it's a fun thought experiment to wonder about what might be possible if even current state of the art models were available at like, a million tokens per second at a very low cost.

    • Groq was the preview of the broadband era of LLMs for me. I remember asking a question on the demo site and the answer text showed up near instantly. Far faster than I could read. This was ~1 year ago and pre-acquisition.

  • You're right about it being reminiscent of the dial-up era, but I don't believe it's 300 to 1200; it's more like 4800:

    Modem vs Claude according to Claude:

    300 @ 2368 characters - 1m 19s

    1200 @ 2368 characters - 19.7s

    2400 @ 2368 characters - 9.9s

    14.4K @ 2368 characters - 1.6s

    33.6K @ 2368 characters - 705 ms

    56K @ 2368 characters - 447 ms

    Claude @ 2368 characters - 7.9s
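
    For what it's worth, the modem times follow from 10 bits per character (8N1 framing), with 56K capped at its real-world ~53 kbps:

      chars = 2368
      for bps in (300, 1200, 2400, 14_400, 33_600, 53_000):
          print(f"{bps:>6} bps: {chars * 10 / bps:6.2f} s")  # e.g. 300 -> 78.93 s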

  • There was a startup posted here which built custom hardware that let the AI respond instantly. Thousands of tokens per second.

The "how to get started" asks you to read "documentation" which turns out to be a sales blurb. Am I missing something?

I'm starting to think that Google's strategy is a bit different than the other frontier providers'.

Focusing more on performance-per-compute efficiency than on pure performance. And maybe that's why Gemini is (seemingly) lagging behind?

Other providers are hitting capacity and hitting the limits of subsidising their inference.

Google strategy seems to be about scaling and distributing these models to their existing billions of users.

  • I don't view Gemini as falling behind. I actually view it as a somewhat distinct type of intelligence compared to the latest iterations of GPT5 and Claude. The latter are, increasingly, very focused on productivity and automation of work tasks. They're optimized for long, agentic, self-correcting reasoning loops. Gemini is very different: it feels to me like a much smarter baseline model, with much deeper intuition (especially its Deep Think mode), but it's not nearly as good at long-range self-corrective agentic loops. For months now my workflow has been to use Gemini for creative leaps and insights, while preferring Codex or Claude or GPT5.5 Pro for routine or precision work.

  • > Google strategy seems to be about scaling and distributing these models to their existing billions of users.

    Yeah, part of that is installing a model in Chrome for millions of users without consent.

  • Isn't that where everyone's strategy is shifting?

    • Yes, but I think Google was playing that strategy from essentially day 1, or very early in this AI race, whereas the others are there now because of their lack of access to compute.

      The general narrative I would read on HN and elsewhere was that Google would be able to outlast/outcompete OpenAI and Anthropic because Google had both more money and more compute, playing the game of subsidizing their most capable models to capture market share longer than the VCs could.

      But instead I feel like Google opted out of that much earlier. Shifting their focus on efficiency and scaling much much earlier. Flash and Gemma being where Google was actually ahead of the competition while everyone was focused on bigger more capable models.

      In the last month the environment has changed, compute is constrained, costs for consumers are way higher than expected. Copilot pretty much imploded, and I'm guessing both Anthropic and OpenAI are starting to feel the squeeze.

      My personal opinion is that this was necessary because integrating AI into products like AI Overviews and Search meant scaling to billions of users was a requirement right out of the gate. And there's not enough money/compute, no matter who you are, to use frontier models for that.

      2 replies →

I recently set up the 26B A4B model on vLLM on an RTX 3090 (4-bit) after a hiatus from local models. Just completely blown away by the speed and quality you can get now for a sub-$1k investment.

I tried first with Qwen, but it was unstable and had ridiculously long thinking traces!

  • Some of the early quants for qwen3.6 were broken. It's still finicky, but with a little hand-holding it's crazy.

    Local models are the future; it's awesome.

  • The 31B is surprisingly fast too, for a dense model. It runs tg (token generation) at least twice as fast as it ought to on my machine compared to other 30Bs, probably due to the hybrid attention, I guess. Ingestion is somewhat slower, though.

In my testing, the Gemma 4 31b model had the biggest speed boost in Ollama w/ the MLX runner for coding tasks (about 2x). Unfortunately you'll need a pretty beefy Mac to run it, because quantization really hurts the acceptance rate. The three other, smaller models didn't perform as well, because the validation time of the draft model ate up most of the performance gains. I'm still trying to tune things to see if I can get better performance.

You can try it out with Ollama 0.23.1 by running `ollama run gemma4:31b-coding-mtp-bf16`.

Has anyone managed to get this to work in LM Studio? They've got an option in the UI, but it never seems to let me enable it.

Really excited to try this once it is merged into llama.cpp.

Gemma 4 26B-A4B is much quicker on my setup vs Qwen3.6-35B-A3B (by about 3x), so the thought of a 1.5x speedup is tantalizing.

Have tried draft models with limited success (the smaller 3B draft model on top of a dense 14B Ministral model already introduced too much overhead).

  • On vLLM with a 5090 I get 120-180 TPS with the AWQ 4-bit quant + MTP speculative decoding.

    For gemma4 26B, same quantization, I get >200TPS.

    Also note that Qwen is extremely inefficient in reasoning; the reasoning chains are ~3x longer than Gemma's on average.

Sounds like a game changer if I see that kind of speedup on my hardware. So far I've preferred Qwen 3.6 because of its better tool handling, even though Gemma 4 is faster, but I saw they've updated the model template and that's supposed to be better now. Looking forward to trying this with llama.cpp.

  • Gemma 4 has a specific problem with tool calls that affects most runtimes. Fixes for Ollama and vLLM are being worked on right now.

    • The chat templates of all Gemma 4 models were updated 7 days ago to fix some bugs related to invoking tools.

      So any tests done with models that have not been updated in the last few days are no longer relevant; they must be repeated after updating the models and regenerating any derived file formats, like GGUF files.

    • I read somewhere you need to drop the temp to 0.1 on Gemma for tools.

      Not sure why (too amateur, sorry).

      Though I think Qwen was natively trained on tool calling.

This might be silly, but… since the assistant models are so much smaller than the full models, what if we just use those smaller models?

Any idea how much worse they would be? Or is the issue that their error will really diverge as you accept more of their tokens?

  • I think they'll be far worse on their own.

    Predicting "America" in "The United States of ..." is a different task from predicting the whole sentence.

    So the small model is laying the blocks, and the bigger model is cementing them in place or kicking them down. The bigger model's course correction is what keeps the smaller model's predictions relatively on track.

  • I assume these are just output layers that are trained on the hidden state from the larger model - that's how MTP works. It's not a separate drafting model.

So this is like branch prediction in CPUs? Except we have probability baked into the model itself, so it's even more reliable.

  • similar idea, but the failure mode is better. a branch mispredict burns cycles. a bad guess here usually just means no bonus tokens. https://arxiv.org/abs/2211.17192

    • As long as you're not bound on parallelism or bandwidth, it's "free"; but if you're constrained on either resource, your lighter predictor model just needs to save you more cycles than it consumes on average.

    • A bad guess still costs cycles, but the penalty is currently smaller than a branch mispredict. If we add some kind of pipelining, though - something downstream that assumes the speculative decode is correct - then it'll be expensive again.

I find it puzzling that Google doesn't actively promote its own cloud for Gemma 4 inference. Open source is great, love it. But shouldn't Google want me to be able to use and pay for it through Gemini and Vertex?

  • A key thing to understand about Google is that under the hood is a collection of extremely powerful fiefdoms (many of which would stand as their own fortune 500, hell 100) that are all trying to act in their own interest. It's almost closer to a conglomerate than a company, where Google needs to bid internally against external players for resources.

    If Gemma 4 is less lucrative than Claude to the Google Cloud kingdom, the Cloud kingdom will want you using Claude.

    • interesting. presumably this is why google is selling TPUs externally instead of hoarding them for deepmind.

  • There is a decent YouTube video here going through what Google's logic with Gemma overall might be:

    https://www.youtube.com/watch?v=sXgZhGzqPmU

    As for why the cloud would offer it - I think it's just an effort to promote the brand. The Gemmas are pretty small, so they can host them without it being a major drain on the company. They have the infra anyway.

  • I wonder if, for a model this small with a permissive license, it might not be worth their time to host a commercial-grade inference stack?

    Might be easier to chuck it over the fence and let other providers handle it, as it'll run on almost any commercial-grade card?

    Also speculating, but I wonder if it might also create a bit of a pricing problem relative to Gemini Flash Lite, depending on serving cost and quality of outputs?

    As a comparison, despite being SotA for their size, the smallest Qwen models on OpenRouter (27b and 35b) are not at all worth using, as there are way bigger and better models for a lower price on a per-token basis.

    • If you believe a lot of the metrics, Gemma 31B is much better than Flash Lite. It seems like I should be able to pay Google to use it, and there should be at least a clear call to action for how I can do that, but it's missing from the blog post entirely.

    • I don't know what you're talking about; I replaced an older GPT-4o with a finetuned Qwen. There is a huge amount of "AI" that can be done with those models, or partly by those models. A huge number of people would not notice the difference. And if you prepare the context correctly, an even bigger slice of people would not notice.

      3 replies →

  • Makes me wonder about the partnership with Apple to use Gemini. Safe to assume Apple has a preference for on-device, and the best open model (for consumer hardware, at least) is a Google property with an Apache 2 license. Interesting dynamic, and seemingly a bright spot in the market.

  • You can use it for free with Google AI studio (free tier or paid tier accounts with different limits). Or use the paid version from Vertex AI which is around 3x cheaper than Gemini 3 Flash.

    I'm using Gemma 4 31B in my app with 5 agents, 1.5k requests per day, each.

  • What do you mean? It just works with Google AI Studio.

    • Part of the issue is Google's complex web of products. There's Vertex, Gemini, Google AI Studio, Google Edge. But I literally had trouble finding out how to use this with my existing paid Gemini API account.

Cloudflare offers excellent service for many of the open-weights models. It's fast, cheap, and simple to set up. I can highly recommend it as an LLM provider.

They serve gemma-4-26b-a4b-it.

I am getting 21 t/s on a Fold 7; 21 x 1.8 = 37.8 t/s, compared to the M1 Max's 54 t/s - that is impressive.

I wonder what latency and tok/s this model would be capable of on Groq or Cerebras. I have a couple of LLM-driven games [1][2] where speed is really important to the experience. Currently the best performance I can get is the gpt-oss models on Groq or Cerebras, but they need quite a bit of extra context and tools to correct for mistakes. I'm making a bet that I'll be able to get the same performance much cheaper in the next few months.

[1] https://sleuththetruth.com [2] https://lextension.net/

How is this different from the speculative decoding that we had before?

You could pair a big and a small model, like Qwen 32B with Qwen 4B, and get that same dynamic of the small model generating tokens and the big one "certifying" them.

The blog says something about re-using the big model's data?

  • Multi token prediction is the same thing as speculative decoding. This is mentioned in the Google pages describing their MTP implementation.

    Google has now provided small models for each of the previous Gemma 4 models, e.g. "gemma-4-26B-A4B-it-assistant" for "gemma-4-26B-A4B-it".

    The difference vs. Qwen is that here each small model is not some general-purpose smaller model, but a model that has been optimized specifically for this task, to predict the output of the bigger model with which it is paired.

    This specialization and optimization of the Google "gemma-4-*-assistant" models ensures that they are much smaller and thus much faster than general-purpose small models.
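
    If these can indeed be driven like ordinary paired draft models, something like Hugging Face's assisted generation should work; a hedged sketch (whether the -assistant checkpoints support this path is an assumption on my part):

      from transformers import AutoModelForCausalLM, AutoTokenizer

      tok = AutoTokenizer.from_pretrained("google/gemma-4-26B-A4B-it")
      target = AutoModelForCausalLM.from_pretrained("google/gemma-4-26B-A4B-it")
      drafter = AutoModelForCausalLM.from_pretrained("google/gemma-4-26B-A4B-it-assistant")

      inputs = tok("The United States of", return_tensors="pt")
      # assistant_model enables speculative (assisted) decoding in transformers
      out = target.generate(**inputs, assistant_model=drafter, max_new_tokens=32)
      print(tok.decode(out[0]))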

    • So then these models could be used by llama.cpp today with the -md switch?

      Interesting, must try tomorrow.

  • As far as I can tell, MTP differs from regular speculative decoding in that the small model is trained to consume and operate on the big model's hidden state when making its predictions.
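
    A rough sketch of that coupling, loosely in the style of DeepSeek's MTP module (layer sizes and structure are illustrative, not Gemma's actual architecture):

      import torch
      import torch.nn as nn

      class MTPHead(nn.Module):
          """Drafts token t+2 from the target model's hidden state at t
          combined with the embedding of token t+1, rather than re-deriving
          everything from raw tokens like a standalone draft model."""
          def __init__(self, d_model: int, vocab_size: int):
              super().__init__()
              self.proj = nn.Linear(2 * d_model, d_model)
              self.block = nn.TransformerEncoderLayer(d_model, nhead=8, batch_first=True)
              self.lm_head = nn.Linear(d_model, vocab_size)

          def forward(self, main_hidden: torch.Tensor, next_tok_emb: torch.Tensor):
              h = self.proj(torch.cat([main_hidden, next_tok_emb], dim=-1))
              return self.lm_head(self.block(h))  # logits for the draft token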

these are the updated models:

google/gemma-4-31B-it-assistant

google/gemma-4-26B-A4B-it-assistant

google/gemma-4-E4B-it-assistant

google/gemma-4-E2B-it-assistant

  • for anyone wanting a glossary to explain the naming scheme here:

    E4B = 4B effective parameters (using per-layer embeddings)

    E2B = 2B (like above)

    it = instruction tuned (rlhf and all that jazz)

    assistant = Multi-token drafters (the new 2x speed up)

I'm not sure I understand how this works: https://huggingface.co/google/gemma-4-E4B-it-assistant has 78.8M parameters, while the standard variant https://huggingface.co/google/gemma-4-E4B-it has 8B parameters.

Is gemma-4-E4B-it-assistant a model I can use stand-alone or a model I need to use in combination with gemma-4-E4B-it?

  • You need the regular Gemma model as well. You can think of this as a really small distillation of the original: useless on its own because it is often wrong, but it is right more often than not. And because verifying a transformer's output can be done faster than generating it token by token, we can effectively speed things up by using this draft model and only spending the big model's compute where the draft was wrong.

    This is an oversimplification, but tl;dr: you need both, yes.

    • Thank you!

      I already played with Gemma4 on oMLX a while ago. When I have some time I'll check if it supports running MTP models and play a bit more

Nice, will run it later against qwen3.6 27b; the speed was one of the reasons why I was running Qwen and not Gemma. The difference was big - there is some magic that happens when you have more than 100 tps.

Does this mean there will be new Gemma 4 models released with MTP, or are they already available in existing models + quants?

  • For each of the 4 gemma-4-*-it models, an associated small model gemma-4-*-it-assistant has been published, to be used for MTP.

    If a GGUF file is generated for MTP, it must include both the big model and the small model. There was a reference in another comment to a PR for llama.cpp, which also included updates for the Python program used for conversion from the safetensors files, which presumably can handle combining the two paired Gemma 4 models.

  • They have now been released on e.g Hugging Face with model suffixes "-assistant".

3x faster inference means cheaper API costs too. For a solo dev building AI, this matters a lot.

  • Not necessarily. Servers serving the model likely have enough traffic that they are batching decodes already. MTP reduces latency and increases efficiency only when the server can't batch enough concurrent streams to be compute bound rather than memory bound.

So much faster inference with no quality degradation? All that for just some small memory overhead (drafter models are <1B it seems)?

  • MTP requires a separate KV cache, so there is more memory overhead than just the weights of the MTP model, but it's a manageable amount.

    • From the linked post, it didn't read like a separate KV cache was needed:

      > The draft models seamlessly utilize the target model's activations and share its KV cache, meaning they don't have to waste time recalculating context the larger model has already figured out.

      1 reply →

  • Is it really no quality degradation?

    I'm curious where my understanding is wrong, but I didn't think you necessarily got the exact same output with how I understand speculative decoding to be used. I thought that if the small model produces tokens that are "good enough", meaning within the top few tokens the larger model produces, they're accepted.

    I thought it doesn't necessarily have to produce the exact same token the larger model would have produced to be accepted (and that requiring this would reduce the hit rate by a lot), just one that the top model could have produced with whatever top-k and temperature settings.

    • It really is. This is because LLMs with a single output/user are strongly bandwidth limited. Although the hardware can generate multiple tokens simultaneously, it is slowed down if the tokens depend on each other, as is the case with regular text generation.

      The draft model essentially predicts the next token quickly, enabling you to start generating the subsequent token in parallel. If the guess is right, the second generated token is correct. If it is wrong, the second generated token is also potentially wrong, so it must be generated again using the correct prior token obtained through the big model.

      A poor draft model will simply slow down the process without affecting the output.

      7 replies →

    • Speculative decoding batches multiple completions over all possible outcomes (0/1/2 draft tokens accepted) and sees if the big model deviates at any point -- thus verifying each token. So there's no difference in output.

  • It's based on taking advantage of spare compute if you have it. A tiny model generates a few steps ahead first, then the large one runs batch inference on all of those at once as if you are at that point in time. If they all check out afterwards it jumps ahead, otherwise it discards and goes onto the next one.

    Not sure about this implementation, but conceptually it only works well on very capable GPUs with very predictable output. Typical speedup is about 30%; not sure how Google is claiming 250%, which is ridiculous.

    And if you don't have enough compute, then you get negative speedup from all the extra overhead.

Tested the gemma4 26 MoE 4-bit quantized GGUF on llama.cpp following these guides, with mmap'd I/O on a 16GB MBP, and it was unbearably slow (0.0 t/s).

This is sort of similar to Ethereum and maybe a bit of zero knowledge proofs but with the LLM handling both sides.

Did DeepSeek come up with MTP? It was listed prominently in their recent paper as being carried forward from the previous release.

  • i think this is mixing two separate ideas. MTP is the training-side piece. speculative decoding is the inference trick. DeepSeek V3 used MTP as an auxiliary loss. the 2022 Google paper is speculative decoding. now Google is combining them. https://arxiv.org/abs/2404.19737

    • Oh... so MTP is not speculative decoding? The (T)oken (P)rediction made me think it was on the inference side. I shall read the paper.

      Edit: Ok, I understand now. You are saying that MTP has two aspects. 1) The training (for the mini-models to generate tokens), and 2) The actual speculative decoding implementation on the inference side (which uses those trained mini-models).

don't know about this guy, but qwen3.6:27b with the UD 4bit quant and little-coder/pi has been amazing. the first local LLM experience that can do actual meaningful work

Is Google's local model strategy tuned to taking the big AI cloud labs down a notch?

  • dumping money into Gemma and shorting new data center buildouts is a level of Corporate Vision that ends up in an HBS case study

If someone wants to work with Gemma and doesn't want to deal with Ollama or configs - there is (my baby) https://airplane-ai.franzai.com/

Beta, but usable.

  • LM Studio (for example) is free, can you pitch me on your USP vs. it?

    • Ease of install (one download), zero configuration, zero online access by design - there will never be web search, never any kind of tracking; your prompts stay on your device, so you can totally put in user data, confidential contracts, ...

      Plus, over time, the harness - the coming version has a hotkey for screen capture, and the next release will have support for native Excel and DOCX export.

      There is value in being offline by design.

      1 reply →

  • The biggest pain currently is waiting on Apple for the next release with updated macOS App Store screenshots.

Gemma 4 is really a beast. The 31B version is totally usable, e.g. for cases when I'm bored without internet.

I found that Gemma 4:26b makes way more mistakes compared to Qwen and Gemma 3. Gemma 3 27b QAT was my go-to for some time, as it was quite fast. Qwen is still king for a balance of accuracy and inference speed.

Gemma:31b was more accurate, but the speed was horrendous.