Comment by d4rkp4ttern

6 days ago

For every new interesting open model, I test PP (prompt processing) and TG (token generation) speeds via llama-cpp/server in Claude Code (which starts with at least 15-30K tokens of context due to the system prompt, tools, etc.), on my good old M1 Max 64GB MacBook.

With the latest llama-cpp built from source and the latest unsloth quants, the TG speed of Qwen3.5-30B-A3B is around half that of Qwen3-30B-A3B (with 33K tokens of initial Claude Code context), so the older Qwen3 is much more usable.

Qwen3-30B-A3B (Q4_K_M):

  - PP: 272 tok/s | TG: 25 tok/s @ 33k depth

  - KV cache: f16

  - Cache reuse: follow-up delta processed in 0.4s

Qwen3.5-30B-A3B (Q4_K_M):

  - PP: 395 tok/s | TG: 12 tok/s @ 33k depth

  - KV cache: q8_0

  - Cache reuse: follow-up delta processed in 2.7s (requires --swa-full)
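For reference, a sketch of the llama-server invocations behind those two configurations. This is illustrative, not my exact command line: model filenames, port, and context size are placeholders, though --cache-type-k/--cache-type-v and --swa-full are real llama.cpp flags.

```shell
# Qwen3-30B-A3B: standard attention, KV cache left at the f16 default
llama-server -m Qwen3-30B-A3B-Q4_K_M.gguf -c 40960 -ngl 99 --port 8080

# Qwen3.5: quantized KV cache, plus --swa-full so the full sliding-window
# cache is kept around -- without it, prompt-cache reuse doesn't work
llama-server -m Qwen3.5-30B-A3B-Q4_K_M.gguf -c 40960 -ngl 99 \
  --cache-type-k q8_0 --cache-type-v q8_0 --swa-full --port 8080
```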

Qwen3.5's sliding window attention uses significantly less RAM and delivers better response quality, but at 33k context depth it generates at half the tok/s of the standard-attention Qwen3-30B.
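To put those rates in perspective, here's a back-of-the-envelope latency estimate using the measured numbers above (the 500-token response length is an assumption for illustration):

```python
# Rough per-turn latency from the measured PP/TG rates above.
# PP cost dominates the first turn; after prompt-cache reuse, TG speed
# is what you actually feel on every follow-up turn.
def turn_seconds(prompt_tokens, response_tokens, pp_tps, tg_tps):
    return prompt_tokens / pp_tps + response_tokens / tg_tps

# First turn at 33k context, assumed 500-token response:
qwen3 = turn_seconds(33_000, 500, pp_tps=272, tg_tps=25)   # ~141 s
qwen35 = turn_seconds(33_000, 500, pp_tps=395, tg_tps=12)  # ~125 s

# Follow-up turns: cached prefix, only the delta is processed,
# then generation at each model's TG rate.
qwen3_followup = 0.4 + 500 / 25    # ~20 s
qwen35_followup = 2.7 + 500 / 12   # ~44 s
```

So the first turn is roughly a wash, but every subsequent turn is about twice as slow on Qwen3.5, which matches the feel in interactive use.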

Full llama-server and Claude Code setup details for these and other open LLMs are here:

https://pchalasani.github.io/claude-code-tools/integrations/...

I definitely get the impression there's something not quite right with Qwen3.5 in llama.cpp. It's impressive, but just a bit off. A patch landed yesterday that helped, though.