Comment by docheinestages
4 hours ago
Has anyone tried using this with Claude Code or Qwen Code? They both require very large context windows (32k and 16k tokens respectively), which is painfully slow on a Mac M4 with 48GB serving the model via LM Studio.
The context window size increase for Qwen3.6 models isn't that bad (e.g. you can likely fit the max context well within the 48GB), but MacBook prompt processing is notoriously slow, at least up through the M4. The M5 got some speedup, but I haven't messed with it.
One thing to keep in mind is that you don't need to fit the model fully in memory to run it. For example, I get acceptable token generation speed (~55 tok/s) on a 3080 by offloading the expert layers. I don't remember the prompt processing speed, but prompt processing is generally compute bound, so it benefits more from an actual GPU.
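For anyone wanting to try the expert-offloading trick, here's a rough sketch of how it's commonly done with llama.cpp's tensor-override flag. The model filename, context size, and tensor regex below are illustrative assumptions, not the commenter's exact setup; adjust them for your own GGUF model:

```shell
# Sketch: keep attention/shared weights on the GPU but pin the MoE
# expert FFN tensors (names matching ffn_*_exps) to system RAM.
# Generation stays fast because only a few experts activate per token.
llama-server \
  -m qwen3-moe-q4_k_m.gguf \               # hypothetical model file
  -ngl 99 \                                # offload all layers to GPU...
  --override-tensor ".ffn_.*_exps.=CPU" \  # ...except expert tensors
  -c 32768                                 # context size agent tools ask for
```

The regex targets the expert feed-forward tensors specifically, since those dominate a MoE model's size but only a fraction of them fire per token, so keeping them in RAM costs much less generation speed than offloading whole layers would.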
I had the best success yet earlier today running https://pi.dev with a local gemma4 model on Ollama on my M4 Mac with 48GB RAM. I think pi is a lot lighter than Claude Code.
Try running with Open Code. It works quite well.