
Comment by yougotwill

7 hours ago

Can I ask how much RAM of the 32GB does it use? For example can I run a browser and VS Code at the same time?

1. The 35B model is a "Mixture of Experts" (MoE) model, so the earlier commenter's point that it is "larger" does not mean it is more capable. MoE models only have certain parts of themselves active at a time (for 35B-A3B, only 3 billion parameters per token, vs. 27 billion for the model this post is about), which speeds up inference. So if you're exploring these things for the first time, Qwen3.6-35B-A3B is a good choice, but it is likely not as capable as the model this thread is about.
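To make the "only some parameters active" idea concrete, here's a toy sketch (pure Python; `route` and `active_params` are illustrative names, and the sizes are made up, not the real Qwen architecture): a router scores all experts per token and only the top-k run, so the compute cost tracks the active parameters, not the total.

```python
# Toy Mixture-of-Experts routing sketch. Per token, a router scores every
# expert and only the k highest-scoring experts actually run.
# All numbers are illustrative, not the real model's architecture.

def route(scores, k):
    """Indices of the k highest-scoring experts for one token."""
    return sorted(range(len(scores)), key=lambda i: -scores[i])[:k]

def active_params(n_experts_used, params_per_expert, shared_params):
    """Parameters actually touched per token in a simple MoE layer."""
    return shared_params + n_experts_used * params_per_expert

# Router picks 2 of 5 experts for this token:
print(route([0.1, 0.9, 0.3, 0.7, 0.2], k=2))  # → [1, 3]

# Total capacity (all 64 experts) vs. what one token actually uses (4):
print(active_params(64, 3_000_000, 1_000_000))  # → 193000000
print(active_params(4, 3_000_000, 1_000_000))   # → 13000000
```

This is why an MoE model can be "bigger" on disk and in RAM while still being cheap per token: every expert's weights must be loaded, but only a few experts' worth of math happens per step.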

2. It's hard to cite precise numbers because it depends heavily on configuration choices. For example:

2a. On a MacBook with 32GB of unified memory you'll be fine. I can load a 4-bit quant of Qwen3.6-35B-A3B at max context length using ~20GB of RAM.
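The ~20GB figure checks out with back-of-the-envelope math (`quant_weight_gb` is just a helper name here; real "4-bit" GGUF quants mix precisions, so ~4.5 bits/param is a typical effective rate, and KV cache and runtime overhead come on top):

```python
def quant_weight_gb(n_params, bits_per_param):
    """Approximate weight size in GB for a given quantization level."""
    return n_params * bits_per_param / 8 / 1e9

# ~35B parameters at an effective ~4.5 bits/param:
print(round(quant_weight_gb(35e9, 4.5), 1))  # → 19.7
```

So the weights alone land near 20GB, which is why this fits comfortably in 32GB of unified memory but is out of reach for most single consumer GPUs.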

2b. That 20GB would not fit on many consumer graphics cards, but there are still things you can do ("expert offloading"). On my 3080, I can run the same model, at the same quant, with essentially the same context length, despite the 3080 only having ~10GB of VRAM, by splitting some of the work with the CPU (roughly speaking).
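Conceptually, offloading is just a budgeting decision about which tensors live in VRAM vs. system RAM. A toy sketch of the greedy version (`split_layers` is a made-up helper; real runtimes like llama.cpp also let you target specific tensors, e.g. keeping the MoE expert weights on the CPU):

```python
def split_layers(n_layers, gb_per_layer, vram_gb):
    """Greedily place layers on the GPU until VRAM runs out; the rest
    stay in system RAM and run on the CPU. Numbers are illustrative."""
    on_gpu = min(n_layers, int(vram_gb // gb_per_layer))
    return on_gpu, n_layers - on_gpu

# e.g. a 48-layer model at ~0.5 GB/layer against a ~10GB card:
print(split_layers(48, 0.5, 10.0))  # → (20, 28)
```

For MoE models this works out better than it sounds: the expert weights are the bulk of the size but only a few experts fire per token, so parking them in system RAM costs less throughput than offloading dense layers would.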

Layer offloading does slow things down compared to keeping everything fully resident in VRAM, but it can still be fast. IIRC I've measured my 3080 at ~55 tok/s, while my M4 Pro 48GB gets maybe ~70 tok/s? So a slowdown, but still usable.

If you want to get your feet wet with this, I'd suggest trying out:

* LM Studio, and
* the zed.dev editor

They're both pretty straightforward to set up and pretty respectable. zed.dev makes it very easy to configure something akin to Claude Code (i.e. an agent with tool-calling support) in relatively little time. There are fancier things you can do, but that pair is along the lines of "set up in ~5 minutes", at least after downloading the applications + model weights (which are likely larger than the applications). This assumes you're on a Mac; the same stack still works with Nvidia, but requires more finicky setup to tune the amount of expert offloading to the particular system.
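The wiring between the two is roughly: start LM Studio's local server (it serves an OpenAI-compatible API on localhost:1234 by default), then point Zed at it in `settings.json`. From memory the config is something of this shape; check the current Zed docs for the exact key names, as they change between versions:

```json
{
  "language_models": {
    "lm_studio": {
      "api_url": "http://localhost:1234/api/v0"
    }
  }
}
```

Once Zed can see the endpoint, the locally loaded model shows up in its assistant/agent model picker like any hosted provider would.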

It's plausible you could do something similar with LM Studio + VS Code; I'm just less familiar with that.