Comment by dsrtslnd23
6 days ago
What hardware are you running the 30b model on? I guess it needs at least 24GB VRAM for decent inference speeds.
I'm running qwen3-coder:30b-a3b-q8_0 @ 32k context. It comes out to 36 GB, and I'm splitting it between a 3090 (24 GB) and a 4060 Ti (16 GB); ollama put 20 GB on the 3090 and 13.5 GB on the 4060 Ti. Runs great, tbh. Ollama is running on an Ubuntu server, and I'm running Claude Code from my Windows desktop PC.
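In case it helps anyone wiring up something similar, here's a minimal sketch of talking to that kind of remote Ollama box from another machine. The hostname is a placeholder, the model tag is the one from my setup above, and the endpoint and fields are Ollama's standard REST API:

    import requests

    # Placeholder address for the Ubuntu box running "ollama serve"
    # (Ollama listens on port 11434 by default).
    OLLAMA_URL = "http://ubuntu-server:11434/api/generate"

    payload = {
        "model": "qwen3-coder:30b-a3b-q8_0",  # model tag from the setup above
        "prompt": "Write a Python function that reverses a linked list.",
        "stream": False,                      # one JSON response instead of a token stream
        "options": {"num_ctx": 32768},        # ask for the 32k context window
    }

    resp = requests.post(OLLAMA_URL, json=payload, timeout=600)
    resp.raise_for_status()
    print(resp.json()["response"])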
The general rule of thumb is that you need roughly as much VRAM as the model's file size. Quantized 30B models usually come in around 19 GB, so most likely a GPU with 24 GB of VRAM.
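Back-of-envelope math behind that rule of thumb (weights only, ignoring KV cache and runtime overhead):

    # Weights-only VRAM estimate: params * bits-per-weight / 8.
    def weight_gb(params_billion: float, bits_per_weight: float) -> float:
        return params_billion * 1e9 * bits_per_weight / 8 / 1e9

    # A 30B model at common quantization levels (~Q4_K_M, Q8_0, FP16):
    for bits in (4.5, 8.0, 16.0):
        print(f"{bits:>4} bits/weight -> ~{weight_gb(30, bits):.0f} GB")
    # ~17 GB, ~30 GB, ~60 GB: a Q4-ish 30B lands near the 19 GB figure,
    # while Q8 spills past a single 24 GB card (hence the 36 GB split above).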
But this also means tiny context windows. You can't fit gpt-oss:20b + more than a tiny file + instructions into 24GB
gpt-oss is natively 4-bit (MXFP4), so you kinda can
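A rough way to see how much the context itself costs on top of the weights; the layer/head numbers below are placeholders, not gpt-oss's real architecture, so check the model card before trusting the totals:

    # KV cache per token = 2 (K and V) * layers * kv_heads * head_dim * bytes/elem.
    def kv_cache_gb(layers, kv_heads, head_dim, context_len, bytes_per_elem=2):
        return 2 * layers * kv_heads * head_dim * context_len * bytes_per_elem / 1e9

    # Hypothetical GQA model: 24 layers, 8 KV heads of dim 64, FP16 cache.
    for ctx in (8_192, 32_768, 131_072):
        print(f"{ctx:>7} tokens -> ~{kv_cache_gb(24, 8, 64, ctx):.1f} GB KV cache")
    # ~0.4 GB, ~1.6 GB, ~6.4 GB on top of the weights, which is what squeezes
    # the usable context once the model itself is sitting in 24 GB.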
I'd like to know this, too. I'm just getting my feet wet with ollama and local models on CPU only, and it's obviously terribly slow (even with 24 cores and 128 GB of DRAM). It's hard to gauge how much GPU money I'd need to plonk down to get acceptable performance for coding workflows.
I tried to build a similar local stack recently to save on API costs. In practice I found the hardware savings are a bit of a mirage for coding workflows. The local models hallucinate just enough that you end up spending more in lost time debugging than you would have paid for Sonnet or Opus to get it right the first time.