Comment by mark_l_watson
4 hours ago
I have been running the slightly larger 35B model for local coding:
ollama run qwen3.6:35b-a3b-nvfp4
This has been optimized for Apple Silicon and runs well on a 32 GB RAM system. Local models are getting better!
Can I ask how much of the 32 GB of RAM it uses? For example, can I run a browser and VS Code at the same time?
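A rough back-of-envelope estimate (not a measurement, and assuming the "nvfp4" tag means roughly 4-bit weights): a 35B-parameter model needs all of its weights resident even if, as a MoE, only a few billion parameters are active per token, so the weights alone take around 17.5 GB before KV cache and runtime overhead:

```python
# Sketch: weight-memory estimate for a 35B model at 4-bit quantization.
# Assumptions: 4 bits per parameter (nvfp4), weights dominate memory;
# KV cache, activations, and the runtime add several GB on top.
params = 35e9           # total parameters (all must be in RAM, even for MoE)
bits_per_param = 4      # assumed 4-bit quantization
weights_gb = params * bits_per_param / 8 / 1e9
print(f"~{weights_gb:.1f} GB for weights alone")  # → ~17.5 GB for weights alone
```

On a 32 GB machine that would leave roughly 10-14 GB for the OS, a browser, and VS Code, which is usually workable but tight.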