Comment by eli

1 day ago

It should be a bit faster if you run an MLX version of the model in LM Studio instead; Ollama doesn't support MLX.

Qwen3-Coder is in the same ballpark, and maybe a bit better at coding.