Comment by codazoda
5 hours ago
You mean Qwen3-Coder-Next? I haven't tried that model itself, yet, because I assume it's too big for me. I have a modest 16GB MacBook Air so I'm restricted to really small stuff. I'm thinking about buying a machine with a GPU to run some of these.
Anyway, maybe I should try some other models. The ones that haven't worked for tool calling, for me, are:
Llama3.1
Llama3.2
Qwen2.5-coder
Qwen3-coder
All of these in 7B, 8B, or (painfully) sometimes 30B sizes.
I should also note that I'm typically using Ollama. Maybe LM Studio or llama.cpp handle this better somehow?
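For what it's worth, a quick way to check whether a given model emits structured tool calls through Ollama is to hit its chat endpoint directly with a tools list and see if `tool_calls` comes back. A minimal sketch below, assuming Ollama is running on its default port; the model tag and the `get_weather` tool are just placeholders for illustration:

```python
# Minimal check of tool-calling support against a local Ollama server.
# Assumes Ollama is listening on its default port (11434); the model tag
# and the get_weather tool are placeholders, not anything specific.
import json
import requests

tools = [{
    "type": "function",
    "function": {
        "name": "get_weather",  # hypothetical example tool
        "description": "Get the current weather for a city",
        "parameters": {
            "type": "object",
            "properties": {
                "city": {"type": "string", "description": "City name"},
            },
            "required": ["city"],
        },
    },
}]

resp = requests.post(
    "http://localhost:11434/api/chat",
    json={
        "model": "qwen2.5-coder:7b",  # placeholder: any model tag you've pulled
        "messages": [{"role": "user", "content": "What's the weather in Boise?"}],
        "tools": tools,
        "stream": False,
    },
    timeout=120,
)
resp.raise_for_status()
message = resp.json().get("message", {})

# Models with working tool support return structured tool_calls here;
# the ones that struggle tend to answer in plain text instead.
print(json.dumps(message.get("tool_calls", "no tool call emitted"), indent=2))
```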