Comment by ufish235

7 days ago

Can you give an example of some coding tasks? I had no idea local was that good.

Changed into a directory recently, fired up the qwen code CLI, and gave it two prompts: "so what's this then?", to which it gave a good summary across the stack and product, and then "think you can find something todo in the TODO?". While I was busy in Claude Code on another project, it neatly finished three HTML & CSS tasks that I had been procrastinating on for weeks.

This was a qwen3-coder-next 35B model on an M4 Max with 64GB; ollama reports the model as 51GB. I have not yet tried the variants from TFA.

I personally have used Qwen2.5-coder:14B for "live, talking rubber duck" sorts of things.

"I am learning Elixir, can you explain this code to me?" (And then I can also ask follow-up questions.)

"Here is a bunch of logs. Given that the symptom is that the system fails to process a message, what log messages jump out as suspicious for dropping a message?"

"Here is the code I want to test. <code> Here are the existing tests. <test code> What is one additional test you would add?"

"I am learning Elixir. Here is some code that fails to compile, here is the error message, can you walk me through what I did wrong?"

I haven't gotten much value out of "review this code", but maybe I'll have to try prompting for "persona: brief rude senior" as mentioned elsewhere.

  • 3.5 does a good job of reviewing code, even without being prompted to be brief and/or rude.

I've been using opencode pointed at a local model running under llama.cpp.

The last thing I had it build was a Rust-based app that pulls data from a set of APIs every 2 minutes, processes it, and stores it in a local database, with a half-hourly task that does further analysis. It has done a decent job.
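The scheduling part of an app like that is simple to sketch: a 2-minute poll loop where every 15th tick (15 × 2 min = 30 min) also triggers the analysis pass. This is a minimal stand-alone sketch, not the commenter's actual code; `poll_apis` and `run_analysis` are hypothetical placeholders for the real fetch/aggregate steps, and the demo loop runs fast ticks instead of sleeping 2 minutes.

```rust
use std::thread;
use std::time::Duration;

const POLLS_PER_ANALYSIS: u64 = 15; // 15 polls * 2 min = 30 min

// Placeholder: fetch from each API, process, write rows to the local DB.
fn poll_apis(tick: u64) {
    println!("poll #{tick}: fetch, process, store");
}

// Placeholder: the half-hourly aggregation over stored rows.
fn run_analysis() {
    println!("analysis: half-hourly pass");
}

// Pure decision function: does this tick also run the analysis?
fn should_run_analysis(tick: u64) -> bool {
    tick > 0 && tick % POLLS_PER_ANALYSIS == 0
}

fn main() {
    // Demo: 31 fast ticks; a real deployment would sleep 120 s per tick.
    for tick in 1..=31 {
        poll_apis(tick);
        if should_run_analysis(tick) {
            run_analysis();
        }
        thread::sleep(Duration::from_millis(1));
    }
}
```

In practice you would likely use an async runtime's interval timers instead of a blocking sleep, but the tick-counting idea is the same.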

It's definitely not as fast or as good as the large online models, but it's fast enough and good enough, and it runs on hardware I already had spare.