Comment by jwr

13 hours ago

The speed is fine, the models are not.

I found only one great application of local LLMs: spam filtering. I wrote a "despammer" tool that accesses my mail server over IMAP, reads new messages, and uses an LLM to decide whether each one is spam. It achieves a 95.6% correct classification rate on my (very difficult) test corpus, and in practical usage it's nearly perfect. gpt-oss-20b is currently the best model for this.
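Roughly, the flow looks like this (a minimal Python sketch, not my actual tool, which is in Clojure; the server credentials, the local OpenAI-compatible endpoint, the prompt, and the "Junk" folder name are all placeholder assumptions):

    import email
    import imaplib
    from email.header import decode_header, make_header

    import requests  # assumes a local OpenAI-compatible server (e.g. llama.cpp or Ollama)

    IMAP_HOST = "mail.example.com"   # hypothetical server details
    IMAP_USER = "user@example.com"
    IMAP_PASS = "app-password"
    LLM_URL = "http://localhost:11434/v1/chat/completions"  # assumed endpoint
    LLM_MODEL = "gpt-oss-20b"

    def classify(subject: str, body: str) -> bool:
        """Ask the local model for a one-word spam verdict."""
        prompt = (
            "Classify the following email as SPAM or HAM. "
            "Answer with exactly one word.\n\n"
            f"Subject: {subject}\n\n{body[:4000]}"  # truncate long bodies
        )
        resp = requests.post(
            LLM_URL,
            json={
                "model": LLM_MODEL,
                "messages": [{"role": "user", "content": prompt}],
                "temperature": 0,
            },
            timeout=120,
        )
        resp.raise_for_status()
        answer = resp.json()["choices"][0]["message"]["content"].strip().upper()
        return answer.startswith("SPAM")

    def main() -> None:
        imap = imaplib.IMAP4_SSL(IMAP_HOST)
        imap.login(IMAP_USER, IMAP_PASS)
        imap.select("INBOX")
        _, data = imap.search(None, "UNSEEN")  # only look at new messages
        for num in data[0].split():
            _, parts = imap.fetch(num, "(RFC822)")
            msg = email.message_from_bytes(parts[0][1])
            subject = str(make_header(decode_header(msg.get("Subject", ""))))
            body = ""
            for part in msg.walk():
                if part.get_content_type() == "text/plain":
                    payload = part.get_payload(decode=True)
                    if payload:
                        charset = part.get_content_charset() or "utf-8"
                        body = payload.decode(charset, errors="replace")
                        break
            if classify(subject, body):
                # Move suspected spam out of the inbox; folder name is an assumption.
                imap.copy(num, "Junk")
                imap.store(num, "+FLAGS", "\\Deleted")
        imap.expunge()
        imap.logout()

    if __name__ == "__main__":
        main()

The whole trick is that the task is a narrow, self-contained classification with a one-word answer, which is exactly the kind of thing a 20B model can do reliably.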

For all other purposes, models with <80B parameters are just too stupid to do anything useful for me. I write in Clojure, and there is no boilerplate: the code reflects real business problems, so I need an LLM that is capable of actually understanding things. Claude Code, especially with Opus, does pretty well on simpler problems; all local models are plain dumb and a waste of time compared to that, so I don't see the appeal yet.

That said, my next laptop will be a MacBook Pro with an M5 Max and 128GB of RAM, because the small LLMs are slowly getting better.