Comment by novok
13 hours ago
Oh man, you just gave me an idea: use something like Qwen 3.5 to categorize a lot of emails. You can keep the context small, do it per email, and just churn through a lot of crap.
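Something like this rough sketch, one email per call so the context stays tiny (assuming a local Ollama-style server on the default port; the model tag and category labels are just placeholders):

```python
# Rough sketch: categorize emails one at a time with a small local model.
# Assumes an Ollama server on localhost:11434; model tag and labels are placeholders.
import json
import urllib.request

LABELS = ["newsletter", "receipt", "personal", "spam"]

def categorize(email_text: str) -> str:
    # Each call sends exactly one email, so the context stays small.
    prompt = (
        "Classify this email into exactly one of: "
        + ", ".join(LABELS)
        + ". Reply with only the label.\n\n"
        + email_text
    )
    req = urllib.request.Request(
        "http://localhost:11434/api/generate",
        data=json.dumps(
            {"model": "qwen3:0.6b", "prompt": prompt, "stream": False}
        ).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        answer = json.loads(resp.read())["response"].strip().lower()
    return answer if answer in LABELS else "unknown"
```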
The 0.8B model can do this pretty well.
Actually, pg's original "A Plan for Spam" explains how to do this with a Bayesian classifier.
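For reference, here's a bare-bones version of that idea. This is a from-scratch sketch of the approach described in the essay (pg's defaults: clamp to [0.01, 0.99], unknown tokens at 0.4, double ham counts, combine the 15 most "interesting" tokens), not his actual code:

```python
# Minimal naive-Bayes spam scorer in the spirit of "A Plan for Spam".
# Train on token counts from labeled corpora, then combine per-token
# spam probabilities for the most extreme tokens in a message.
import math
import re
from collections import Counter

def tokens(text):
    return re.findall(r"[a-z0-9$'-]+", text.lower())

def train(spam_msgs, ham_msgs):
    spam, ham = Counter(), Counter()
    for m in spam_msgs: spam.update(set(tokens(m)))
    for m in ham_msgs:  ham.update(set(tokens(m)))
    probs = {}
    for w in set(spam) | set(ham):
        s = spam[w] / max(len(spam_msgs), 1)
        h = 2 * ham[w] / max(len(ham_msgs), 1)  # double ham counts to bias against false positives
        probs[w] = min(0.99, max(0.01, s / (s + h)))  # clamp, as in the essay
    return probs

def spam_probability(msg, probs, n=15):
    # take the n tokens whose probability is furthest from neutral 0.5
    ps = sorted((probs.get(w, 0.4) for w in set(tokens(msg))),
                key=lambda p: abs(p - 0.5), reverse=True)[:n]
    # combine with Bayes' rule, assuming token independence
    prod = math.prod(ps)
    return prod / (prod + math.prod(1 - p for p in ps))
```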
I've been learning to apply these lately and it has been pretty eye-opening. Combined with Fourier analysis (for example) you can do what seems kind of like magic, in my opinion. But it has been possible since long before LLMs showed up.
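A toy example of what I mean (assuming numpy; the two signal classes are synthetic stand-ins): take the FFT magnitude spectrum as features and a plain statistical model does the rest, deterministically:

```python
# Toy example: FFT magnitudes as deterministic features for classification.
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 256, endpoint=False)

def make_signal(freq):
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

def features(sig):
    return np.abs(np.fft.rfft(sig))  # magnitude spectrum: deterministic, cheap

# nearest-centroid classifier over mean spectra of two known classes
centroids = {f: np.mean([features(make_signal(f)) for _ in range(20)], axis=0)
             for f in (5, 40)}

def classify(sig):
    f = features(sig)
    return min(centroids, key=lambda k: np.linalg.norm(f - centroids[k]))

print(classify(make_signal(5)), classify(make_signal(40)))  # -> 5 40
```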
Totally different categories and different use cases, but the more I learn about LLMs, the more I discover there's a powerful, deterministic, well-established statistical model or two that can do the same thing.
Really, LLMs are kind of like convenient, wildly inefficient proxies for useful processes. But I'm not convinced they should often end up as permanent fixtures of logical pipelines. Unless you're making a chat bot, I guess.
I was just chatting with a co-worker who wanted to run an LLM locally to classify a bunch of text. He was worried about spending too many tokens, though.
I asked him why he didn't just have the LLM build him a classifier based on a Python ML library instead (see the sketch after the list below).
LLMs are great, but you can also build supporting tools so that:
- you use fewer tokens
- it's deterministic
- you as the human can also use the tools
- it's faster b/c the LLM isn't "shamboozling" every time you need to do the same task.
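Something along these lines, for instance (a sketch assuming scikit-learn; the texts and labels are placeholder training data):

```python
# Sketch of the "have the LLM write you a classical classifier" idea.
# Assumes scikit-learn; texts/labels are placeholder training data.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

texts = ["invoice attached", "team lunch friday", "win a free cruise"]
labels = ["work", "social", "spam"]

clf = make_pipeline(TfidfVectorizer(), MultinomialNB())
clf.fit(texts, labels)  # deterministic: same data in, same model out

print(clf.predict(["free invoice cruise friday"]))
```

Zero tokens per classification after the one-time setup, and each prediction runs in microseconds.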
You can use a 4B model for that; it's quite good.