Comment by Aurornis
3 days ago
I’ve experimented with several of the really small models. It’s impressive that they can produce anything at all, but in my experience the output is basically useless for anything of value.
Yes, I thought that too! But qwen3:0.6b (and to some extent gemma 1b) has made me reevaluate.
They still aren't as useful as large LLMs, but for summarization and other tasks where you can give them structure but want the sheen of natural language, they are much better than the Phi series was.
That's interesting. For what projects would you want the "sheen of natural language" though?
Say I want to auto-bookmark a bunch of tabs and need a summary of each one. Using the title is a mechanical solution, but a nice prompt and a small model can summarize the title and contents into something much more useful.
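A minimal sketch of that idea: build a one-line summarization prompt from a tab's title and contents, then hand it to a small local model. The helper name, the truncation limit, and the Ollama usage shown in the comment are assumptions for illustration, not anything specified in the thread.

```python
# Sketch of the auto-bookmarking idea: compose a prompt asking a small
# model for a short bookmark label from a page's title and contents.

def build_summary_prompt(title: str, contents: str, max_chars: int = 2000) -> str:
    """Compose a prompt for a one-sentence bookmark summary."""
    snippet = contents[:max_chars]  # small models have short contexts, so truncate
    return (
        "Summarize this web page in one sentence for a bookmark label.\n"
        f"Title: {title}\n"
        f"Contents: {snippet}\n"
        "Summary:"
    )

# Illustrative usage (assumes the `ollama` Python package and a running
# Ollama server with qwen3:0.6b pulled -- both are assumptions here):
#   import ollama
#   resp = ollama.generate(model="qwen3:0.6b",
#                          prompt=build_summary_prompt(title, page_text))
#   label = resp["response"].strip()
```

The mechanical fallback (just using the title) stays available if the model call fails, which keeps the feature robust on pages with little text.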
The qwen3 family, mostly the 4B and 8B, is absolutely amazing. The VL versions even more so.