Comment by marinhero

10 days ago

Serious question but if it hallucinates about almost everything, what's the use case for it?

Fine-tuning for specific tasks. I'm hoping to see some good examples of that soon - the blog entry mentions things like structured text extraction, so maybe something like "turn this text about an event into an iCal document" might work?

  • Google helpfully made some docs on how to fine-tune this model [0]. I'm looking forward to giving it a try!

      [0]: https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune

  • Fine-tuning messes with instruction following and RL'd behavior. I think this is mostly going to be useful for high-volume pipelines doing some sort of mundane extraction or transformation.
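The "event text to iCal" idea above could be sketched as extraction plus rendering: a fine-tuned model would emit the event fields (here faked as a JSON string), and a small wrapper renders them as an iCalendar document. The field names and the render function are hypothetical, not anything from the Google docs:

```python
# Hypothetical sketch: a fine-tuned model would return event fields as JSON;
# this wrapper renders them as a minimal iCalendar (RFC 5545) VEVENT.
import json

def render_ical(fields: dict) -> str:
    """Turn extracted event fields into a minimal iCalendar document."""
    return "\r\n".join([
        "BEGIN:VCALENDAR",
        "VERSION:2.0",
        "BEGIN:VEVENT",
        f"SUMMARY:{fields['summary']}",
        f"DTSTART:{fields['start']}",
        f"DTEND:{fields['end']}",
        f"LOCATION:{fields['location']}",
        "END:VEVENT",
        "END:VCALENDAR",
    ])

# Stand-in for a model completion on "Team offsite, March 3rd 2-4pm, Pier 39":
model_output = ('{"summary": "Team offsite", "start": "20250303T140000", '
                '"end": "20250303T160000", "location": "Pier 39"}')
print(render_ical(json.loads(model_output)))
```

The point of the split is that the model only has to learn the extraction; the deterministic wrapper keeps the output valid iCal even when the model is tiny.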

I feel like the blog post, and the GP comment, do a good job of explaining that it's built to be a small model easily fine-tuned for narrow tasks, rather than used for general tasks out of the box. The latter is guaranteed to hallucinate heavily at this size, but that doesn't mean every specific task it's fine-tuned for would. Some of the examples given were tuning it to quickly and efficiently route a query to the right place to actually be handled, or tuning it for sentiment analysis of content.
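The routing example can be sketched as constrained classification: the tuned model only ever has to emit one of a few route labels, and the caller maps that label to a handler. The label set and destination names here are made up for illustration:

```python
# Hypothetical sketch of query routing with a small fine-tuned classifier.
# A tuned model would emit exactly one label; a dispatch table does the rest.
ROUTES = {
    "billing": "billing_queue",
    "technical": "support_queue",
    "sales": "sales_queue",
}

def dispatch(model_label: str) -> str:
    """Map the model's single-label output to a destination, with a fallback."""
    return ROUTES.get(model_label.strip().lower(), "human_review")

# Stand-in for a model completion on "Why was my card charged twice?":
print(dispatch("billing"))    # prints "billing_queue"
print(dispatch("???"))        # anything off-label falls back to "human_review"
```

Because the model's whole job is one short label, even heavy quantization or a very small parameter count can be tolerable, and the fallback branch bounds the damage when it hallucinates anyway.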

An easily fine-tunable tiny model might actually be one of the better uses of local LLMs I've seen yet. Rather than trying to be a small model that's great at everything, it's a tiny model you can quickly tune to do one specific thing decently, extremely fast, and locally on pretty much anything.
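For the quick-tuning workflow described above, the main artifact you'd prepare is a small instruction/response dataset. A hedged sketch of writing one as chat-format JSONL, the convention many Hugging Face fine-tuning examples use; the exact schema any given trainer expects may differ, and the prompts and labels here are invented:

```python
# Hypothetical sketch: writing a tiny chat-format fine-tuning dataset as JSONL.
# Mirrors the common {"messages": [...]} convention; check your trainer's docs
# for the schema it actually expects.
import json

examples = [
    ("Route this query: 'My invoice is wrong'", "billing"),
    ("Route this query: 'The app crashes on launch'", "technical"),
]

lines = []
for prompt, label in examples:
    lines.append(json.dumps({
        "messages": [
            {"role": "user", "content": prompt},
            {"role": "assistant", "content": label},
        ]
    }))

jsonl = "\n".join(lines)  # one JSON object per line, ready to save as .jsonl
print(jsonl)
```

A few hundred rows like this is often the whole "dataset" for a narrow task, which is what makes the tune-it-in-an-afternoon pitch plausible for a model this small.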

It's funny, which is subjective, but if it fits for you, it's arguably more useful than Claude.

Because that's not the job it was designed to do, and you would know that by reading the article.