
Comment by simonw

10 days ago

This model is a LOT of fun. It's absolutely tiny - just a 241MB download - and screamingly fast, and hallucinates wildly about almost everything.

Here's one of dozens of results I got for "Generate an SVG of a pelican riding a bicycle". For this one it decided to write a poem:

  +-----------------------+
  |   Pelican Riding Bike |
  +-----------------------+
  |  This is the cat!  |
  |  He's got big wings and a happy tail.  |
  |  He loves to ride his bike!  |
  +-----------------------+
  |   Bike lights are shining bright.  |
  |   He's got a shiny top, too!  |
  |   He's ready for adventure!  |
  +-----------------------+

There are a bunch more attempts in this Gist, some of which do at least include an SVG tag albeit one that doesn't render anything: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec...

I'm looking forward to seeing people fine-tune this in a way that produces useful output for selected tasks, which should absolutely be feasible.

I audibly laughed at this one: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec... where it generates a… poem? Song? And then proceeds to explain how each line contributes to the SVG, concluding with:

> This SVG code provides a clear and visually appealing representation of a pelican riding a bicycle in a scenic landscape.

  • This reminds me of my interactions lately with ChatGPT, where I gave in to its repeated offer to draw me an electronics diagram. The result was absolute garbage. During the subsequent conversation it kept offering to incorporate any new insights into the diagram, entirely oblivious to its own incompetence.

I see you are using Ollama's GGUFs. By default it will download the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:16`

You'll get better results.
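
If you want to script against it, here's a minimal sketch using the official `ollama` Python client, assuming you've already pulled the bf16 tag above. The prompt is just an illustration, not from the thread:

  # pip install ollama  (and `ollama pull gemma3:270m-it-bf16` beforehand)
  import ollama

  response = ollama.chat(
      model="gemma3:270m-it-bf16",
      messages=[{"role": "user", "content": "When was Julius Caesar born?"}],
  )
  print(response["message"]["content"])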

He may generate useless tokens, but boy can he generate a LOT of tokens.

> For this one it decided to write a poem

My first try:

user: "When was Julius Caesar born"

response: "Julius Caesar was born in **Rome**"

Beautiful :D

(I do not mean to detract from it - but it's just beautiful. It will require more effort to tame it.)

Serious question, but if it hallucinates about almost everything, what's the use case for it?

  • Fine-tuning for specific tasks. I'm hoping to see some good examples of that soon - the blog entry mentions things like structured text extraction, so maybe something like "turn this text about an event into an iCal document" might work?

    • Google helpfully made some docs on how to fine-tune this model [0]. I'm looking forward to giving it a try! (A rough sketch of that workflow follows this thread.)

        [0]: https://ai.google.dev/gemma/docs/core/huggingface_text_full_finetune

    • Fine-tuning messes with instruction following and RL'd behavior. I think this is mostly going to be useful for high-volume pipelines doing some sort of mundane extraction or transformation.

  • I feel like the blog post, and the GP comment, do a good job of explaining that it's built to be a small model easily fine-tuned for narrow tasks, rather than used for general tasks out of the box. The latter is guaranteed to hallucinate heavily at this size; that doesn't mean every specific task it's fine-tuned for would. Some examples given were fine-tuning it to efficiently and quickly route a query to the right place to actually be handled, or tuning it to do sentiment analysis of content.

    An easily fine-tunable tiny model might actually be one of the better uses of local LLMs I've seen yet. Rather than trying to be a small model that's great at everything, it's a tiny model you can quickly tune to do one specific thing decently, extremely fast, and locally on pretty much anything.

  • It's funny. Which is subjective, but if it fits for you, it's arguably more useful than Claude.

  • Because that's not the job it was designed to do, and you would know by reading the article.

Could be interesting to use in a RAG setup, and also to fine-tune it.

For sure it won't generate great SVGs, but it might be a really good conversational model.

  • The article says it's not a good conversational model, but that it can be used for things like data extraction and classification.
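
    To make the extraction/classification idea concrete, a minimal sketch using the Transformers `pipeline` API (assuming a recent Transformers release); the review text and label set are made up for illustration:

      # pip install transformers
      from transformers import pipeline

      generator = pipeline("text-generation", model="google/gemma-3-270m-it")
      messages = [{
          "role": "user",
          "content": (
              "Classify the sentiment of this review as positive, negative, "
              "or neutral. Reply with one word.\n\n"
              "Review: The battery died after two days."
          ),
      }]
      out = generator(messages, max_new_tokens=5)
      # The pipeline returns the whole chat; the last message is the reply.
      print(out[0]["generated_text"][-1]["content"])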

> For this one it decided to write a poem

Could it be tamed with good role/system-prompt crafting? (Besides fine-tuning.)
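
A minimal sketch of what that might look like, again with the `ollama` client; the system message here is one guess at a taming prompt, not a tested recipe:

  import ollama

  response = ollama.chat(
      model="gemma3:270m-it-bf16",
      messages=[
          {
              "role": "system",
              "content": (
                  "You output only valid SVG markup. No prose, no poems, "
                  "no explanations."
              ),
          },
          {"role": "user", "content": "Generate an SVG of a pelican riding a bicycle"},
      ],
  )
  print(response["message"]["content"])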

Do you take requests? We need to see how well this model works with some fine-tuning :D

The question is whether you can make a fine-tuned version and spam any given forum within an hour with the most attuned but garbage content.

I was looking at the demo and reading the bedtime story it generated, and even there, there was confusion between the sprite and the cat. It switched subjects instantly, making for a confusing paragraph. What's the point of this model?