Comment by simonw
10 days ago
This model is a LOT of fun. It's absolutely tiny - just a 241MB download - and screamingly fast, and hallucinates wildly about almost everything.
Here's one of dozens of results I got for "Generate an SVG of a pelican riding a bicycle". For this one it decided to write a poem:
+-----------------------+
| Pelican Riding Bike |
+-----------------------+
| This is the cat! |
| He's got big wings and a happy tail. |
| He loves to ride his bike! |
+-----------------------+
| Bike lights are shining bright. |
| He's got a shiny top, too! |
| He's ready for adventure! |
+-----------------------+
There are a bunch more attempts in this Gist, some of which do at least include an SVG tag, albeit one that doesn't render anything: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec...
I'm looking forward to seeing people fine-tune this in a way that produces useful output for selected tasks, which should absolutely be feasible.
I audibly laughed at this one: https://gist.github.com/simonw/25e7b7afd6a63a2f15db48b3a51ec... where it generates a… poem? Song? And then proceeds to explain how each line contributes to the SVG, concluding with:
> This SVG code provides a clear and visually appealing representation of a pelican riding a bicycle in a scenic landscape.
This reminds me of my recent interactions with ChatGPT, where I gave in to its repeated offer to draw me an electronics diagram. The result was absolute garbage. During the subsequent conversation it kept offering to incorporate any new insights into the diagram, entirely oblivious to its own incompetence.
I see you're using Ollama's GGUFs. By default it downloads the Q4_0 quantization. Try `gemma3:270m-it-bf16` instead, or use the Unsloth GGUFs: `hf.co/unsloth/gemma-3-270m-it-GGUF:F16`
You'll get better results.
Good call, I'm trying that one just now in LM Studio (by clicking "Use this model -> LM Studio" on https://huggingface.co/unsloth/gemma-3-270m-it-GGUF and selecting the F16 one).
(It did not do noticeably better at my pelican test).
Actually it's worse than that: several of my attempts resulted in infinite loops spitting out the same text. Maybe that GGUF is a bit broken?
Oh :( Maybe the settings? Could you try
temperature = 1.0, top_k = 64, top_p = 0.95, min_p = 0.0
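For anyone wanting to reproduce those settings programmatically, here's a minimal sketch against Ollama's REST API, assuming a local server on the default port and the bf16 tag mentioned upthread:

```python
import requests

# Sketch: send the suggested sampler settings to a local Ollama server.
# Assumes Ollama is running on its default port (11434) and that the
# gemma3:270m-it-bf16 tag from upthread has already been pulled.
response = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "gemma3:270m-it-bf16",
        "prompt": "Generate an SVG of a pelican riding a bicycle",
        "stream": False,
        "options": {
            "temperature": 1.0,
            "top_k": 64,
            "top_p": 0.95,
            "min_p": 0.0,
        },
    },
    timeout=120,
)
print(response.json()["response"])
```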
I ran into the same looping issue with that model.
We uploaded gemma3:270m-it-q8_0 and gemma3:270m-it-fp16 late last night, which give better results. The q4_0 is the QAT model, but we're still looking into it as there are some issues.
He may generate useless tokens but boy can he generate ALOT of tokens.
Can he draw an "alot" made of tokens? https://hyperboleandahalf.blogspot.com/2010/04/alot-is-bette...
He? I know some Gemmas and it's distinctly a female name; is Gemma a boy's name where you're from?
I don't really gender LLMs in my head in general. I guess Gemma is a female name; I only gendered it in the joke because I think it makes it funnier, especially since it's just "a little guy". I know they're giving gendered names to these models now, but I think it's a bit weird to gender them when interacting with them.
Doesn’t the “M” in “Gemma 3 270M” stand for “male”?
Also: https://en.wikipedia.org/wiki/Gemma_Frisius
Perhaps the poster was referring to Simon, not Gemma.
> ALOT
'Alot' is not a word. (I made this mistake a lot, too.)
> It's absolutely tiny - just a 241MB download
That still requires nearly 170 floppy disks for installation (241 MB ÷ 1.44 MB ≈ 168 disks).
Indeed. It requires over 3,000,000 punch cards to store (at 80 bytes per card). Not very tiny!
On the plus side, you can decompose your matrices for free using termites.
> For this one it decided to write a poem
My first try:
user: "When was Julius Caesar born"
response: "Julius Caesar was born in **Rome**"
Beautiful :D
(I don't mean to detract from it; it's just beautiful. It will take more effort to tame.)
Cutting the number of parameters in half is like drinking a pint of beer.
I think you meant vodka.
Serious question: if it hallucinates about almost everything, what's the use case for it?
Fine-tuning for specific tasks. I'm hoping to see some good examples of that soon - the blog entry mentions things like structured text extraction, so maybe something like "turn this text about an event into an iCal document" might work?
Google helpfully made some docs on how to fine-tune this model [0]. I'm looking forward to giving it a try!
Fine-tuning messes with instruction following and RL'd behavior. I think this is mostly going to be useful for high-volume pipelines doing some sort of mundane extraction or transformation.
This is exactly the fine-tuning I'm hoping for, or would do myself if I had the skills. I tried it with Gemma 3 270M and, vanilla, it fails spectacularly.
Basically it would be the quickAdd[1] event from Google Calendar, but calendar-agnostic.
[1] https://developers.google.com/workspace/calendar/api/v3/refe...
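As a sketch of what the fine-tuning data for that could look like, here's one hypothetical prompt/completion pair: free-form event text in, an iCal VEVENT out. The field names and output format are my assumptions for illustration, not anything from Google's docs:

```python
import json

# Hypothetical fine-tuning example for calendar-agnostic "quick add":
# natural-language event text in, an iCal VEVENT block out. The
# prompt/completion schema and the exact output format are assumptions.
example = {
    "prompt": "Lunch with Ana at Cafe Rio next Tuesday 12:30pm for an hour",
    "completion": (
        "BEGIN:VEVENT\n"
        "SUMMARY:Lunch with Ana\n"
        "LOCATION:Cafe Rio\n"
        "DTSTART:20250826T123000\n"
        "DTEND:20250826T133000\n"
        "END:VEVENT"
    ),
}

# A real dataset would need the current date in the prompt so the model
# can resolve relative expressions like "next Tuesday".
with open("train.jsonl", "a", encoding="utf-8") as f:
    f.write(json.dumps(example) + "\n")
```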
It's intended for fine-tuning on your actual use case, as the article shows.
I feel like the blog post, and the GP comment, do a good job of explaining that it's built to be a small model easily fine-tuned for narrow tasks rather than used for general tasks out of the box. The latter is guaranteed to hallucinate heavily at this size; that doesn't mean every specific task it's fine-tuned for would. Some examples given were fine-tuning it to efficiently and quickly route a query to the right place to actually be handled, or tuning it to do sentiment analysis of content.
An easily fine-tunable tiny model might actually be one of the better uses of local LLMs I've seen yet. Rather than trying to be a small model that's great at everything, it's a tiny model you can quickly tune to do one specific thing decently, extremely fast, and locally on pretty much anything.
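To make the routing idea concrete, here's a minimal sketch of what that narrow task might look like at inference time with the Hugging Face `transformers` pipeline (assuming a recent version with chat-message support; the labels are invented for illustration, and in practice you'd run this after fine-tuning on labelled examples):

```python
from transformers import pipeline

# Sketch of the query-routing idea: a tiny model mapping free-form text
# to a fixed label set. google/gemma-3-270m-it is the base instruct model;
# the BILLING/BUG/OTHER labels are made up for this example.
pipe = pipeline("text-generation", model="google/gemma-3-270m-it")

messages = [{
    "role": "user",
    "content": "Classify this support query as BILLING, BUG, or OTHER. "
               "Reply with the label only.\n\n"
               "Query: I was charged twice this month.",
}]
out = pipe(messages, max_new_tokens=8)
print(out[0]["generated_text"][-1]["content"])  # e.g. "BILLING"
```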
It's funny. Humor is subjective, but if it lands for you, it's arguably more useful than Claude.
Because that's not the job it was designed to do, as you'd know from reading the article.
The same as having a goldfish. You can train it to do a trick I guess.
Games where you need NPCs to talk random gibberish.
Nothing, just like pretty much all models you can run on consumer hardware.
This message brought to you by OpenAI: we're useless, but at least there's a pay gate indicating quality!
robotic parrots?
An army of troll bots to shift the Overton Window?
oh no now we'll never hear the end of how LLMs are just statistical word generators
Could be interesting to use in a RAG setup, and also to fine-tune it
For sure it won’t generate great SVGs, but it might be a really good conversational model
The article says it's not a good conversational model but can be used for data extraction and classification as two examples.
> For this one it decided to write a poem
Could it be tamed with good role/system prompt crafting (besides fine-tuning)?
Do you take requests? We need to see how well this model works with some fine-tuning :D
It's gonna be a customer service agent for Sirius Cybernetics. Share and enjoy!
This is like a kobold to the other models' dragons and I don't hate it. :)
Finally we have a model that's just a tad bit sassy
The question is whether you can make a fine-tuned version and spam any given forum within an hour with the most attuned but garbage content.
I was looking at the demo and reading the bedtime story it generated, and even there there was confusion between the sprite and the cat. It switched subjects instantly, making for a confusing paragraph. What's the point of this model?