Comment by yomismoaqui
10 days ago
Evaluating a 270M model on encyclopedic knowledge is like opening a heavily compressed JPG image and saying "it looks blocky"
> Evaluating a 270M model on encyclopedic knowledge is like opening a heavily compressed JPG image and saying "it looks blocky"
What I read above is not an evaluation of “encyclopedic knowledge” though, it's very basic common sense: I wouldn't mind if the model didn't know the name of the biggest mountain on Earth, but if the model cannot grasp that the same mountain cannot simultaneously be #1, #2, and #3, then the model feels very dumb.
It gave you the tallest mountain every time. You kept asking it for various numbers of “tallest mountains” and each time it complied.
You asked it to enumerate several mountains by height, and it also complied.
It just didn’t understand that when you said “the 6 tallest mountains” you didn’t mean the tallest mountain, 6 times.
When you used clearer phrasing it worked fine.
It’s 270M. It’s actually a puppy. Puppies can be trained to do cool tricks, bring your shoes, stuff like that.
> asking it for various numbers of “tallest mountains” and each time it complied
That's not what “second tallest” means though, so this is a language model that doesn't understand natural language…
> You kept asking
Gemma 270M isn't the only one with reading issues, as I'm not the person who conducted this experiment…
> You asked it to enumerate several mountains by height, and it also complied.
It didn't; it hallucinated a list of mountains (this isn't surprising though, as this is the kind of encyclopedic knowledge such a small model isn't supposed to be good at).
It does not work that way. The model does not "know". Here is a very nice explanation of what you are actually dealing with (hint: it's not a toddler-level intelligence): https://www.experimental-history.com/p/bag-of-words-have-mer...
Even though I had heard of the bag-of-words framing before, this really hits on something I've been searching for: an explanation that many people could understand, to replace our current consensus (which is nonexistent).
It’s a language model, not an actual toddler. These are specialised tools, and this one is not designed to have broad “common sense” in that way. The fact that you keep using these terms and keep insisting on this demonstrates that, quite frankly, you don’t understand the use case or implementation details well enough to be commenting on it at all.
Not OP, and not intending to be nitpicky, but what's the use/purpose of something like this model? It can't do logic, it's too small to retain much of its training data (retrievable "facts"), the context is tiny, etc.
> they’re specialised tools and this one is not designed to have broad “common sense” in that way.
Except that the key property of language models, compared to other machine-learning techniques, is their ability to show this kind of common-sense understanding of natural language.
> you don’t understand the use case of this enough to be commenting on it at all quite frankly.
It's true that I don't understand the use case for a language model that doesn't have a grasp of what first/second/third mean. Sub-1B models are supposed to be fine-tuned to be useful, but if the base model is so bad at language that it can't tell the difference between first and second, and you have to bake that into your fine-tuning along with your business logic, why use a base model at all?
Also, this is a clear instance of moving the goalposts: the comment I responded to was talking about how we should not expect such a small model to have “encyclopedic knowledge”, and now you are claiming we should not expect such a small language model to make sense of language…
Me: "List the second word in your comment reply"
You: "I'm sorry, I don't have an encyclopedia."
I'm starting to think you're 270M.