At the moment, the only way you can tell whether a model is good at a particular task is by trying it at that task. Gut feel is how you pick which models to test first, and that feel is based largely on past experience and educated guesses about which strengths transfer between tasks.
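The "try it at the task" approach can be sketched as a tiny eval loop: run each candidate on your own task examples and score the outputs. This is a minimal illustration, not any particular framework; `call_model` is a hypothetical stand-in for whatever API or local runtime you actually use.

```python
# Minimal sketch of task-specific evaluation. `call_model` is a
# hypothetical placeholder: in practice it would hit an API or a
# local model runtime. Here it returns deterministic stub answers
# so the demo is self-contained.

def call_model(model_name: str, prompt: str) -> str:
    stub = {"model-a": "4", "model-b": "5"}
    return stub.get(model_name, "")

def evaluate(model_name: str, cases: list[tuple[str, str]]) -> float:
    """Fraction of your own task cases the model gets exactly right."""
    hits = sum(call_model(model_name, p) == expected for p, expected in cases)
    return hits / len(cases)

# Your own task examples -- private, not drawn from a public benchmark.
cases = [("2+2=", "4"), ("what is 2+2?", "4")]

scores = {m: evaluate(m, cases) for m in ["model-a", "model-b"]}
```

The point of keeping the cases private is exactly the benchmark-gaming concern discussed below: a held-out set you wrote yourself can't have leaked into anyone's training data.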
You should also remember that there's no free lunch. If you see models below a certain size fail consistently at your task, don't expect an even smaller model to magically succeed, no matter how much pixie dust the developer advertises.
If you asked "What's the best bicycle?", most enthusiasts would say: one you've tried, one that works for your use case, and so on.
Benchmarks should only serve as a first-pass filter for which models to try, because at the end of the day it's far too easy to game them without breaking any rules (post-train on the public test set, generate a ton of synthetic examples in its style, train on those, repeat).
To some extent there must be a free lunch, because today's 30B models are enormously better than the 30B models that existed a year ago.
I suppose it's an open question whether there's another free lunch left, or whether the 30B models of a year from now won't be much better than our current ones.
it currently beats depending on the benchmarks
I mean, in other environments people say that.