Comment by cuuupid

13 days ago

I think the most important thing to note here, perhaps more so than the context window, is that this exposes some serious flaws in benchmarks. Per benchmarks, Maverick is competitive only with older models like GPT-4o or Gemini 2.0 Flash, and not with anything in the last few months (incl. reasoning models).

However, the LMArena head-to-head leaderboard ranks it 2nd overall: https://lmarena.ai/?leaderboard

This would indicate there is either a gap between user preference and model performance, or between model performance and whatever benchmarks assess.

Either way, it is surely a huge deal that an open-source model is now outperforming GPT-4.5.

The benchmarks are awful. No disrespect to the people who worked to make them; none of this is easy. But I suggest going through them sometime. For example, I'm currently combing through the MMMU, MMMU-Pro, and MMStar datasets to build a better multimodal benchmark, and so far only about 70% of the questions have passed the sniff test. The other 30% make no sense, are leading, or are too ambiguous. Of the remaining 70%, about a third still need minor edits.
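
To give a concrete picture of what that triage looks like, a first pass can be scripted before the manual read-through. This is only a rough sketch: the dataset IDs, config handling, and field names ("question", "options") are assumptions to check against the dataset cards, and the heuristics merely surface candidates for human review rather than replacing it.

    # Rough triage pass over a multimodal benchmark before reading it by hand.
    # Dataset IDs, configs, and field names are assumptions -- verify them
    # against the actual dataset cards before running.
    from datasets import load_dataset

    LEADING_PHRASES = ("clearly", "obviously", "which of course")

    def flag_for_review(question: str, options: list[str]) -> list[str]:
        """Cheap heuristics that surface questions worth a closer manual look."""
        flags = []
        if len(question.split()) < 5:
            flags.append("very short / possibly ambiguous")
        if any(p in question.lower() for p in LEADING_PHRASES):
            flags.append("possibly leading")
        if options and len(set(options)) < len(options):
            flags.append("duplicate answer options")
        return flags

    def sample_for_audit(dataset_id: str, split: str, n: int = 50, config: str | None = None):
        ds = load_dataset(dataset_id, config, split=split).shuffle(seed=0)
        for row in ds.select(range(min(n, len(ds)))):
            flags = flag_for_review(row["question"], list(row.get("options", [])))
            if flags:
                print(f"[{dataset_id}] {row['question'][:80]!r} -> {flags}")

    # e.g. sample_for_audit("Lin-Chen/MMStar", split="val")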

Another example of how the benchmarks fail (specifically for vision, since I have less experience with the pure-text benchmarks): almost all of the questions either have the VLM read a chart/diagram/table and answer some question about it, or ask it to identify some basic property of an image. The former just tests the vision component's ability to do OCR and then the LLM's intelligence. The latter are things like "Is this an oil painting or digital art?" and "Is the sheep in front of or behind the car?" when the image is a clean shot of a sheep and a car. Absolutely nothing tests a deeper, more thorough understanding of the images or their nuances, or requires the VLM to think intelligently about the visual content.

Also, due to the nature of benchmarks, it can be quite difficult to test how the models perform "in the wild." You can't really have free-form answers on benchmarks, so they tend to be highly constrained, opting either for multiple-choice quizzes or for various hacks to test whether the LLM's answer lines up with the ground truth. Multiple choice is significantly easier in general, which raises the base pass rate. The distractors also tend to be quite poorly chosen: rather than representing traps or common mistakes, they are mostly picked at random and are thus often easy to weed out.
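
To put a number on the base-rate point: on a four-option question, pure guessing already gets 25%, and every implausible distractor a model can rule out pushes that floor higher without the model understanding anything about the image. A quick back-of-the-envelope check (just arithmetic, not tied to any particular benchmark):

    # Expected accuracy from pure guessing on an n-way multiple-choice question
    # when `eliminated` implausible distractors can be ruled out beforehand.
    def guess_accuracy(n_options: int, eliminated: int) -> float:
        return 1.0 / (n_options - eliminated)

    for k in range(4):
        print(f"4 options, {k} weak distractors ruled out: {guess_accuracy(4, k):.0%}")
    # 25% -> 33% -> 50% -> 100%: poorly chosen distractors inflate scores
    # without requiring any real understanding of the question or image.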

So there's really only a weak correlation between either of those metrics and real-world performance.

There's absolutely a huge gap between user preference and model performance, and it is widening by the minute. The more performant these models get, the more individual and syntactical preferences prevail.