
Comment by XCSme

17 hours ago

Why not? I described this in more detail in other comments.

Even when using structured output, you sometimes want to define how the data should be displayed or formatted, especially for cases like chatbots, article writing, tool usage, calling external APIs, parsing documents, etc.
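A minimal sketch of the point above (the prompt text and field names are hypothetical, and the model reply is mocked rather than coming from a real API): structured output constrains the *shape* of the response, but the prompt still has to specify how values should be *formatted* for display.

```python
import json

def build_prompt(question: str) -> str:
    # Request machine-readable structure AND human-readable formatting rules.
    # The schema and formatting instruction here are illustrative only.
    return (
        f"{question}\n"
        "Respond with JSON only, matching this schema:\n"
        '{"answer": string, "display": string}\n'
        'In "display", format currency as USD with two decimals.'
    )

def parse_response(raw: str) -> dict:
    # Validate the reply against the expected shape.
    data = json.loads(raw)
    for key in ("answer", "display"):
        if key not in data:
            raise ValueError(f"missing field: {key}")
    return data

prompt = build_prompt("What does the item cost?")
# A mocked model reply, standing in for an actual LLM call:
reply = '{"answer": "3.5", "display": "$3.50"}'
parsed = parse_response(reply)
print(parsed["display"])
```

The structure check and the formatting instruction are separate concerns: a model can satisfy the schema while still getting the display formatting wrong, which is exactly the failure mode being discussed.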

Most models get this right. Also, this is just one failure mode of Claude.

Like I said in the edit, when people want specific formatting they ask for well-known formats: Markdown, XML, JSON.

I don't even need to debate whether the benchmark is useful; it doesn't pass the sniff test: GPT-5.4 is not worse than Gemini 2.5 Flash in any way that matters to most users. In your benchmark it's meaningfully worse.

  • The questions do ask specifically to respond with the answer only, with an example format given in many cases.

    Note that all reasoning models are tested with "medium" reasoning.

    The benchmarks are questions/data processing tasks that an average user will likely ask, not coding questions (I didn't add any coding tests yet).

    Gemini models also tend to be very consistent. Asking the same question will likely give the same result.

    The two models you mention scored the same; the only difference is that Gemini was better at domain-specific questions (i.e. when you ask something quite technical/niche).