Comment by tosh

9 hours ago

Terminal Bench 2.0

  | Name                | Score |
  |---------------------|-------|
  | OpenAI Codex 5.3    | 77.3  |
  | Anthropic Opus 4.6  | 65.4  |

yeah, but I feel like we're past the point of benchmaxxing: many times a model has beaten Anthropic on a specific benchmark, but the 'feel' is that it's still not as good at coding

  • When Anthropic beats benchmarks, it's somehow earned; when OpenAI does, it's dismissed as gaming them and still not 'feeling' as good at coding.

  • 'feel' is no more accurate

    not saying there's a better way, but both suck

    • Speak for yourself. I've been insanely productive with Codex 5.2.

      With the right scaffolding these models are able to perform serious work at high quality levels.

    • The variety of tasks these models can do and will be asked to do is too wide and dissimilar; it will be very hard to have a transversal measurement. At most we will have area-specific consensus that model X or Y is better. It is like saying one person is the best coder at everything: that person does not exist.

    • The 'feel' of a single person is pretty meaningless, but when many users form a consensus over time after a model is released, it feels a lot more informative than a simple benchmark because it can shift over time as people individually discover the strong and weak points of what they're using and get better at it.

    • At the end of the day “feel” is what people rely on to pick which tool they use.

      Is 'feel' unscientific and broken? Sure, maybe, why not.

      But at the end of the day I’m going to choose what I see with my own two eyes over a number in a table.

      Benchmarks are sometimes a useful tool, but we are in prime Goodhart's Law territory.

Benchmarks are useless compared to real world performance.

Real-world performance for these models is a disappointment.