
Comment by greenfish6

5 hours ago

Yea, but I feel like we are over the hill on benchmaxxing. Many times a model has beaten Anthropic on a specific bench, but the 'feel' is that it is still not as good at coding.

When Anthropic beats benchmarks it's somehow earned; when OpenAI games it, it's somehow about not feeling good at coding.

'feel' is no more accurate

Not saying there's a better way, but both suck.

  • The variety of tasks they can do and will be asked to do is too wide and dissimilar, so it will be very hard to have a single cross-cutting measurement. At most we will have area-specific consensus that model X or Y is better. It is like saying one person is the best coder at everything; that person does not exist.

    • Yea, we're going to need benchmarks that incorporate a series of development steps for a particular language and measure how good each model is at each one (a rough sketch of how that could be scored follows this list).

      Like can the model take your plan and ask the right questions where there appear to be holes.

      How broad an understanding of architecture and system design around your language does it have.

      How does it choose to use algorithms available in the language or common libraries.

      How often does it hallucinate features/libraries that aren't there.

      How does it perform as context gets larger.

      And that's for one particular language.
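
      Purely as a sketch of the idea above, not any real benchmark: one hypothetical way to record those per-step, per-language dimensions and aggregate them. Every name here (StepResult, LanguageEval, the score fields) is made up for illustration.

      from dataclasses import dataclass, field

      @dataclass
      class StepResult:
          # One development step for one language, scored on the dimensions listed above.
          plan_questions: float    # 0-1: asked the right questions where the plan had holes
          architecture_fit: float  # 0-1: breadth of architecture/system-design understanding
          library_choice: float    # 0-1: sensible use of built-in or common-library algorithms
          hallucinated_apis: int   # count of features/libraries that don't actually exist
          context_tokens: int      # context size at this step, to track degradation

      @dataclass
      class LanguageEval:
          language: str
          steps: list[StepResult] = field(default_factory=list)

          def summary(self) -> dict[str, float]:
              # Average each dimension across all recorded steps.
              n = len(self.steps) or 1
              return {
                  "plan_questions": sum(s.plan_questions for s in self.steps) / n,
                  "architecture_fit": sum(s.architecture_fit for s in self.steps) / n,
                  "library_choice": sum(s.library_choice for s in self.steps) / n,
                  "hallucinations_per_step": sum(s.hallucinated_apis for s in self.steps) / n,
              }

      # Example: two steps in a Python track, with scores dropping as context grows.
      ev = LanguageEval("python")
      ev.steps.append(StepResult(0.8, 0.7, 0.9, 1, 32_000))
      ev.steps.append(StepResult(0.6, 0.6, 0.8, 3, 96_000))
      print(ev.summary())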

  • The 'feel' of a single person is pretty meaningless, but the consensus that many users form over time after a model is released is a lot more informative than a single benchmark score, because it can shift as people individually discover the strong and weak points of what they're using and get better at it.

  • At the end of the day “feel” is what people rely on to pick which tool they use.

    Is “feel” unscientific and broken? Sure, maybe, why not.

    But at the end of the day I’m going to choose what I see with my own two eyes over a number in a table.

    Benchmarks are sometimes useful too. But we are in prime Goodhart’s Law territory.