
Comment by menaerus

6 months ago

Thanks for the link. A holdout set which has yet to be used to verify the 25% claim. He also says he doesn't believe OpenAI would sabotage themselves by gaming their internal benchmark results, since that would be easily exposed, either by the results from the holdout set or by the public repeating the benchmarks themselves. Seems reasonable to me.

>the public repeating the benchmarks themselves

The public has no access to this benchmark.

In fact, everyone thought it was all locked up in a vault at Epoch AI HQ, but looks like Sam Altman has a copy on his bedside table.

  • Perhaps what he meant is that the public will be able to benchmark the model themselves by throwing math problems of varying difficulty at it, not necessarily the FrontierMath benchmark itself. It should become pretty obvious whether or not they were faking the results.

    • It's been found [0] that slightly varying Putnam problems causes a 30% drop in o1-Preview accuracy, but that hasn't put a dent in OAI's hype.

      There's absolutely no comeuppance for juicing benchmarks, especially ones no one has access to. If o3's performance doesn't meet expectations, there'll be plenty of people making excuses for it ("You're prompting it wrong!", "That's just not its domain!").

      [0] https://openreview.net/forum?id=YXnwlZe0yf&noteId=yrsGpHd0Sf
