
Comment by gunalx

21 hours ago

If I read it right, it used multiple samples of itself to verify the accuracy, but isn't this problematic?

Problematic in that it's still not formal verification, not problematic as in "it's worse to do this than not".

In what way? The panel-of-experts approach has been around for a while now, and it's documented to improve quality.

  • Well, problematic because they are using their own verifier as a panel of experts, with their own model trained specifically to satisfy this verifier. On the benchmark runs, they don't mention using human experts to cross-validate their scores.

    • I assume they use self-verification only during RL training to provide the reward signal, but not for benchmarks.
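The scheme described above can be sketched roughly. This is a minimal, hypothetical illustration of self-verification as an RL reward signal (the paper's actual mechanism may differ): the same model is resampled several times, and the reward for a candidate answer is the fraction of its own samples that agree with it. The `sample_answer` stand-in is an assumption, not a real API.

```python
import random
from collections import Counter

def sample_answer(prompt: str, rng: random.Random) -> str:
    # Stand-in for a model call (hypothetical); a real system would
    # decode a fresh completion from the policy model here.
    return rng.choice(["42", "42", "41"])

def self_consistency_reward(prompt: str, candidate: str, n_verifiers: int = 5) -> float:
    # Reward = fraction of the model's own resampled answers that agree
    # with the candidate. This is the failure mode raised in the thread:
    # a policy trained against this signal is optimized to satisfy its
    # own verifier, not ground truth.
    votes = Counter(sample_answer(prompt, random.Random(seed))
                    for seed in range(n_verifiers))
    return votes[candidate] / n_verifiers
```

Used only as a training-time reward, this never touches benchmark scoring; the concern in the parent comment is that without human cross-validation, agreement with itself and correctness can drift apart.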