
Comment by gertlabs

17 hours ago

"Games" is loosely defined here, as we run the bench across hundreds of unique environments. For some, the models write code to play a game, either one-shot or via a harness where they can iterate and use tools. Others they play directly, making a decision on each game tick. Some are real-time, with a harness where the models can register code handlers or submit decisions to interact with the environment directly.
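
To make the direct-play mode concrete, here's a minimal sketch of a tick-based loop, with a toy game standing in for a real environment. Everything in it is illustrative: `Environment`, `Agent`, `run_episode`, and `CountdownGame` are hypothetical names, not our actual harness API.

```python
# Sketch of the direct-play mode: one decision per game tick.
# All names here are illustrative, not the benchmark's real interface.
from typing import Any, Protocol


class Environment(Protocol):
    def observe(self) -> dict[str, Any]: ...
    def step(self, action: str) -> bool: ...  # False once the game ends


class Agent(Protocol):
    def decide(self, observation: dict[str, Any]) -> str: ...


def run_episode(env: Environment, agent: Agent, max_ticks: int = 1_000) -> int:
    """Drive the loop: observe, ask the model for a decision, apply it."""
    ticks = 0
    while ticks < max_ticks:
        action = agent.decide(env.observe())
        ticks += 1
        if not env.step(action):
            break
    return ticks


class CountdownGame:
    """Toy environment: the game ends when the counter reaches zero."""

    def __init__(self, start: int = 5) -> None:
        self.counter = start

    def observe(self) -> dict[str, Any]:
        return {"counter": self.counter}

    def step(self, action: str) -> bool:
        if action == "decrement":
            self.counter -= 1
        return self.counter > 0


class GreedyAgent:
    """Toy agent standing in for a model's per-tick decision."""

    def decide(self, observation: dict[str, Any]) -> str:
        return "decrement"


if __name__ == "__main__":
    print(run_episode(CountdownGame(), GreedyAgent()))  # -> 5
```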

Coding is what we test most heavily. Testing it via a game format (instead of correct/incorrect answers) lets us score code objectively, scale to smarter models, and compare performance directly across models. When we built the first iteration last year, I was surprised by how well it mapped to the subjective experience of using models for coding. Games really are great for measuring intelligence.