osti 7 hours ago Somehow regresses on SWE bench?
lkbm 7 hours ago I don't know how these benchmarks work (do you do a hundred runs? A thousand runs?), but 0.1% seems like noise.
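A quick sanity check on the noise question: a SWE-bench score is a pass rate over a fixed set of binary outcomes, so its sampling error is easy to estimate. A minimal sketch in Python, assuming a single run over SWE-bench Verified's 500 instances (the usual reporting setup):

    import math

    # SWE-bench Verified has 500 instances; a reported score is the
    # fraction of instances resolved, typically from a single run.
    N = 500
    p = 0.80  # an example score in the saturated regime discussed here

    # Binomial standard error of the pass rate: sqrt(p * (1 - p) / N)
    se = math.sqrt(p * (1 - p) / N)
    print(f"standard error ~ {se * 100:.1f} percentage points")
    # -> ~1.8 points, so a 0.1-point move sits deep inside the noise band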
SubiculumCode 7 hours ago That benchmark is pretty saturated, tbh. A "regression" of such small magnitude could mean many different things or nothing at all.
usaar333 7 hours ago I'd interpret that as rounding error, i.e. unchanged. SWE-bench seems really hard once you are above 80%.
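The rounding-error reading also falls out of the test-set size: with 500 instances, the score moves in steps of one task. A worked check, under the same single-run assumption as above:

    # Score granularity on SWE-bench Verified: one task out of 500.
    N = 500
    per_task = 100 / N
    print(f"one task = {per_task:.1f} percentage points")  # -> 0.2

    # Any real single-run difference is a multiple of 0.2 points, so a
    # published 0.1-point delta is below the benchmark's own resolution
    # and is best read as rounding in the reported numbers.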
Squarex 7 hours ago It's not a great benchmark anymore, starting with it being Python/Django primarily. The industry should move to something more representative.
usaar333 6 hours ago OpenAI has; they don't even mention the score on gpt-5.3-codex. On the other hand, it is their own verified benchmark, which is telling.