rubicon33 2 days ago But actual progress seems to be slower. These models are releasing more often but aren't big leaps.
gallerdude 2 days ago We used to get one annual release which was 2x as good, now we get quarterly releases which are 25% better. So annually, we’re now at 2.4x better.
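The compounding arithmetic here checks out; a quick sketch, assuming four quarterly releases that are each 25% better than the last:

```python
# Four quarterly releases, each 25% better than the previous one,
# compound multiplicatively over a year.
quarterly_gain = 1.25
annual_gain = quarterly_gain ** 4
print(round(annual_gain, 2))  # 2.44, i.e. ~2.4x better annually
```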
wahnfrieden 2 days ago GPT 5.3 (/Codex) was a huge leap over 5.2 for coding
rubicon33 2 days ago Eh, sure, but it's marginally better than, if not the same as, Claude 4.6, which itself was a small bump over Claude 4.5
minimaxir 2 days ago Due to the increasing difficulty of scaling up training, the gains instead appear to be coming from better model training, which seems to be working well for everyone.