
Comment by nl

12 hours ago

> It's also worth noting that OpenAI has not trained a new model since gpt4o (all subsequent models are routing systems and prompt chains built on top of 4), so the idea of OpenAI being stuck in some kind of runaway training expense is not real.

This isn't really accurate.

Firstly, GPT-4.5 was a new training run, and it is unclear how many other failed training runs they did.

Secondly, "all subsequent models are routing systems and prompt chains built on top of 4" is completely wrong. The models after GPT-4o were each post-trained differently using reinforcement learning, which is a substantial expense.

Finally, it seems like GPT-5.2 is a new training run, or at least its training cutoff date is different. Even if they didn't do a full run from scratch, it must have been a very large run.