Comment by super256

15 hours ago

I think you're mixing things up here, and I suspect your comment is based on the SemiAnalysis article. [1]

It said: "OpenAI’s leading researchers have not completed a successful full-scale pre-training run that was broadly deployed for a new frontier model since GPT-4o in May 2024, highlighting the significant technical hurdle that Google’s TPU fleet has managed to overcome."

However, a pre-training run is the initial, from-scratch training of the base model. You say they only added routing and prompts, but that's not what the original article says. They have most likely still done a lot of fine-tuning, RLHF, alignment, and tool-calling work. All of that is training too, just not pre-training. And that's totally fine; just look at the great results they got with Codex-high.
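
To make the distinction concrete, here's a toy PyTorch sketch (the model and every name in it are purely illustrative, not anything from OpenAI's actual stack): a pre-training run starts from randomly initialized weights, while fine-tuning and other post-training start from an existing checkpoint.

    import torch.nn as nn

    # Toy stand-in for a transformer; purely illustrative.
    class TinyLM(nn.Module):
        def __init__(self, vocab=100, dim=32):
            super().__init__()
            self.embed = nn.Embedding(vocab, dim)
            self.head = nn.Linear(dim, vocab)

        def forward(self, x):
            return self.head(self.embed(x))

    # Pre-training run: weights begin at random initialization
    # and the model is trained from scratch.
    base_model = TinyLM()

    # Fine-tuning / RLHF / alignment: weights begin from the existing
    # base checkpoint, so no new from-scratch pre-training run is needed.
    tuned_model = TinyLM()
    tuned_model.load_state_dict(base_model.state_dict())

Both stages update weights, so both are "training"; only the first is a pre-training run in the article's sense.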

If you actually got what you said from a different source, please link it; I'd like to read it. If you just mixed things up, that's fine too.

[1] https://newsletter.semianalysis.com/p/tpuv7-google-takes-a-s...