Comment by ej88

9 hours ago

Most of the gains come from post-training RL, not pre-training (OpenAI's GPT 5.2 is using the same base model as 4o).

Also the article seems to be somewhat outdated. 'Model collapse' is not a real issue faced by frontier labs.

> OpenAI's GPT 5.2 is using the same base model as 4o

Where’s that info from?

  • Not the parent, but the only other source I found for that claim was Dylan Patel's recent SemiAnalysis post.

A lot of the recent gains come from RL, but also from better inference during the prefill phase, and none of that will be impacted by data poisoning.

But if you want to keep the "base model" at the frontier, you need to retrain it frequently on more recent data, which is where data poisoning becomes interesting.

Model collapse is still a very real issue, but we know how to avoid it. People (non-professionals) who train their own LoRAs for image generation (at least in TTRPG circles) still run into it regularly.
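One common mitigation is simply never training purely on model outputs: each retraining mix keeps a fixed share of verified human-authored data. A minimal sketch of that idea (the 50% ratio and the function name are illustrative assumptions, not any lab's published recipe):

```python
import random

def mix_training_data(real_pool, synthetic_pool, n_total, real_fraction=0.5, seed=0):
    """Sample a training set that guarantees a minimum share of real data.

    Anchoring every retraining run in human-authored examples is one known
    way to keep a model from collapsing onto its own output distribution.
    """
    rng = random.Random(seed)
    n_real = int(n_total * real_fraction)
    n_synth = n_total - n_real
    batch = rng.sample(real_pool, n_real) + rng.sample(synthetic_pool, n_synth)
    rng.shuffle(batch)  # interleave so batches aren't segregated by source
    return batch

# Hypothetical pools: 'human_*' stands in for curated data, 'model_*' for synthetic.
real = [f"human_{i}" for i in range(100)]
synth = [f"model_{i}" for i in range(100)]
mixed = mix_training_data(real, synth, n_total=20, real_fraction=0.5)
```

The hard (and expensive) part in practice is the curation step that decides what counts as "real" in the first place, which is exactly where poisoning raises costs.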

In any case, it will make data curation more expensive.