Comment by NitpickLawyer
19 hours ago
> gpt-5.2 did ~2x better than gpt-5.2-codex.. why?
Optimising a model for a particular task via fine-tuning (a form of post-training) can degrade its performance on other tasks. People want Codex to "generate code", "drive agents", and so on, so OpenAI fine-tuned for exactly that, likely at some cost to general-purpose performance.