Comment by airylizard
20 hours ago
Right, the 4.1 training checkpoint hasn’t moved. What has moved is the glue on top: decoder heuristics / safety filters / logit-bias rules that OpenAI can hot-swap without re-training the model. Those “serving-layer” tweaks are what stomped the obvious em-dash miss for short, clean prompts. So the April-14 weights are unchanged, but the pipeline that samples from those weights is stricter about “don’t output X” than it was on day one. By all means, keep trying to poke holes! I’ve got nothing to sell; just sharing insights and happy to stress-test them.
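To make the "serving-layer" point concrete, here's a toy sketch of the general logit-bias mechanism. This is not OpenAI's actual stack (their public API does expose a `logit_bias` parameter that maps token IDs to biases in [-100, 100], but the internal serving rules aren't documented); the vocabulary and functions below are purely illustrative:

```python
def apply_logit_bias(logits, bias):
    """Add per-token biases to raw logits before sampling.
    A large negative bias effectively bans a token without
    touching the model weights -- a hot-swappable serving rule."""
    return {tok: logit + bias.get(tok, 0.0) for tok, logit in logits.items()}

def greedy_pick(logits):
    """Greedy decoding: take the highest-logit token."""
    return max(logits, key=logits.get)

# Toy vocabulary: the frozen weights favor the em-dash token.
logits = {"—": 2.0, "-": 1.5, ",": 1.0}

# Serving-layer rule: suppress the em-dash with a strong negative bias.
banned = apply_logit_bias(logits, {"—": -100.0})

print(greedy_pick(logits))  # raw weights pick the em-dash
print(greedy_pick(banned))  # biased pipeline picks the hyphen instead
```

Same weights both times; only the sampling pipeline changed, which is why this kind of tweak can ship without a new checkpoint.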