Comment by jamincan
4 days ago
For what it's worth, when I ask ChatGPT 5, it gets the correct answer every time. The response varies, but the answer is always three.
Now try a different language. My take is that this is hard RL tuning to patch these "gotchas", since the underlying model can't do it on its own.
OpenAI is working on ChatGPT the application and ecosystem. They have transitioned from model building to software engineering: RL tuning and integrating various services to solve the problems the model can't handle on its own. Make it feel smart rather than be smart.
This means that as soon as you find a problem where you step outside the guided experience, you get the raw model again, which fails when it encounters these "gotchas".
Edit - Here's an example where we see a heavily RL-tuned experience in English, where a whole load of context is added on how to solve the problem, while the Swedish prompt for the same word fails.
https://imgur.com/a/SlD84Ih
You can tell it "be careful about the tokenizer issues" in Swedish and see how that changes the behavior.
The only thing this stupid test demonstrates is that LLM metacognitive skills are still lacking. Which shouldn't be a surprise to anyone. The only surprising thing is that they have metacognitive skills at all, despite the base model training doing very little to encourage their development.
LLMs were not designed to count letters [0], since they work with tokens rather than characters, so whatever trick they are now doing behind the scenes to handle this case can probably only handle this particular case. I wonder if it's now included in the system prompt. I asked ChatGPT and it said it now uses len(str) and other Python scripts to do the counting, but who knows what's actually happening behind the scenes.
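To illustrate the point: the counting itself is trivial once you have character-level access, which is exactly what a tool call gives the model and what its token view hides. A minimal sketch (the token split shown is purely illustrative, not the actual GPT tokenization):

```python
# The model sees tokens, not characters. "strawberry" might be split
# as something like ["str", "aw", "berry"] (hypothetical split), so
# individual letters are never directly visible to it.
# A one-line Python script has no such limitation:

def count_letter(word: str, letter: str) -> int:
    """Count occurrences of a letter in a word, case-insensitively."""
    return word.lower().count(letter.lower())

print(count_letter("strawberry", "r"))  # 3
```

If ChatGPT really is shelling out to something like this for counting questions, that would explain why the fix works in the tuned English path but doesn't generalize to the Swedish prompt.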
[0] https://arxiv.org/pdf/2502.16705
1 reply →