Comment by samtheprogram
2 days ago
This isn’t reasoning at all. It’s applying a well-known algorithm to a problem. It literally says “classic” in its response.
It is “reasoning” in the same way that a calculator or compiler is reasoning. But I checked the solution; it’s actually wrong, so it’s a moot point.
What will really bake your noodle is when you realize that just because the model's answer is wrong doesn't mean it didn't use reasoning to reach it.
Is your reasoning always perfect? No? Ever get partial credit on a test question in school? Yes? Well, maybe don't expect perfection from a model that didn't exist 5 years ago, that was considered impossible 10 years ago, and that would have gotten you burned as a witch 15 years ago.
Nobody claims that o3-pro is AGI, or even that it is going to lead up to AGI.
People say it all the time. There is a popular contingent that says we will hit AGI very soon. The lead author came from OpenAI.
https://ai-2027.com/