Comment by CamperBob2

2 days ago

What will really bake your noodle is when you realize that just because the model's answer is wrong doesn't mean it didn't use reasoning to reach it.

Is your reasoning always perfect? No? Ever get partial credit on a test question in school? Yes? Well, maybe don't expect perfection from a model that didn't exist 5 years ago, that was considered impossible 10 years ago, and that would have gotten you burned as a witch 15 years ago.

  • Nobody claims that o3-pro is AGI, or even that it is going to lead up to AGI.

People say it all the time. There is a popular contingent claiming that we will hit AGI very soon; the lead author of the scenario linked below came from OpenAI.

https://ai-2027.com/

  • Being able to manually write out hundreds of steps of the Towers of Hanoi problem is not a requirement for AGI, in much the same way that being able to manually multiply 50 digit numbers is not a requirement to be a successful mathematician.
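To make the analogy concrete: the reason "hundreds of steps" come up at all is that Towers of Hanoi move counts grow exponentially. A minimal sketch (my own illustrative code, not from the studies being discussed) shows that a solution for n disks takes 2^n − 1 moves, so even modest instances demand long, error-free transcription:

```python
def hanoi(n, src, aux, dst, moves):
    # Recursively solve Towers of Hanoi: move n disks from src to dst,
    # using aux as the spare peg, appending each move as (from, to).
    if n == 0:
        return
    hanoi(n - 1, src, dst, aux, moves)   # park n-1 disks on the spare peg
    moves.append((src, dst))             # move the largest disk
    hanoi(n - 1, aux, src, dst, moves)   # bring the n-1 disks back on top

moves = []
hanoi(8, "A", "B", "C", moves)
print(len(moves))  # 2**8 - 1 = 255 moves for just 8 disks
```

Doubling the disk count to 16 already requires 65,535 moves, which is the sense in which flawless manual transcription tests stamina rather than understanding.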