
Comment by johnfn

18 hours ago

It’s insane to me how yesterday someone posted an example of ChatGPT Pro one-shotting an Erdős problem after 90 minutes of thinking, and today you’re saying that AGI is a fairy tale.

It's not one-shot. Other people had attempted the same problem with the same AI and failed. You're confused about the terms, so you redefine them to make your version of the fairy tale real.

  • We already know that the same problem had been examined by many credible mathematicians and that none of them could solve it.

    Why are we expecting AGI to one-shot it? Can't we have an AGI that occasionally fails to solve some math problem? Is the expectation that AGI be all-knowing?

    By the way, I agree that AGI is not around the corner, nor am I arguing that any of the LLMs are "thinking machines". It's just that I agree the goalposts need to be set well.

    • People want to believe in magic, so they will find excuses to do so. Computers have been proving theorems for a long time now, but Isabelle/HOL didn't have the marketing budget of OpenAI, so people didn't care. Now that Sam Altman is doing the marketing, people all of a sudden care about proving theorems. (A tiny example of what machine-checked proof looks like follows below.)
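
      For the unconvinced, here is a minimal sketch of machine-checked theorem proving. It uses Lean rather than the Isabelle/HOL named above, purely because it is compact; the spirit is the same, and the proof checker, not a human, verifies every step:

        -- Two tiny machine-checked facts, verified by Lean's kernel.
        -- (Lean stands in here for Isabelle/HOL; both work the same way in spirit.)

        -- Commutativity of addition on the naturals, via a library lemma.
        example (a b : Nat) : a + b = b + a :=
          Nat.add_comm a b

        -- A small arithmetic fact discharged by the omega decision procedure.
        example (n : Nat) : n + n = 2 * n := by
          omega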
