
Comment by dcchambers

9 months ago

Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.

LLMs/GPTs are essentially "just" statistical models. At this point the argument becomes more about philosophy than science. What is "intelligence"?
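To make "statistical model" concrete: at their core these models assign a probability to each possible next token given the context, then sample from that distribution. A toy sketch in Python (the vocabulary and probabilities below are invented for illustration; a real LLM learns distributions over tens of thousands of tokens conditioned on long contexts):

    import random

    # Made-up bigram table standing in for a learned model:
    # P(next word | current word). Real LLMs condition on the
    # entire preceding context, not just the previous word.
    next_word_probs = {
        "the": {"cat": 0.5, "dog": 0.3, "argument": 0.2},
        "cat": {"sat": 0.7, "ran": 0.3},
        "dog": {"barked": 0.6, "sat": 0.4},
        "argument": {"continues": 1.0},
    }

    def sample_next(word):
        dist = next_word_probs.get(word)
        if not dist:
            return None  # no known continuation; stop
        words, probs = zip(*dist.items())
        return random.choices(words, weights=probs, k=1)[0]

    # Generate text by repeatedly sampling the next word.
    word = "the"
    sentence = [word]
    for _ in range(5):
        word = sample_next(word)
        if word is None:
            break
        sentence.append(word)
    print(" ".join(sentence))

Generation here is nothing but repeated conditional sampling; the philosophical question is whether doing that at enormous scale amounts to intelligence.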

If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself, then I guess we can call that intelligence.

How many people do you know who are capable of doing something truly novel? Definitely not me; I'm just an average PhD doing average research.

  • I'm a lowly high school diploma holder. I thought the point of a PhD was that you had done something novel (your thesis).

    Is that wrong?

    • My PhD thesis, just like 99% of other PhD theses, does not contain any "truly novel" ideas.

    • Just because something hasn't been done before doesn't mean it isn't the obvious-to-everyone next step in a long, slow march.

  • AI manufacturers aren't comparing their models against most people; they now say it's "smarter than 99% of people" or that it "performs tasks at a PhD level".

    Look, your argument ultimately reduces to moving the goalposts on what "novel" means, and you can position those goalposts anywhere you want depending on whether you want to push a pro-AI or anti-AI narrative. Is writing a paragraph that no one has ever written before "truly novel"? I can do that. AI can do that. Is inventing a new atomic element "truly novel"? I can't do that. Humans have done that. AI can't do that. See?

Isn't the human brain also "just" a big statistical model as far as we know? (very loosely speaking)