Comment by dcchambers

9 months ago

And in 30 years it will be another 30 years away.

LLMs are so incredibly useful and powerful, but they will NEVER be AGI. I actually wonder if the success of (and subsequent obsession with) LLMs is putting true AGI further out of reach. All these AI companies see is the $$$. When the biggest "AI Research Labs" like OpenAI shifted to productizing their LLM offerings, I think the writing was on the wall that they don't actually care about finding AGI.

Got it. So this is now a competition between...

1. Fusion power plants
2. AGI
3. Quantum computers
4. Commercially viable cultured meat

May the best "imminent" fantasy tech win!

People overestimate the short term and underestimate the long term.

People will keep improving LLMs, and by the time they reach AGI (in less than 30 years), you will say, "Well, these are no longer LLMs."

  • Will LLMs approach something that appears to be AGI? Maybe. Probably. They're already "better" than humans in many use cases.

    LLMs/GPTs are essentially "just" statistical models (a toy sketch of what that means follows at the end of this comment). At this point the argument becomes more about philosophy than science. What is "intelligence"?

    If an LLM can do something truly novel with no human prompting, with no directive other than something it has created for itself, then I guess we can call that intelligence.
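
    For what it's worth, here is roughly what "just a statistical model" means in its smallest form: a toy bigram model (my own illustrative sketch, not how a transformer actually works) that does nothing but estimate and sample P(next token | previous token). An LLM does the same kind of thing with a far richer conditional distribution over entire contexts.

      # Toy bigram "language model": pure conditional counts, nothing else.
      # (Illustrative only; a real LLM learns P(next token | context) with a
      # neural network, but generation is still sampling from a distribution.)
      import random
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat and the cat sat on the dog".split()

      counts = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          counts[prev][nxt] += 1  # estimate P(nxt | prev) by counting

      def next_token(prev):
          dist = counts[prev]
          return random.choices(list(dist), weights=dist.values())[0]

      tok, out = "the", ["the"]
      for _ in range(6):
          if not counts[tok]:  # dead end: token only seen at corpus end
              break
          tok = next_token(tok)
          out.append(tok)
      print(" ".join(out))  # e.g. "the cat sat on the mat and"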

    • Isn't the human brain also "just" a big statistical model as far as we know? (very loosely speaking)

  • What the hell is general intelligence anyway? People seem to think it means human-like intelligence, but I can't imagine we have any good reason to believe that our kind of intelligence constitutes all possible kinds of intelligence, which, going by the words, must be what "general" intelligence means.

    It seems like even if it's possible to achieve GI, artificial or otherwise, you'd never be able to know for sure that that's what you've done. It's not exactly "useful benchmark" material.

    • > What the hell is general intelligence anyway?

      OpenAI used to define it as "a highly autonomous system that outperforms humans at most economically valuable work."

      Now they use a Level 1-5 scale: https://briansolis.com/2024/08/ainsights-openai-defines-five...

      So we can say AGI is "AI that can do the work of Organizations":

      > These “Organizations” can manage and execute all functions of a business, surpassing traditional human-based operations in terms of efficiency and productivity. This stage represents the pinnacle of AI development, where AI can autonomously run complex organizational structures.


    • Given the way some people confidently assert that we will never create AGI, I am convinced the term essentially means "machine with a soul" to them. It reeks of religiosity.

      I guess if we exclude those, then it just means the computer is really good at doing the kinds of things humans do by thinking. Or maybe it's that the computer has to be better at it than humans, and merely being as good as the average human isn't enough (implying that average humans don't have natural general intelligence? Seems weird).


    • > you'd never be able to know for sure that that's what you've done.

      Words mean what they're defined to mean. Talking about "general intelligence" without a clear definition is just woo, muddy thinking that achieves nothing. A fundamental tenet of the scientific method is that only testable claims are meaningful claims.

  • Looking back at the CUDA, deep learning, and now LLM hype cycles, I would bet it'll be cycles of giant groundbreaking leaps followed by complete stagnation, rather than LLMs improving 3% per year for the next 30 years.

  • They'll get cheaper and less hardware-demanding, but the quality improvements keep getting smaller, sometimes hardly noticeable outside benchmarks.

  • What was the point of this comment? It's confrontational and doesn't add anything to the conversation. If you disagree, you could have just said that, or not commented at all.

    • There's been a complaint for several decades that "AI can never succeed": when, say, expert systems are developed from AI research and become capable of doing useful things, the naysayers say, "That's not AI, that's just expert systems."

      This is somewhat defensible, because what the non-AI-researcher means by AI (which may be AGI) is something more than expert systems by themselves can deliver. It is possible that "real AI" will be a combination of multiple approaches, but so far every reductionist approach (the claim that expert systems, say, are all it takes to be AI) has proven inadequate compared to expectations.

      The GP may have been riffing off of this "that's not AI" issue that goes way back.

    • The people who go around saying "LLMs aren't intelligent" while refusing to define exactly what they mean by intelligence (and hence not making a meaningful/testable claim) add nothing to the conversation.
