
Comment by JSDave

9 days ago

AGI is when it can do all intellectual work that can be done by humans. It can improve its own intelligence and create a feedback loop because it is as smart as the humans who created it.

No, that is ASI. No human can do all intellectual work themselves. You have millions of different human models based on roughly the same architecture to do that.

When you have a single model that can do all you require, you are looking at something that can run billions of copies of itself and cause an intelligence explosion or an apocalypse.

  • "Artificial general intelligence (AGI) is a type of artificial intelligence that matches or surpasses human capabilities across virtually all cognitive tasks."

    • This is a statement that I've always found circular and poorly defined, for the other reasons I've listed. Any technology that even gets close isn't AGI, as I said; it's ASI, because of duplication and training time.

      It is also a line of thinking that will bite us in the ass if humans aren't as general thinkers as we make ourselves out to be.

This has always been my personal definition of AGI. But the market and industry don't agree, so I've backed off on that and have more or less settled on "can do most of the knowledge work that a human can do."

Why the super-high bar? What's unsatisfying is this: aren't even the 'dumbest' humans still a general intelligence, one we're already nearly past, depending on how you squint and measure?

It feels like an arbitrary bar, perhaps set to make sure we aren't putting AIs above humans, even though they are most certainly superhuman on a rapidly growing number of tasks.