Comment by lostphilosopher

7 days ago

We don't really have a true test that means "if we pass this test we have AGI," but we have a variety of tests (like ARC) that we believe any true AGI would be able to pass. It's a "necessary but not sufficient" situation. This also ties directly to the challenge of defining what AGI really means. You see a lot of discussion of "moving the goal posts" around AGI, but as I see it we've never had goal posts; we've just got a bunch of lines we'd expect to cross before reaching them.

I don't think we actually even have a good definition of "This is what AGI is, and here are the stationary goal posts that, when these thresholds are met, then we will have AGI".

If you judged human intelligence by our AI standards, then would humans even pass as Natural General Intelligence? Human intelligence tests are constantly changing, being invalidated, and rerolled as well.

I maintain that today's modern LLMs would pass sufficiently for AGI, and are also very close to passing a Turing Test, if measured in 1950 when the test was proposed.

  • >I don't think we actually even have a good definition of "This is what AGI is, and here are the stationary goal posts that, when these thresholds are met, then we will have AGI".

    Not only do we not have that, I don't think it's possible to have it.

    Philosophers have known about this problem for centuries. Wittgenstein recognized that most concepts don't have precise definitions but instead behave more like family resemblances. When we look at a family we recognize that they share physical characteristics, even if there's no single characteristic shared by all of them. They don't need to unanimously share hair color, skin complexion, mannerisms, etc. in order to have a family resemblance.

    Outside of a few well-defined things in logic and mathematics, concepts operate in the same way. Intelligence isn't a well-defined concept, but that doesn't mean we can't talk about different types of human intelligence, non-human animal intelligence, or machine intelligence in terms of family resemblances.

    Benchmarks are useful tools for assessing relative progress on well-defined tasks. But the decision of what counts as AGI will always come down to fuzzy comparisons and qualitative judgments.

  • The current definition and goal of AGI is “Artificial intelligence good enough to replace every employee for cheaper” and much of the difficulty people have in defining it is cognitive dissonance about the goal.

    • I’d remove the “for cheaper” part? (And also, only necessary for the employees whose jobs are “cognitive tasks”, not ones that are based on their bodies. So like, doesn’t need to be able to lift boxes or have a nice smile.)

      If something would be better at every cognitive task than every human, if it ran a trillion times faster, I would consider that to be AGI even if it isn’t that useful at its actual speed.

  • Because an important part of being a Natural General Intelligence is having a body and interacting with the world. Data from Star Trek is a good example of an AGI.

  • The Turing test is not really that meaningful anymore, because you can always detect the AI by text and timing patterns rather than actual intelligence. In fact, the most reliable way to test for AI is probably to ask trivia questions on various niche topics; I don't think any human has as much breadth of general knowledge as current AIs.

    • > you can always detect the AI by text and timing patterns

      I see no reason why an AI couldn't be trained on human data to fake all of that.

      If no one has bothered so far, that's because pretty much all commercial applications of this would be illegal, or would at least lead to major reputational damage when exposed.


One of the very first slides of François’ presentation is about defining AGI. Do you have anything that opposes his synthesis of the two (50-year-old) takes on this definition?

I graduated with a degree in software engineering and I am bilingual (Bulgarian and English). Currently AI is better than me at everything except adding big numbers or writing code on really niche topics - for example, code golfing a Brainfuck interpreter or writing a Rubik's cube solver. I believe AGI has been here for at least a year now.

  • I suggest you try letting the AI think through race-condition scenarios in asynchronous programs; it is not that good at these abstract reasoning tasks.
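    For anyone unsure what kind of scenario is meant here, a minimal sketch of a classic lost-update race in Python (the function names and counts are purely illustrative, not from the comment): two threads do an unsynchronized read-modify-write on a shared counter, and the fix guards that critical section with a lock.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    """Read-modify-write with no synchronization: updates can be lost."""
    global counter
    for _ in range(n):
        tmp = counter   # read
        tmp += 1        # modify
        counter = tmp   # write -- another thread may have written in between

def safe_increment(n):
    """The same loop, but the read-modify-write is guarded by a lock."""
    global counter
    for _ in range(n):
        with lock:
            counter += 1

def run(worker, n, threads=2):
    """Reset the counter, run `threads` workers, and return the final value."""
    global counter
    counter = 0
    ts = [threading.Thread(target=worker, args=(n,)) for _ in range(threads)]
    for t in ts:
        t.start()
    for t in ts:
        t.join()
    return counter

if __name__ == "__main__":
    # The unsafe version may come up short; the safe version is always exact.
    print("unsafe:", run(unsafe_increment, 100_000))
    print("safe:  ", run(safe_increment, 100_000))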

  • Can the AI wash your dishes, fold your laundry, take out your trash, meet a friend for dinner, or do the other thousand things you might do in an average day when you're not interacting with text on a screen?

    You know, stuff that humans did way before there were computers and screens.

    • Yeah, I'm convinced that the biggest difference between the current generation of AIs and humans is that AIs don't have the range of tool use and interaction with the physical environment that humans do. And that's what's actually holding AGI back, not access to more data.
