Comment by MPSimmons
7 days ago
I don't think we actually even have a good definition of "This is what AGI is, and here are the stationary goal posts that, when these thresholds are met, then we will have AGI".
If you judged human intelligence by our AI standards, then would humans even pass as Natural General Intelligence? Human intelligence tests are constantly changing, being invalidated, and rerolled as well.
I maintain that today's LLMs would pass sufficiently for AGI, and are also very close to passing a Turing Test, if measured in 1950 when the test was proposed.
>I don't think we actually even have a good definition of "This is what AGI is, and here are the stationary goal posts that, when these thresholds are met, then we will have AGI".
Not only do we not have that, I don't think it's possible to have it.
Philosophers have known about this problem for centuries. Wittgenstein recognized that most concepts don't have precise definitions but instead behave more like family resemblances. When we look at a family we recognize that they share physical characteristics, even if there's no single characteristic shared by all of them. They don't need to unanimously share hair color, skin complexion, mannerisms, etc. in order to have a family resemblance.
Outside of a few well-defined things in logic and mathematics, concepts operate in the same way. Intelligence isn't a well-defined concept, but that doesn't mean we can't talk about different types of human intelligence, non-human animal intelligence, or machine intelligence in terms of family resemblances.
Benchmarks are useful tools for assessing relative progress on well-defined tasks. But the decision of what counts as AGI will always come down to fuzzy comparisons and qualitative judgments.
The current definition and goal of AGI is “Artificial intelligence good enough to replace every employee for cheaper” and much of the difficulty people have in defining it is cognitive dissonance about the goal.
Or this definition of AGI from OpenAI and Microsoft:
> [AGI is achieved when] AI systems that can generate at least $100 billion in profits.
https://techcrunch.com/2024/12/26/microsoft-and-openai-have-...
That is a truly absurd definition. And so revealing. Just perfect quant slop.
I’d remove the “for cheaper” part? (And also, only necessary for the employees whose jobs are “cognitive tasks”, not ones that are based on their bodies. So like, doesn’t need to be able to lift boxes or have a nice smile.)
If something, were it run a trillion times faster, would be better at every cognitive task than every human, I would consider that to be AGI even if it isn't that useful at its actual speed.
Because an important part of being a Natural General Intelligence is having a body and interacting with the world. Data from Star Trek is a good example of an AGI.
Given the actions of Data's brother, I think Data qualifies as a benevolent ASI.
The Turing test is not really that meaningful anymore, because you can always detect the AI by text and timing patterns rather than actual intelligence. In fact, the most reliable way to test for AI is probably to ask trivia questions on various niche topics; I don't think any human has as much breadth of general knowledge as current AIs.
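As a toy illustration of the timing side (the function name, typing speed, and threshold below are all made up, not any real detector): a human's reply latency has to scale with how much they read and typed, while a bot can return a long reply near-instantly.

```python
# Toy sketch of the "timing patterns" heuristic; all numbers are invented.
# Idea: humans need time to read and type, so reply latency should grow
# with reply length, while a bot can return a long reply almost instantly.

def looks_like_bot(reply_text: str, seconds_to_reply: float) -> bool:
    """Flag replies that arrive faster than a plausible human typing speed."""
    chars = len(reply_text)
    # Assume a fast human types ~7 chars/sec and needs ~2s to start reading.
    min_human_seconds = 2.0 + chars / 7.0
    return seconds_to_reply < min_human_seconds

print(looks_like_bot("Sure! Here are ten detailed points..." * 20, 1.5))  # True
print(looks_like_bot("lol no", 4.0))                                      # False
```

Of course, nothing stops a bot from simply sleeping past min_human_seconds before posting, so this only works against systems that don't bother to hide the signal.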
> you can always detect the AI by text and timing patterns
I see no reason why an AI couldn't be trained on human data to fake all of that.
If no one has bothered so far, that's because pretty much all commercial applications of this would be illegal, or at least lead to major reputational damage when exposed.
You may want to look at this: "A foundation model to predict and capture human cognition"
https://www.nature.com/articles/s41586-025-09215-4