Comment by octoberfranklin

10 hours ago

"Claude Code and Codex are essentially AGI at this point"

Okaaaaaaay....

Just comes down to your own view of what AGI is, as it's not particularly well defined.

While a bit 'time-machiney' - I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved. If someone wrote a definition of AGI 20 years ago, we would probably have met that.

We have certainly blasted past some science-fiction examples of AI like Agnes from The Twilight Zone, which 20 years ago looked a bit silly, and now looks like a remarkable prediction of LLMs.

By today's definition of AGI we haven't met it yet, but eventually it comes down to 'I know it when I see it' - the problem with that definition is that it is polluted by what people have already seen.

  • > most people would probably say AGI has been achieved

    Most people who took a look at a carefully crafted demo. I.e. the CEOs who keep pouring money down this hole.

    If you actually use it you'll realize it's a tool, and not a particularly dependable tool unless you want to code what amounts to the React tutorial.

  • > I think if you took an LLM of today and showed it to someone 20 years ago, most people would probably say AGI has been achieved.

    I’ve got to disagree with this. All past pop-culture AI was sentient and self-motivated; it was human-like in that it had its own goals and autonomy.

    Current AI is a transcript generator. It can do smart stuff but it has no goals; it just responds with text when you prompt it. It feels like magic, even compared to 4-5 years ago, but it doesn’t feel like what was classically understood as AI, certainly not by the public.

    Somewhere along the way, marketers changed AGI to mean “does predefined tasks with human-level accuracy” or the like. That is closer to the definition of a good function approximator (how appropriate) than to what people think (or thought) about when considering intelligence.

    • > Current AI is a transcript generator. It can do smart stuff but it has no goals

      That's probably not because of an inherent lack of capability, but because the companies that run AI products don't want to run autonomous intelligent systems like that.

  • > If someone wrote a definition of AGI 20 years ago, we would probably have met that.

    No, as long as people can do work that a robot cannot do, we don't have AGI. That was always, if not the definition, at least implied by the definition.

    I don't know why the meme that AGI is not well defined has had such success over the past few years.

    • Completely disagree - your definition (in my opinion) is more aligned with the concept of Artificial Super Intelligence.

      Surely the 'General Intelligence' definition has to be consistent between 'Artificial General Intelligence' and 'Human General Intelligence', and humans can be generally intelligent even if they can't solve calculus equations or protein-folding problems. My definition of general intelligence is much lower than most people's - I think a dog is probably generally intelligent, although obviously in a different way (dogs are better at learning how to run and catch a ball, and worse at programming Python).

      1 reply →

  • Charles Stross published Accelerando in 2005.

    The book is a collection of nine short stories telling the tale of three generations of a family before, during, and after a technological singularity.

I want to know what the "intelligence explosion" is; it sounds much cooler than AGI.

  • When AI gets so good that it can improve itself.

    • Actually, this has already happened in a very literal way. Back in 2022, Google DeepMind used an AI called AlphaTensor to "play" a game where the goal was to find a faster way to multiply matrices, the fundamental math that powers all AI.

      To understand how big this is, you have to look at the numbers:

      The Naive Method: This is what most people learn in school. To multiply two 4x4 matrices, you need 64 multiplications.

      The Human Record (1969): For over 50 years, the "gold standard" was Strassen’s algorithm, which used a clever trick to get it down to 49 multiplications.

      The AI Discovery (2022): AlphaTensor beat the human record by finding a way to do it in just 47 multiplications.

      The real "intelligence explosion" feedback loop happened even more recently with AlphaEvolve (2025). While the 2022 discovery only worked for specific "finite field" math (mostly used in cryptography), AlphaEvolve used Gemini to find a 48-multiplication shortcut that works over the standard complex numbers, and therefore over the real-valued arithmetic AI training actually uses.

      Because matrix multiplication accounts for the vast majority of the work an AI does, Google used these AI-discovered shortcuts to optimize the kernels in Gemini itself.

      It’s a literal cycle: the AI found a way to rewrite its own fundamental math to be more efficient, which then makes the next generation of AI faster and cheaper to build.
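
      For anyone who wants to see where those counts come from, here is a minimal Python sketch (mine, not DeepMind's code; the function names are made up). Strassen's trick multiplies two 2x2 block matrices with 7 products instead of 8, so recursing on a 4x4 matrix costs 7 * 7 = 49 scalar multiplications instead of 4^3 = 64:

        def naive_count(n):
            # Schoolbook method: n multiplications per entry,
            # n*n entries -> n**3 scalar multiplications total.
            return n ** 3

        def strassen_count(n):
            # Strassen (1969): each halving of the matrix size
            # costs a factor of 7 instead of 8.
            if n == 1:
                return 1
            return 7 * strassen_count(n // 2)

        print(naive_count(4))     # 64 - the naive method
        print(strassen_count(4))  # 49 - the 1969 human record
        # AlphaTensor (2022): 47, but only in mod-2 arithmetic.
        # AlphaEvolve (2025): 48, valid over complex numbers.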

      https://deepmind.google/blog/discovering-novel-algorithms-wi...

      https://www.reddit.com/r/singularity/comments/1knem3r/i_dont...

      1 reply →

I have noticed that Claude users seem to be about as intelligent as Claude itself, and wouldn't be able to surpass its output.

  • This made me laugh. Unfortunately, this is the world we live in. Most people who drive cars have no idea how they work, or how to fix them. And people who get on airplanes aren't able to flap their arms and fly.

    Which means that humans are reduced to a sort of uselessness / helplessness, using tools they don't understand.

    Overall, no one tells Uncle Bob that he doesn't deserve to fly home to Minnesota for Christmas because he didn't build the aircraft himself.

    But we all think it.