Comment by Vegenoid

8 days ago

I think we've actually had capable AIs for long enough now to see that this kind of exponential advance to AGI in 2 years is extremely unlikely. The AI we have today isn't radically different from the AI we had in 2023. They are much better at the thing they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors. They still fail at larger-scope, longer-term tasks in mostly the same way, and they are still much worse than humans at learning from small amounts of data. Despite their ability to write decent code, we haven't seen the signs of a runaway singularity as some thought was likely.

I see people saying that these kinds of things are happening behind closed doors, but I haven't seen any convincing evidence of it, and there is enormous propensity for AI speculation to run rampant.

> there are some new capabilities that are big, but they are still fundamentally next-token predictors

Anthropic recently released research showing that when Claude composes poetry, it doesn't simply predict token by token and "react" when it thinks it might need a rhyme, scanning its context for something appropriate; instead, it looks several tokens ahead and adjusts, in advance, for where it is likely to end up.

Anthropic also says this adds to evidence seen elsewhere that language models seem to sometimes "plan ahead".

Please check out the section "Planning in poems" here; it's pretty interesting!

https://transformer-circuits.pub/2025/attribution-graphs/bio...
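
For readers unfamiliar with the terminology, here is a minimal sketch of the distinction being discussed: pure next-token prediction commits to whatever single token looks best right now, while even one step of lookahead can prefer a locally less likely word because of where it leads. The toy bigram table and function names below are invented purely for illustration; this is not Anthropic's method or how any production model actually decodes.

```python
# Toy contrast between greedy next-token choice and a one-step lookahead.
# The "model" is a hand-written bigram table (an assumption for illustration).

BIGRAM = {
    "roses":   {"are": 1.0},
    "are":     {"red": 0.6, "a": 0.4},
    "red":     {"and": 0.5, "<end>": 0.5},
    "and":     {"violets": 1.0},
    "a":       {"flower": 1.0},
    "flower":  {"<end>": 1.0},
    "violets": {"are": 1.0},
}

def next_probs(token):
    """Toy 'model': return a next-token distribution for the current token."""
    return BIGRAM.get(token, {"<end>": 1.0})

def greedy_next(token):
    """Pure next-token prediction: commit to the single most likely next token."""
    dist = next_probs(token)
    return max(dist, key=dist.get)

def lookahead_next(token):
    """Peek one extra step: score each candidate by the joint probability of
    its best two-token continuation before committing."""
    dist = next_probs(token)
    best, best_score = None, -1.0
    for cand, p in dist.items():
        score = p * max(next_probs(cand).values())
        if score > best_score:
            best, best_score = cand, score
    return best

if __name__ == "__main__":
    print(greedy_next("are"))     # 'red' -> 0.6 beats 0.4 locally
    print(lookahead_next("are"))  # 'a'   -> 0.4 * 1.0 beats 0.6 * 0.5
```

In the toy table, greedy decoding after "are" picks "red", while the lookahead picks "a" because its continuation is more certain. The poetry result is only loosely analogous: Anthropic observed planning-like behavior inside the model's own activations, not bolted on at decode time.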

  • Isn't this just a form of next-token prediction? I.e., you'll keep your options open for a potential rhyme if you select words that have many associated rhyming pairs, and you'll further keep your options open if you focus on broad topics over niche ones.

    • Assuming the task remains just generating tokens, what sort of reasoning or planning would you say is the threshold before it's no longer "just a form of next-token prediction"?

    • I'm not sure this is a meaningful distinction: fundamentally, you can describe the world as a "next-token predictor". Just treat the world as a simulator with a time step of some quantum of time.

      That _probably_ won't capture everything, but for all practical purposes it's indistinguishable from reality (yes, yes, time is not some constant everywhere). (A rough sketch of this framing appears just after this thread.)

    • Yeah, I'd agree that for that model (certainly not AGI) it's just an extension/refinement of next token prediction.

      But when we get a big aggregate of all of these little rules and quirks and improvements and subsystems for triggering different behaviours and processes, isn't that all humans are?

      I don't think it'll happen for a long ass time, but I'm not one of those individuals who, for some reason, desperately want to believe that humans are special, that we're some magical thing that's unexplainable or can't be recreated.

    • That doesn't really explain it, because then you'd expect lots of nonsensical lines trying to make a sentence that fits the theme and rhymes at the same time.

    • recursive predestination. LLM's algorithms imply 'self-sabotage' in order to 'learn the strings' of 'the' origin.

  • LLMs do exactly the same thing humans do: we read the text, raise flags, and flags on flags, on the various topics the text reminds us of, positive and negative, and then start writing out a response that corresponds to those flags and likely attends to all of them. The planning ahead is just another flag that needs addressing, but it's learnt predictive behavior. Nothing much to see here; experience gives you the flags. It's like applying massive pressure until diamonds form.
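
To make the "world as a next-token predictor" framing in the thread above concrete, here is a rough sketch: a simulator that, given the state at time t, predicts the state one time quantum later. The physics (a dropped ball) and all the names are invented for illustration; the point is only that "predict the next step from the current one" is an extremely general description, which is why it says little on its own about intelligence.

```python
from dataclasses import dataclass

@dataclass
class State:
    t: float         # time in seconds
    height: float    # metres above the ground
    velocity: float  # metres per second, positive = upward

def step(state: State, dt: float = 0.01) -> State:
    """Predict the world's next 'token': the state one time quantum later."""
    g = 9.81  # gravitational acceleration, m/s^2
    return State(
        t=state.t + dt,
        height=max(0.0, state.height + state.velocity * dt),
        velocity=state.velocity - g * dt,
    )

if __name__ == "__main__":
    s = State(t=0.0, height=10.0, velocity=0.0)
    while s.height > 0.0:      # "autoregressively" roll the world forward
        s = step(s)
    print(f"ball hits the ground after roughly {s.t:.2f} s")
```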

METR [0] explicitly measures progress on long-term tasks; at the moment it's as steep a sigmoid as the rest of the progress curves, with no inflection point yet.

As others have pointed out in other threads, RLHF has taken training beyond pure next-token prediction, and modern models are modeling concepts [1].

[0] https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...

[1] https://www.anthropic.com/news/tracing-thoughts-language-mod...
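
As a back-of-the-envelope illustration of what the METR trend implies if it simply continued: METR reports the task length (at 50% success) that models can complete doubling roughly every 7 months; the one-hour starting horizon below is an assumption for illustration, not METR's exact figure.

```python
import math

DOUBLING_MONTHS = 7          # METR's reported doubling time (approximate)
current_horizon_hours = 1.0  # assumed current 50%-success horizon (illustrative)

targets = {
    "a full work day (8 h)": 8,
    "a work week (40 h)":    40,
    "a work month (~170 h)": 170,
}

for label, hours in targets.items():
    doublings = math.log2(hours / current_horizon_hours)
    months = doublings * DOUBLING_MONTHS
    print(f"{label}: ~{months:.0f} months (~{months / 12:.1f} years) away")
```

Under this straight-line extrapolation, week- and month-long tasks still land several years out, which is roughly the point made in the replies below.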

  • At the risk of coming off like a dolt and being super incorrect: I don't put much stock in these metrics when it comes to predicting AGI. Even if the trend of "the length of task an AI can reliably do doubles every 7 months" continues, that, as they themselves note, still leaves us years away from AI that can complete tasks that take humans weeks or months. I'm skeptical that the doubling trend will continue into that timescale; I think there is a qualitative difference between tasks that take weeks or months and tasks that take minutes or hours, a difference that is not reflected by simple quantity. I think many people responsible for hiring engineers are keenly aware of this distinction, because of their experience attempting to choose good engineers based on how they perform in task-driven technical interviews that last only hours.

    Intelligence as humans have it seems like a "know it when you see it" thing to me, and metrics that attempt to define and compare it will always be looking at only a narrow slice of the whole picture. To put it simply, the gut feeling I get, based on my interactions with current AI and how it has developed over the past couple of years, is that AI is missing key elements of general intelligence at its core. While there's lots more room for its current approaches to get better, I think something different will be needed for AGI.

    I'm not an expert, just a human.

    • There is definitely something qualitatively different about weeks/months long tasks.

      It reminds me of the difference between a fresh college graduate and an engineer with 10 years of experience. There are many really smart and talented college graduates.

      But, while I am struggling to articulate exactly why, I know that when I was a fresh graduate, despite my talent and ambition, I would have failed miserably at delivering some of the projects that I now routinely deliver over time periods of ~1.5 years.

      I think LLMs are really good at emulating the kinds of things I might say would make someone successful at this, if I were to write them down in a couple of paragraphs, or an article, or maybe even a book.

      But... knowing those things as written by others just would not quite cut it. Learning at those time scales is just very different from what we're good at training LLMs to do.

      A college graduate is in many ways infinitely more capable than an LLM. Yet there are a great many tasks that you just can't give an intern if you want them to be successful.

      There are at least half a dozen different 1000-page manuals that one must reference to take even a bare-bones approach to my job. And there are dozens of different constituents, and many thousands of design parameters I must adhere to. Fundamentally, these things are often in conflict, and it is my job to sort out the conflicts and come up with the best compromise. It's... really hard to do. Knowing what to bend so that other requirements may be kept rock solid, who to negotiate with for the different compromises needed, which fights to fight, and what a "good" design looks like among alternatives that all seem to mostly meet the requirements. It's a very complicated chess game that is hopeless to brute-force; you must see the patterns along the way that will point you, like signposts, toward a good position in the endgame.

      The way we currently train LLM's will not get us there.

      Until an LLM can take things in its context window, assess them for importance, dismiss what doesn't work or turns out to be wrong, completely dismiss everything it knows when the right new paradigm comes up, and then permanently alter its decision making by incorporating all of that information in an intelligent way, it just won't be a replacement for a human being.

    • > I think there is a qualitative difference between tasks that take weeks or months and tasks that take minutes or hours, a difference that is not reflected by simple quantity.

      I'd label that difference as long-term planning plus executive function, and wherever that overlaps with or includes delegation.

      Most long-term projects are not done by a single human and so delegation almost always plays a big part. To delegate, tasks must be broken down in useful ways. To break down tasks a holistic model of the goal is needed where compartmentalization of components can be identified.

      I think a lot of those individual elements are within reach of current model architectures, but they are likely out of distribution. How many Gantt charts, project plans, and project-manager meetings are in the pretraining datasets? My guess is few; they're rarely published internal artifacts. Books and articles touch on the concepts, but I think the models learn best from the raw data; they can probably tell you very well all of the steps of good project management, because the descriptions are all over the place. The actual doing of it is farther toward the tail of the distribution.

  • The METR graph proposes a 6-year trend based largely on 4 data points before 2024. I get that it is hard to do analyses since we're in uncharted territory, and I personally find a lot of the AI stuff impressive, but this just doesn't strike me as great statistics.

    • I agree that we don't have any good statistical models for this. If AI development were that predictable we'd likely already be past a singularity of some sort or in a very long winter just by reverse-engineering what makes the statistical model tick.

> we haven't seen the signs of a runaway singularity as some thought was likely.

The signs are not there, but while we may not be on an exponential curve (which would be difficult to see), we are definitely on a steep upward one, which may get steeper or may fizzle out if LLMs can only reach human-level 'intelligence' but not surpass it. The original article was a fun read, though, and 360,000 words shorter than my very similar fiction novel :-)

  • LLMs don't have any sort of intelligence at present; they have a large corpus of data and can produce modified copies of it.

    • While certainly not human-level intelligence, I don't see how you could say they don't have any sort of it. There's clearly generalization there. What would you say is the threshold?

    • Agree, the "intelligence" part is definitely the missing link in all this. However, humans are smart cookies and can see there's a gap, so I expect someone (not necessarily a major player) will eventually figure "it" out.

  • .. I would however add that the ending of my novel was far more exciting.

False. We went from ~0% on SWE-bench to 63%. That's a huge increase in capability in 2 years.

It's like saying that a baby who can take a few steps and an adult both have the capability of "walking". It's just wrong.

> They are much better at the thing they are good at, and there are some new capabilities that are big, but they are still fundamentally next-token predictors.

I don't really get this. Are you saying autoregressive LLMs won't qualify as AGI, by definition? What about diffusion models, like Mercury? Does it really matter how inference is done if the result is the same?

  • > Are you saying autoregressive LLMs won't qualify as AGI, by definition?

    No, I am speculating that they will not reach capabilities that qualify them as AGI.

    • They will, we just need meatspace people to become dumber and more predictable. Making huge strides on that front, actually. (In no small part due to LLMs themselves, yeah.)

Isn't the brain kind of just a predictor as well, just a more complicated one? Instead of predicting and emitting tokens, we're predicting future outcomes and emitting muscle movements. Which is obviously different in a sense but I don't think you can write off the entire paradigm as a dead end just because the medium is different.

Disagree. We know it _can_ learn out-of-distribution capabilities based on similarities to other distributions. Like the TikZ Unicorn [1] (which was not in the training data anywhere) or my code (which has variable names and methods/ideas probably not seen 1:1 in training).

IMO this out-of-distribution learning is all we need to scale to AGI. Sure, there are still issues: it doesn't always know which distribution to pick from. Neither do we, hence car crashes.

[1]: https://arxiv.org/pdf/2303.12712 or on YT https://www.youtube.com/watch?v=qbIk7-JPB2c