It has significantly accelerated to 4 months since the beginning of 2025, which puts 1 week within reach if things stay on trend. But yes, 7 months is the more reliable long-term trend.
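For concreteness, here's the extrapolation arithmetic as a quick sketch. The ~2-hour current horizon is an illustrative assumption, not a measured value; only the 4-month and 7-month doubling times come from the discussion here:

    from math import log2

    def months_until(target_hours: float, current_hours: float,
                     doubling_months: float) -> float:
        """Months until the task horizon reaches target_hours, assuming
        clean exponential growth with a fixed doubling time."""
        return doubling_months * log2(target_hours / current_hours)

    # Illustrative numbers: a ~2-hour horizon today is an assumption,
    # not a measurement; 4 and 7 months are the doubling times above.
    current = 2.0       # hours
    one_week = 40.0     # one 40-hour work week

    print(months_until(one_week, current, doubling_months=4))  # ~17 months
    print(months_until(one_week, current, doubling_months=7))  # ~30 months

So under the faster doubling time, a 1-week horizon lands within roughly a year and a half of the assumed starting point; under the long-term trend, closer to two and a half years.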
Can we attribute the acceleration to something specific that might not actually sustain the growth? For example, agentic coding and reasoning models seem to have made a huge leap in abilities, but that wouldn't translate into ongoing exponential growth.
There's a fair amount of uncertainty on this point. In general it's unclear when or whether things will plateau (although, again, there are indications that the trend is accelerating, not decelerating).
That being said, if by "agentic coding" you are implying that the leap in capabilities is due to novel agentic frameworks/scaffolding that appeared in 2025, I believe you are confusing cause and effect.
In particular, the agentic frameworks and scaffolding are by and large not responsible for the jump in capabilities. It is rather that the underlying models have improved sufficiently such that these frameworks and scaffolding work. None of the frameworks and scaffolding approaches of 2025 are new. All of them had been tried as early as 2023 (and indeed most of them had been tried in 2020 when GPT-3 came out). It's just that 2023-era models such as GPT-4 were far too weak to support them. Only in 2025 have models become sufficiently powerful to support these workflows.
Hence agentic frameworks and scaffolding are symptoms of ongoing exponential growth, not one-time boosts to it.
Likewise, reasoning models do not seem to be a one-time boost. In particular, reasoning models (or more accurately, RLVR: reinforcement learning with verifiable rewards) appear to be an ongoing source of new pretraining data, in that the reasoning traces models produce during RLVR serve as pretraining data for the next generation of models.
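A toy sketch of that flywheel, to make the claimed mechanism concrete. Everything below (Model, Trace, train_rlvr, pretrain, the fake verifier) is a hypothetical stand-in, not anyone's actual training pipeline:

    from dataclasses import dataclass

    @dataclass
    class Model:
        corpus_size: int  # crude proxy for "how much this model was trained on"

    @dataclass
    class Trace:
        text: str
        verified_correct: bool

    def train_rlvr(model: Model, tasks: list[str]) -> list[Trace]:
        """RLVR stand-in: the model attempts tasks with checkable answers
        (math, code); a verifier marks each reasoning trace correct or not.
        Here the verifier is faked by marking every other trace correct."""
        return [Trace(f"reasoning for {t}", verified_correct=(i % 2 == 0))
                for i, t in enumerate(tasks)]

    def pretrain(corpus: list[str]) -> Model:
        return Model(corpus_size=len(corpus))

    corpus = ["ordinary web text"] * 1000
    model = pretrain(corpus)
    for generation in range(3):
        traces = train_rlvr(model, tasks=[f"task-{i}" for i in range(100)])
        # The flywheel step: fold only verified-correct traces back into
        # the corpus, so each generation pretrains on data the last one made.
        corpus += [t.text for t in traces if t.verified_correct]
        model = pretrain(corpus)
        print(f"gen {generation}: corpus grew to {model.corpus_size}")

The point of the loop is that the data source compounds across generations rather than being consumed once, which is why RLVR looks like a sustained driver rather than a one-time bump.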
I remain uncertain, but I think there is a very real chance (>= 50%) that we are on an exponential curve that doesn't top out anytime soon (which gets really crazy really fast). If you want to do something about it, whether that's stopping the curve, flattening the curve, or preparing yourself for the curve, you'd better do it now.
Predictions over historical data in a landscape with fragile priors don't seem like a strong metric to me (a useful approximation at best).