It's stopped being cost-effective. Another order of magnitude of data centers? Not happening.
The business question is: what if AI works about as well as it does now for the next decade or so? No worse, maybe a little better in spots. What does the industry look like? Nvidia and TSMC are telling us that price/performance isn't improving through at least 2030. Hardware is not going to save us in the near term. Major improvement has to come from better approaches.
Sutskever: "I think stalling out will look like…it will all look very similar among all the different companies. It could be something like this. I’m not sure because I think even with stalling out, I think these companies could make a stupendous revenue. Maybe not profits because they will need to work hard to differentiate each other from themselves, but revenue definitely."
Somebody didn't get the memo that the age of free money at zero interest rates is over.
The "age of research" thing reminds me too much of mid-1980s AI at Stanford, when everybody was stuck, but they weren't willing to admit it. They were hoping, against hope, that someone would come up with a breakthrough that would make it work before the house of cards fell apart.
Except this time everything costs many orders of magnitude more to research. It's not like Sutskever is proposing that everybody should go back to academia and quietly try to come up with a new idea to get things unstuck. They want to spend SSI's market cap of $32 billion on some vague ideas involving "generalization". Timescale? "5 to 20 years".
This is a strange way to do corporate R&D when you're kind of stuck. Lots of small and medium-sized projects seem more promising, along the lines of Google X. The discussion here seems to lean in the direction of one big bet.
You have to admire them for thinking big. And even if the whole thing goes bust, they probably get to keep the house and the really nice microphone holder.
The ideas likely aren't vague at all given who is speaking. I'd bet they're extremely specific. Just not transparently shared with the public because it's intellectual property.
What kind of ideas would be intellectual property that was not shared? Isn't every part of LLMs, except the order of processes, publicly known? Is there some magic algorithm previously unrevealed and held secret by a cabal of insiders?
A difference from mid-1980s AI is that the hardware is way more capable now, so even flawed algorithms can do economically significant stuff like Claude Code. Recent headline: "Anthropic projects as much as $26 billion in annualized revenue in 2026". With that sort of revenue you'd expect some significant spend on R&D.
> "Anthropic projects as much as $26 billion in annualized revenue in 2026".
Anthropic projects a lot. It's hard to get actuals from Anthropic.[1] They're privately held, so they don't have to report actuals publicly. [1] says "Anthropic has, through July 2025, made around $1.5 billion in revenue." $26 billion for 2026 seems unlikely.
This is revenue, not profit.
[1] https://www.wheresyoured.at/howmuchmoney/
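A back-of-the-envelope check of that comparison (my assumption: the "$1.5 billion through July 2025" figure from the linked article covers roughly the first seven months of 2025):

```python
# Rough annualization check. Assumption (not stated in the article):
# the $1.5B "through July 2025" figure covers about 7 months of revenue.
revenue_jan_to_jul = 1.5e9
months = 7

annualized_2025 = revenue_jan_to_jul / months * 12   # implied 2025 run rate
projected_2026 = 26e9                                # the headline projection
growth_needed = projected_2026 / annualized_2025     # implied year-over-year growth

print(f"Implied 2025 run rate: ${annualized_2025 / 1e9:.1f}B")
print(f"Growth needed to hit $26B: {growth_needed:.1f}x")
```

Under that assumption, hitting the projection would require roughly a tenfold jump in run rate, which is why the figure looks optimistic.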
The translation is that SSI says that SSI's strategy is the way forward, so could investors please stop giving OpenAI money and give SSI the money instead. SSI has not shown anything yet, nor does SSI intend to show anything until they have created an actual Machine God, but SSI says they can pull it off, so it's all good to go ahead and wire the GDP of Norway directly to Ilya.
If we take AGI as a certainty, i.e., we think we can achieve AGI using silicon, then Ilya is one of the best bets you can take if you are looking to invest in this space. He has a history, and he's motivated to continue working on this problem.
If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space.
This hinges on his company achieving AGI while he's still alive. He's 38 years old. He has about 4 decades to deliver AGI in his lifetime. When he dies, there is no guarantee whoever takes over will share his values.
"If you think that AGI is not possible to achieve, then you probably wouldn't be giving anyone money in this space." If you think other people think AGI is possible, you sell them shovels and ready yourself for a shovel market dip in the near future. Strike while the iron is hot.
It’s a snake oil salesman’s world.
Are you asking whether the whole podcast can be boiled down to that translation, or whether you can infer/translate that from the title?
If the former, no. If the latter, sure, approximately.
Not really, but there is a finite amount of data to train models on. I found it rather interesting to hear him talk about how Gemini has been better at getting results out of the data than its competition, and how this is the first insight into a new way of training models on the same data to get different results.
I think the title is an interesting thing, because the scaling isn't about compute. At least as I understand it, what they're running out of is data, and one of the ways they deal with this, or may deal with this, is to have LLMs running concurrently and in competition. So you'll have thousands of models competing against each other to solve challenges through different approaches. Which to me would suggest that the need for hardware scaling isn't about to stop.
I'll be convinced LLMs are a reasonable approach to AI when an LLM can give reasonable answers after being trained on approximately the same books and classes I had in school, up through completing my college education.
I'll be convinced cars are a reasonable approach to transportation when it can take me as far as a horse can on a bale of hay.
That is such a beautiful analogy that now I will read your other comments.
Why do you think this standard you're applying is reasonable or meaningful?
For the same reason anyone would: if an AI can reason at a human level after having been educated in a manner similar to a human, then it is likely that we [the educators] have captured something akin to human intelligence.
The translation to me is: this cow has run out of milk. Now we actually need to deliver value, or the party stops.