Comment by noch
1 month ago
> You seem to be assuming that the rapid progress in AI will suddenly stop.
> I think if you look at the history of compute, that is ridiculous. Making the models bigger or work more is making them smarter.
It's better to talk about actual numbers to characterise progress and measure scaling (a concrete sketch of the curve in question follows the quotes below):
" By scaling I usually mean the specific empirical curve from the 2020 OAI paper. To stay on this curve requires large increases in training data of equivalent quality to what was used to derive the scaling relationships. "[^2]
"I predicted last summer: 70% chance we fall off the LLM scaling curve because of data limits, in the next step beyond GPT4.
[…]
I would say the most plausible reason is because in order to get, say, another 10x in training data, people have started to resort either to synthetic data, so training data that's actually made up by models, or to lower quality data."[^0]
“There were extraordinary returns over the last three or four years as the Scaling Laws were getting going,” Dr. Hassabis said. “But we are no longer getting the same progress.”[^1]
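For concreteness, the curve referenced above is the power-law fit from Kaplan et al., "Scaling Laws for Neural Language Models" (2020). A minimal Python sketch using the paper's published constants; the constants are approximate fits, and the operating points below are hypothetical toy values, not real training runs:

```python
# Kaplan et al. (2020) fit: loss L(N, D) as a power law in parameter count N
# and dataset size D (tokens). Constants are the paper's published fits;
# treat them, and the toy operating points below, as rough approximations.
ALPHA_N = 0.076   # parameter-count exponent
ALPHA_D = 0.095   # dataset-size exponent
N_C = 8.8e13      # critical parameter count
D_C = 5.4e13      # critical dataset size, in tokens

def loss(n_params: float, n_tokens: float) -> float:
    """Predicted cross-entropy loss for a model of n_params trained on n_tokens."""
    return ((N_C / n_params) ** (ALPHA_N / ALPHA_D) + D_C / n_tokens) ** ALPHA_D

# "Staying on the curve": scaling parameters without scaling data buys little,
# because the data term comes to dominate the predicted loss.
print(loss(1e11, 1e11))   # 100B params, 100B tokens (data-limited toy point)
print(loss(1e12, 1e11))   # 10x the parameters, same data: almost no gain
print(loss(1e12, 1e12))   # 10x the parameters AND 10x the data: real gain
```

This is why "another 10x in training data" is the binding constraint in the quotes above: on this fit, parameter scaling alone stops paying off once the data term dominates.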
---
[^0]: https://x.com/hsu_steve/status/1868027803868045529
o1 proved that synthetic data and inference-time compute are a new ramp. There will be more challenges and more innovations. There is a lot of room left in hardware, software, model training, and model architecture.
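On "inference-time compute is a new ramp": the exact recipe behind o1 is not public, but a generic way to trade inference-time compute for accuracy is self-consistency, i.e. sample several answers and take a majority vote. A toy sketch; the `sample_answer` stub is hypothetical, standing in for one stochastic model call:

```python
import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stub for one stochastic model call (not a real API)."""
    # Toy distribution: the correct answer "42" comes up 60% of the time.
    return "42" if random.random() < 0.6 else random.choice(["41", "43"])

def self_consistency(question: str, n_samples: int) -> str:
    """Spend more inference-time compute: sample n times, majority-vote."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

# More samples -> the majority answer converges on the modal (here, correct) one.
for n in (1, 5, 25):
    wins = sum(self_consistency("toy question", n) == "42" for _ in range(1000))
    print(f"n={n:>2}: majority answer correct in {wins / 10:.1f}% of trials")
```

The point of the sketch is only that accuracy can be bought with more samples at inference time instead of more training data, which is a different axis from the pretraining scaling curve discussed above.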
> There is a lot of room left in hardware, software, model training, and model architecture.
Can you quantify this? And will you make a firm prediction, with approximate numbers and costs attached?
It's not realistic to make firm quantified predictions any more specific than this: we will likely see somewhere between a 3× and a 10,000× improvement in the efficiency, IQ, or speed of LLM reasoning over the next 5 years.
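Taking that range at face value, here is what it implies as a compound annual rate; this is plain arithmetic on the two quoted endpoints, with no further assumptions:

```python
# Implied annual multiplier m for a total improvement T over 5 years: m = T**(1/5).
for total in (3, 10_000):
    annual = total ** (1 / 5)
    print(f"{total:>6}x over 5 years -> ~{annual:.2f}x per year")
# Prints ~1.25x/year for the low end and ~6.31x/year for the high end,
# i.e. the range spans roughly a factor of 3000 in total outcome.
```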