Comment by asadotzler
7 months ago
The improvements have less to do with scaling than with adding new techniques like better fine-tuning and reinforcement learning. The infinite scaling we were promised, the kind that only required more content and more compute to reach god tier, has indeed hit a wall.
I probably wasn't paying enough attention, but I don't remember that being the dominant claim you're suggesting it was. Infinite scaling?
People were originally very surprised that you could get so much functionality just by pumping in more data and adding more parameters to models. What made OpenAI initially so successful was that they were the first company willing to make big bets on these huge training runs.
After their success, I definitely saw a ton of blog posts and general "AI chatter" claiming that to get to AGI, all you really needed to do (obviously I'm simplifying things a bit here) was get more data and add more parameters, more "experts", etc. Heck, OpenAI had to scale back its pronouncements (GPT-5 essentially became GPT-4.5) when they found they weren't getting the performance/functionality advances they expected after massively scaling up their model.