Comment by espadrine
3 hours ago
Indeed. A mouse that runs through a maze may be right to say that it is constantly hitting a wall, yet it makes constant progress.
An example is citing Mr Sutskever's interview this way:
> in my 2022 “Deep learning is hitting a wall” evaluation of LLMs, which explicitly argued that the Kaplan scaling laws would eventually reach a point of diminishing returns (as Sutskever just did)
which is misleading, since Sutskever said scaling did not hit a wall in 2022[0]:
> Up until 2020, from 2012 to 2020, it was the age of research. Now, from 2020 to 2025, it was the age of scaling
The larger point that Mr Marcus makes, though, is that the maze has no exit.
> there are many reasons to doubt that LLMs will ever deliver the rewards that many people expected.
That is a claim most scientists would disagree with. In fact, the ongoing progress on LLMs has already accumulated tremendous utility, which may by itself justify the investment.
[0]: https://garymarcus.substack.com/p/a-trillion-dollars-is-a-te...