Comment by philipwhiuk
5 hours ago
> When it comes to machine learning, research has consistently shown, that pretty much the only thing that matters is scaling.
Yes, indeed, that is why all we have done since the 90s is scale up the 'expert systems' we invented ...
That's such an ahistorical take it's crazy.
* 1966: failure of machine translation
* 1969: criticism of perceptrons (early, single-layer artificial neural networks)
* 1971–75: DARPA's frustration with the Speech Understanding Research program at Carnegie Mellon University
* 1973: large decrease in AI research in the United Kingdom in response to the Lighthill report
* 1973–74: DARPA's cutbacks to academic AI research in general
* 1987: collapse of the LISP machine market
* 1988: cancellation of new spending on AI by the Strategic Computing Initiative
* 1990s: many expert systems were abandoned
* 1990s: end of the Fifth Generation computer project's original goals
Time and time again, we have seen that each academic breakthrough begets a degree of progress, amplified by the application of hardware and money, but ultimately proves to be only a step towards AGI, ending with the realisation that there's a missing cognitive ability that can't be overcome by absurd amounts of compute.
LLMs are not the final step.
Well, expert systems aren’t machine learning; they’re symbolic. You mention perceptrons, but that timeline is evidence for the power of scaling, not against it: neural networks didn’t start to really work until we built giant computers in the ~90s, and they have been revolutionizing the field ever since.