
Comment by dismalaf

7 months ago

The actual LLM part isn't much better than a year ago. What's better is that they've added additional logic and made it possible to combine traditional, expert-system-style AI with access to the internet, augmenting LLMs so that they're actually useful.

This is an improvement for sure, but LLMs themselves are definitely hitting a wall. It was predicted that scaling alone would allow them to reach AGI level.

> It was predicted that scaling alone would allow them to reach AGI level.

This is a genuine attempt to inform myself. Could you point me to those sorts of claims from experts at the top?

  • There were definitely people "at the top" who were essentially arguing that more scale would get you to AGI - Ilya Sutskever of OpenAI comes to mind (e.g. "next-token prediction is enough for AGI").

    There were definitely many other prominent researchers who vehemently disagreed, e.g. Yann LeCun. But it's very hard for a layperson (or, for that matter, another expert) to determine who is or would be "right" in this situation - most of these people have strong personalities, to put it mildly, and they often have vested interests in pushing their preferred approach and view of how AI does/should work.