Comment by ein0p

2 years ago

I am in the field. The consensus is made up by a few loudmouths. No serious front line researcher I know believes we’re anywhere near AGI, or will be in the foreseeable future.

So the researchers at DeepMind, OpenAI, Anthropic, etc., are not "serious front line researchers"? Seems like a claim that is trivially falsified by just looking at what the staff at leading orgs believe.

  • Apparently not. Or maybe they are heavily incentivized by the hype cycle. I'll repeat one more time: none of the currently known approaches are going to get us to AGI. Some may end up being useful for it, but large chunks of what we think is needed (cognition, a world model, the ability to learn concepts from massive amounts of multimodal, primarily visual, and almost entirely unlabeled input) are currently either nascent or missing entirely. Yann LeCun wrote a paper about this a couple of years ago; you should read it: https://openreview.net/pdf?id=BZ5a1r-kVsf. The state of the art has not changed since then.

    • I hope you have some advance predictions about what capabilities the current paradigm would and would not successfully generate.

      Separately, it's very clear that LLMs have "world models" in most useful senses of the term. Ex: https://www.lesswrong.com/posts/nmxzr2zsjNtjaHh7x/actually-o...

      I don't give much credit to the claim that it's impossible for current approaches to get us to any specific type or level of capability. We're doing program search over a very wide space of programs; what that can result in is an empirical question about both the space of possible programs and the training procedure (including the data distribution). Unfortunately, we don't have a good way of making those predictions in advance, other than "try it and find out".

      3 replies →

    • LeCun has his own interests at heart, works for one of the most soulless corporations I know of, and devotes a significant portion of every paper he writes to citing himself.

      He is far from the best person to follow on this.

      2 replies →