Comment by xanderlewis
1 year ago
> We don't fully understand why current LLMs are bad at these tasks.
Rather than asking why LLMs can’t do these tasks, maybe one should ask why we’d expect them to be able to do so in the first place? Do we fully understand why, for example, a cat can’t predict cellular automata? What would such an explanation look like?
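(For concreteness, this is the kind of prediction task in question: given one row of an elementary cellular automaton, produce the next row. The sketch below uses Rule 110 purely as an illustrative choice; the rule and setup aren’t specified in the original discussion.)

```python
# Minimal sketch: one step of a 1D elementary cellular automaton.
# Rule 110 is an arbitrary example of the deterministic, rule-based
# prediction task being discussed.

def step(row, rule=110):
    """Compute the next row (wrap-around boundary)."""
    n = len(row)
    out = []
    for i in range(n):
        left, centre, right = row[(i - 1) % n], row[i], row[(i + 1) % n]
        neighbourhood = (left << 2) | (centre << 1) | right  # value 0..7
        out.append((rule >> neighbourhood) & 1)              # look up that bit of the rule number
    return out

row = [0, 0, 0, 1, 0, 0, 0, 1]
for _ in range(4):
    print(row)
    row = step(row)
```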
I know there are some who will want to immediately jump in with scathing disagreement, but so far I’ve yet to see any solid evidence of LLMs being capable of reasoning. They can certainly do surprising and impressive things, but the kinds of tasks you’re talking about require understanding, which, whilst obviously a very thorny thing to try to define, doesn’t seem to have much to do with how LLMs operate.
I don’t think we should be at all surprised that super-advanced autocorrect can’t exhibit intelligence, and we should spend our time building better systems rather than wondering why what we have now doesn’t work. It’ll be obvious a few years (or perhaps decades) from now that we just had totally the wrong paradigm. It’s frankly bonkers to think you’re ever going to get a pure LLM to do these kinds of things with any degree of reliability just by feeding it yet more data or by ‘prompting it better’.