Comment by GodelNumbering

8 days ago

Indeed. One could argue that LLMs will keep on improving, and they would be correct. But they would not improve in ways that make them a good independent agent that is safe for the real world. Richard Sutton got a lot of disagreeing comments when he said on the Dwarkesh Patel podcast that LLMs are not bitter-lesson (https://en.wikipedia.org/wiki/Bitter_lesson) pilled. I believe he is right. His argument is that any technique that relies on human-generated data is bound to have limitations and issues that get harder and harder to maintain/scale over time (as opposed to bitter-lesson-pilled approaches, which learn truly firsthand from feedback).

I disagree with Sutton that human-generated data is the core issue. We humans are trained on it, and we don't run into such issues.

I expect the problem is more structural, rooted in how LLMs, and other ML approaches, actually work. Disembodied algorithms that try to break all knowledge down to a complex web of probabilities, and then predict based only on that quantified data, seem hugely limiting and at odds with how human intelligence appears to work.

  • Sutton actually argues that we do not train on data, we train on experiences. We try things, see what works when and where, and formulate views based on that. But I agree with your later point: training this way is hugely limiting, a limit not faced by humans.

[flagged]

  • Someone arguing that LLMs will keep improving may be putting too much weight on a trend continuing, but that wouldn't make them a gullible sucker.

    I'd argue that LLMs have gotten noticeably better at certain tasks every 6-12 months for the last few years. The idea that we are at the exact point where that trend stops and they get no better seems harder to believe.

    • One recent link on HN said that they double in quality every 7 months. (Kind of like Moore's Law.) I wouldn't expect that to go on forever! I will admit that AI image generators aren't putting six fingers on hands anymore, and AI code generation has suddenly gotten a lot better for me since I got access to Claude.
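
      To put that claimed rate in perspective, here's a back-of-the-envelope sketch of how it would compound (the 7-month doubling figure is from that link, not something I've verified):

          # Hypothetical compounding of a "doubles every 7 months" rate.
          doubling_period_months = 7
          for months in (12, 24, 36):
              factor = 2 ** (months / doubling_period_months)
              print(f"after {months} months: ~{factor:.1f}x")  # ~3.3x, ~10.8x, ~35.3x

      At that rate you'd be looking at roughly 35x in three years, which is a big part of my skepticism.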

      I think we're at a point where the only thing we can reliably predict is that some kind of change will happen. (And that we'll laugh at the people who behave like AI is the 2nd coming of Jesus.)