Comment by empthought
4 hours ago
I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.
> Progress in LLMs will not slow down before they are better at programming than humans. Not “better than humans.” Better at programming. Just like computers are better than humans at a whole bunch of other things.
Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.
> Either the AI can do better than humans at programming, or it can't. If I ask it to make an improved AI, or better tools for making an improved AI, and it can't do it, then at best it's matching human output.
All the current AI success is due to computers getting better at adding and multiplying. That's genuinely the core of how they work. The people who believe AGI is imminent believe the opposite of that last claim: that something more than faster arithmetic is going on.
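(The "adding and multiplying" point is literal: the building block of an LLM is the dense layer, which reduces to multiply-accumulate operations plus a simple nonlinearity. A minimal illustrative sketch, not any real model's code:)

```python
# A dense neural-network layer is nothing but multiplies and adds,
# followed by a trivial nonlinearity. Toy sketch for illustration only.

def matmul(A, x):
    """Matrix-vector product: purely multiplication and addition."""
    return [sum(a * b for a, b in zip(row, x)) for row in A]

def layer(W, bias, x):
    """One dense layer: multiply, add bias, clamp negatives to zero (ReLU)."""
    return [max(0.0, v + b) for v, b in zip(matmul(W, x), bias)]

# A 2x3 weight matrix applied to a 3-dimensional input.
W = [[1.0, 0.0, -1.0],
     [0.5, 0.5, 0.5]]
bias = [0.0, -1.0]
x = [2.0, 1.0, 3.0]
print(layer(W, bias, x))  # → [0.0, 2.0]
```

Stacking many such layers (with learned weights) is, at bottom, all a transformer forward pass does; hardware getting faster at exactly these multiply-adds is what made it practical.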