Comment by empthought
18 hours ago
I don’t think that follows, nor do I think it will keep improving indefinitely. It will certainly continue to improve for a while.
We don’t need anything close to AGI to render the job “software engineer” as we know it today completely obsolete. Ever hear of a lorimer?
If it doesn't follow, why not?
The other possibility is, as you say, that progress slows down before it's better than humans. But then how is it replacing them? How does a worse horse replace horses?
I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.
Progress in LLMs will not slow down before they are better at programming than humans. Not “better than humans.” Better at programming. Just like computers are better than humans at a whole bunch of other things.
Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.
Either the AI can do better than humans at programming, or it can't. If I ask it to make an improved AI, or better tools for making an improved AI, and it can't do it, then at best it's matching human output.
All of the current AI success is due to computers getting better at adding and multiplying. That's genuinely the core of how they work. The people who believe AGI is imminent believe the opposite of that last claim: that scaling up arithmetic does lead to general intelligence.