Comment by nitwit005
2 days ago
If you imagine it just keeps improving, the end point would be some sort of AGI though. Logically, once you have something better at making software than humans, you can ask it to make a better AI than we were able to make.
I don’t think that follows, nor do I think it will keep improving indefinitely. It will certainly continue to improve for a while.
We don’t need anything close to AGI to render the job “software engineer” as we know it today completely obsolete. Ever hear of a lorimer?
If it doesn't follow, why not?
The other possibility is, as you say, that progress slows down before it's better than humans. But then how is it replacing them? How does a worse horse replace horses?
I said I don’t think it follows, and you certainly gave no support for the idea that it must follow. Logically speaking, it’s possible for improvements to continue indefinitely in specific domains, and never come close to AGI.
Progress in LLMs will not slow down before they are better at programming than humans. Not "better than humans" in general. Better at programming. Just like computers are better than humans at a whole bunch of other things.
Computers have gotten steadily better at adding and multiplying and yet there is no AGI or expectation thereof as a result.