Comment by josephg
17 hours ago
> Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.
Yeah, but does that actually matter? Is that actually a reason to think LLMs won't be able to outpace humans at software development?
LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.
> This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they'll go up or down, or how long they'll stay at a given level.
> I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months". Maybe they'll keep progressing this fast, maybe they won't.
I’m not claiming I know the exact timing. I’m just looking at the trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am, and their skill at programming doesn’t seem to be levelling out yet - at least not as far as I can see.
If this trend continues, the models will be better than me within a decade. That could change if progress stalls, but I don’t see any reason to think it will.