Comment by oblio
2 days ago
> This pattern has already played out in chess and Go. For a few years, a skilled Go player working in collaboration with a Go AI could outcompete both computers and humans at Go. But that era didn't last. Now computers can play Go at superhuman levels. Our skills are no longer required. I predict programming will follow the same trajectory.
Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.
Though geeks absolutely like raving about Go and especially chess.
> Both of those are fixed, unchanging, closed, full information games. The real world is very much not that.
Yeah, but does that actually matter? Is it really a reason to think LLMs won't be able to outpace humans at software development?
LLMs already deal with imperfect information in a stochastic world. They seem to keep getting better every year anyway.
This is like timing the stock market. Sure, share prices seem to go up over time, but we don't really know when they'll go up or down, or how long they'll stay at a given level.
I don't buy the whole "LLMs will be magic in 6 months, look at how much they've progressed in the past 6 months" argument. Maybe they'll keep progressing that fast, maybe they won't.
I’m not claiming I know the exact timing. I’m just seeing a trend line: GPT-3 to 3.5 to 4 to 5, Codex and now Claude. The models are getting better at programming much faster than I am. Their skill at programming doesn’t seem to be levelling out yet - at least not as far as I can see.
If this trend continues, the models will be better than me in less than a decade - unless progress stalls, and I don’t see any reason to think it will.