Comment by wellpast

16 hours ago

> but the truth is that behind the volatility and public speculation, there has been a smooth, unyielding increase in AI’s cognitive capabilities.

> We are now at the point where AI models are … good enough at coding that some of the strongest engineers I’ve ever met are now handing over almost all their coding to AI.

Really?

All I’ve seen on HN the past few days is how slop prevails.

When I lean into agentic flows myself, I’m at once amazed at how quickly they can prototype stuff, but also at how deficient, how much of a toy, it all still seems.

What am I missing?

The disconnect is weird, isn't it? The latest coding models can churn out a lot of mediocre code that more or less works if the task is sufficiently well specified, but it's not particularly good code: they have no real taste, no instinct for elegance or simplification, and weak high-level design. It's useful, but nowhere near superhuman. It's also my impression that improvements in raw intelligence, far from increasing exponentially, are plateauing. The advances people are excited about come from agentic patterns and tool use, but that's not a much higher level of intelligence, just slightly better intelligence run in a loop with feedback. Again, that's useful, but it's nowhere in the realm of "greater than Nobel-winning across all domains".

Outside of coding, the top models still fall flat on their faces when given relatively simple knowledge work. I got completely bogus info on a fairly simple tax question just a few days ago, for example, and anyone using AI regularly with any discernment runs into simple failures like this all the time. It's still useful, but the idea that we're on some trajectory to exceeding top human performance across all domains seems completely unrealistic when I look at my own experience of how things have actually been progressing.