Comment by kenforthewin

6 hours ago

Looks like this comment is touching a nerve. This community is progressing from "AI can't write code" to "Well, AI can write code, but it's not really about the code". I wonder where the goalposts will be moved next?

The portion of the community that unironically thinks AI is good enough now is mostly managers and non/semi-technical people, plus engineers who don't engage with critical or complex problems. HN has always had too much of the velocity-alignment-synergy class of professional talkers; it's just so much more obvious now that they feel emboldened in their false confidence.

  • There's some of that, but more often it's developers whose arguments are a year behind the frontier models or, just as often, who dramatically overstate their own abilities.

    It's an inherent tension that every discipline has to wrestle with. The most experienced developers are in the best position to evaluate where LLMs are, but those who are the loudest about their own abilities generally aren't in this camp. Humility tends to come with experience, and arrogance tends to come with inexperience.

    • Conversely, there's a massive amount of money being thrown around that's biased in favor of inflating what LLMs can do compared to humans.

  • I've been paying attention to that dividing line and have come to the same conclusion. We're seeing the "tiers" diverge as AI becomes more capable. As the saying goes, you don't know what you don't know.

  • Ah yes, the good old No True Scotsman fallacy, where we smugly proclaim that anyone who thinks AI is good now must be working on unimportant and easy things.

    • Ah yes, the (please copy & paste this message into an LLM to get it to generate a novel response in these parentheses)

Both your error and the OP's error lie in imagining that the same people are saying both things. The "community" fallacy, which has been around for about ten years now, pretends that people with something in common (e.g. "uses HN") are somehow a community that thinks identically. It's completely wrong.

  • Actually, it's some of the same people. I won't name names, but there are a lot of AI skeptics on this site who loudly and prominently comment on every AI story. And if you look at their posting histories you'll see the exact type of goalpost-shifting the parent commenter is talking about.

    You see it elsewhere as well. There's now a cottage industry (with visible members like Ed Zitron) who have made a career out of creating and selling anti-AI content. At first they were complaining that AI lies constantly. As AI got better, they shifted to other talking points.

This community hasn't agreed on either of those things, just like it never agreed on good coding practices.

My opinion since college (8 years ago) has been that the best engineers are the ones who treat everything as at least halfway a people problem, even in low-level code.

LLMs have been getting a lot better at coding.

If the "goalposts" represent what people generally think LLMs are capable of, they should be moving, right?

And complex, multi-part, long-term efforts like building software and software companies always face numerous obstacles. When one is cleared, you wouldn't expect there to be no more, would you?

Your tone is one of complaint, but I just see people working in reality.

Is it even a problem that the so-called goalposts are being moved?

That's life.

Life changes, and us along with it.

"Who Moved My Cheese?"