Comment by dchuk

I’m very bought into the idea that raw coding is now a solved problem with the current models and agentic harnesses, let alone what’s coming in the near term.

That being said, I think we’re in a weird phase right now where people’s obvious mental health issues are appearing as “hyper productivity” due to the use of these tools to absolutely spam out code that isn’t necessarily broadly coherent but is locally impressive. I’m watching multiple people, both publicly and privately, clearly breaking down mentally because of the “power” AI is bestowing on them. Their wires are completely crossed when it comes to the value of outputs vs outcomes, and they’re espousing generated nonsense as if it’s thoughtful insight.

It’s an interesting thing to watch play out.

Mm.

I'd agree, the code "isn’t necessarily broadly coherent but is locally impressive".

However, I've seen some totally successful, even award-winning, human-written projects where I could say the same.

Ages back, I heard a woodworking analogy:

  LLM code is like MDF. Really useful for cheap furniture, massively cheaper than solid wood, but it would be a mistake to use it as a structural element in a house.

Now, I've never made anything more complex than furniture, so I don't know how well that fit the previous models, let alone the current ones… but I've absolutely seen success coming out of bigger balls of mud than the balls of mud I got from letting Claude loose for a bit without oversight.

Still, just because you can succeed even with sloppy code doesn't mean I think that's true everywhere. It's not like the award was for industrial equipment or anything; the closest I've come to life-critical code is helping to find and schedule video calls with GPs.

Well put. I can't help thinking of this every time I see the 854594th "agent coordination framework" on GitHub. They all look strangely similar, are themselves obviously vibe-coded, and make no real effort to present any evidence that they help development in any way.

If you give every idiot a voice that's heard worldwide, you will hear every idiot in the whole world. If you give every idiot a tool to make programs, you will see a lot of programs made by idiots.

  • Steve Yegge is not an idiot or a bad programmer. Possibly just hypomanic at most. And a good, entertaining writer. https://en.wikipedia.org/wiki/Steve_Yegge

    Gas Town is ridiculous and I had to uninstall Beads after seeing it only confuse my agents, but he's not completely insane or a moron. There may be some kernels of good ideas inside Gas Town that could be extracted into a better system.

    • > Steve Yegge is not an idiot or a bad programmer.

      I don't think he's an idiot; in my opinion there are almost no actual idiots here on HN, and idiots don't write articles like this or build systems like Steve Yegge's. I'm only commenting about giving more tools to idiots. Even tools made by geniuses will give you idiotic results when used by actual idiots, but a lot of smart people want to lower the barriers to entry so that idiots can use more tools. And there are a lot of idiots who were inactive just because they didn't have the tools. As the Polish essayist and futurist Stanisław Lem famously put it: "I didn't know there were so many idiots in this world until I got the internet."

> raw coding is now a solved problem

Surely this was solved with Fortran. What changed? I think most people just don't know what program they want.

  • You no longer have to be very specific about syntax. There's now an AI that can translate your idea into whatever language you want.

    Previously, if you had an idea of what the program needed to do, you needed to learn a new language. This is so hard that we use language itself as the metaphor: it's hard to learn a new language, and only a few people can translate from French to English, for example. Likewise, few people can translate English to Fortran.

    Now, you can just think about your program in English, and so long as you actually know what you want, you can get a Fortran program.

    The issue now is what it has always been for senior programmers: deciding what to make, not how to make it.

    • The hard part of software development is equivalent to the hard part of engineering:

      Anyone can draw a sketch of what a house should look like. But designing a house that is safe, conforms to building regulations, and wouldn't be uncomfortable to live in (for example, due to a poor choice of heat insulation for the local climate) is the stuff people train on. Not the sketching part.

      It's the same for software development. All we've done is replace FORTRAN / JavaScript / whatever with a subset of a natural language. But we still need to thoroughly understand the problem and describe it to the LLM. And given the way we format these markdown prompts, you're basically still programming, albeit in a less strict syntax and with a non-deterministic "compiler".

      This is why I get so miffed by comments about AI replacing programmers. That's not what's happening: programming is just shifting to a language that looks more like Jira tickets than source code. And the orgs that think they can replace developers with AI (I don't for one second believe many of the technology leaders think this, but some smaller orgs likely do) are heading for a very unpleasant realisation soon.

      I will caveat this by saying: there are far too many naff developers out there who genuinely aren't any better than an LLM. And maybe what we need is more regulation around software development, just as there is in proper engineering professions.

    • Again, I don't think most people are prepared to articulate what behavior they want. Fortran (and every other formal language) used to force this, but now you just kind of jerk off on the keyboard.

      Reactively? Sure. Maybe AI has some role to play there. Maybe you can ask the chatbot to modify settings.

> where people’s obvious mental health issues

I think the kids would call this "getting one-shotted by AI"

There is a lot of research on how words and language influence what we think, and even what we can observe, such as the Sapir-Whorf hypothesis. If a language has one word for two different colors, its speakers are unable to see the difference between those colors.

I have a suspicion that extensive use of LLMs can result in damage to your brain. That's why we are seeing so many mental health issues surfacing, and why we are getting a bunch of blog posts about "an agentic coding psychosis".

It could be that LLMs go from being bicycles for the brain to smoking for the brain, once we figure out the long-term effects.

  • > If a language has one word for two different colors, its speakers are unable to see the difference between those colors.

    That is quite untrue. It is true that people may be slightly slower or less accurate in distinguishing colors that are within a labeled category than those that cross a category boundary, but that's far from saying they can't perceive the difference at all. The latter would imply that, for instance, English speakers cannot distinguish shades of blue or green.

  • > If a language has one word for two different colors, its speakers are unable to see the difference between those colors.

    Perhaps you mean to say that speakers are unable to name the difference between the colours?

    I can easily see differences between (for example) different shades of red. But I can't name them other than "shade of red".

    I do happen to subscribe to the Sapir-Whorf hypothesis, in the sense that I think the language you think in constrains your thoughts, but I don't think it is strong enough to prevent you from being able to see different colours.