Comment by monero-xmr

1 day ago

First you must accept that engineering elegance != market value. Only certain applications and business models need the crème de la crème of engineers.

LLMs have been hollowing out the mid and lower end of engineering, but they haven't eroded the highest end. Otherwise the LLM companies wouldn't pay for talent; they'd just use their own models.

It's not just about elegance.

I'm going to give an example of a software with multiple processes.

Humans can imagine scenarios where a process can break. Claude can too, but only when the breakage originates inside the process and you specify it. It cannot identify future issues coming from a separate process unless you specifically describe that external process, the fact that it could interact with your original process, and the ways in which it can interact.

Identifying these is the skill of a developer. You could say you can document all these cases and let the agent do the coding, but here's the kicker: you only get to know these issues once you start coding them by hand. You go through the variables and function calls and suddenly remember that a process elsewhere changes or depends on these values.

Unit tests could catch them in a decently architected system, but those tests need to be defined by the one coding it. And if the architect himself is using AI (because why not?), it's doomed from the start.
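To make this concrete, here's a toy sketch (all names hypothetical) of the kind of cross-process bug being described: each process is correct when reasoned about in isolation, and the failure only appears once you know a second process touches the same value. The interleaving is written out by hand so the lost update is deterministic:

```python
# Two "processes" share a counter in some store. Each does
# read -> increment -> write. Interleaved, one update is lost --
# a failure mode invisible when analyzing either process alone.

store = {"counter": 0}

def read(store):
    return store["counter"]

def write(store, value):
    store["counter"] = value

# Interleaving: both processes read before either writes.
a = read(store)          # process A reads 0
b = read(store)          # process B reads 0
write(store, a + 1)      # A writes 1
write(store, b + 1)      # B also writes 1, clobbering A's update

print(store["counter"])  # 1, not the 2 either process expects
```

A unit test written against either process alone would pass; only a test that encodes knowledge of the interleaving catches this.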

  • So, your point is that programmers identify the unexpected edge cases through the act of taking their time writing the code by hand. From my experience, it takes a proficient developer to actually plan their code around future issues from separate processes.

    I think that it's mistaken to think that reasoning while writing the code is at all a good way to truly understand what your code is doing. (Without implying that you shouldn't write it by hand or reason about it.) You need to debug and test it thoroughly either way, and basically be as sceptical of your own output as you'd be of any other person's output.

    Thinking that writing the code makes you understand it better can cause more issues than thinking that even if you write the code, you don't really know what it's doing. You are merely typing out the code based on what you think it should be doing, and reasoning against that hypothesis. Of course, you can be better or worse at constructing the correct mental model from the get go, and keep updating it in the right direction while writing the code. But it's a slippery slope, because it can also go the other way around.

    A lot of bugs that take junior-to-mid-level engineers unreasonably long to find seem to happen because they trust their own mental model of the code too much without verifying it, form a hypothesis for the bug in their head without verifying it either, and then get lost reasoning about a made-up version of whatever is causing the bug, only to conclude that the original hypothesis was completely wrong.

    • > From my experience, it takes a proficient developer to actually plan their code around future issues from separate processes.

      And it takes even more experience to know when not to spend time on that.

      Way too many codebases are optimised for 1M DAU and see maybe 100 users in their first year. All that time optimising and handling edge cases could've been spent on delivering features that bring in more users and thus more money.

      1 reply →

I keep hearing this but I don't understand. If inelegant code means more bugs that are harder to fix later, that translates into negative business value. You won't see it right away, which is probably where this sentiment comes from, but it will absolutely catch up with you.

Elegant code isn’t just for looks. It’s code that can still adapt weeks, months, years after it has shipped and created “business value”.

  • It's a trade-off. The gnarly thing is that you're trading immediate benefits for higher maintenance costs and decreased reliability over time, which makes it a tempting one to keep taking. Sure, there will be negative business value, but later, and right now you can look good by landing the features quicker. It's FAFO with potentially many reporting quarters between the FA and the FO.

    This trade-off predates LLMs by decades. I've been fortunate to have a good and fruitful career being the person companies hire when they're running out of road down which to kick the can, so my opinion there may not be universal, mind you.

  • Sometimes "elegance" just makes shit hard to read.

    Write boring code[0]; don't go for elegance or cool language features. Be as boring and simple as possible, and repeat yourself if it makes the flow clearer than extracting an operation into a common library or function.

    This is the code that "adapts" and can be fixed 3 years after the elegant coder has left for another greenfield unicorn where they can use the latest paradigms.

    [0] https://berthub.eu/articles/posts/on-long-term-software-deve...
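As a small illustration of the point above (a made-up example, not from the linked article), here are a "clever" and a "boring" version of the same check. Both are correct; the boring one is the one a maintainer can safely modify three years later:

```python
from functools import reduce

def all_active_clever(users):
    # Dense one-liner: correct, but the control flow is hidden.
    return reduce(lambda acc, u: acc and u.get("active", False), users, True)

def all_active_boring(users):
    # Boring version: each step is visible and easy to change later,
    # e.g. to add logging for the first inactive user.
    for user in users:
        if not user.get("active", False):
            return False
    return True

users = [{"name": "a", "active": True}, {"name": "b", "active": False}]
print(all_active_clever(users), all_active_boring(users))  # False False
```

The clever version also quietly short-circuits differently (reduce always walks the whole list), which is exactly the kind of detail that gets lost in "elegant" code.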

  • People sometimes conflate inelegance with buggy code. Market fit and value matter more than code elegance, but bugs are still not acceptable, even in your MVP. In fact, I think buggy software, especially if those bugs destroy the user experience, will kill products. It's not 2010 anymore: there is plenty of less buggy software out there, and attention spans are narrower than before.

    edit: typo

  • > I keep hearing this but I don’t understand. If inelegant code means more bugs that are harder to fix later, that translates into negative business value.

    That's a rather short-sighted opinion. Ask yourself how "inelegant code" finds its way into a codebase, even with working code review processes.

    The answer more often than not is what's typically referred to as tech debt driven development. Meaning, sometimes a hacky solution with glaring failure modes left unaddressed is all it takes to deliver a major feature in a short development cycle. Once the feature is out, it becomes less pressing to pay off that tech debt because the risk was already assumed and the business value was already created.

    Later you stumble upon a weird bug in your hacky solution. Is that bug negative business value?

    • You not only stumble upon a weird bug in your hacky solution that takes engineering weeks to debug; your interfaces are fragile, so feature velocity drops (bugs reproduce, and unless you address the reproduction rate you end up doing nothing but fixing bugs), and things are so tightly coupled that every two-line change is now a multi-week rewrite.

      Look at e.g. Facebook. That site hasn't shipped a feature in years, and every time they ship something it takes years to make it stable again. A year or so ago Facebook recognized that decades of fighting abuse had led them nowhere, and instead of fixing the technical side they just modified policies to openly allow fake accounts :D Facebook is 99% Moltbook-style bot-to-bot traffic at this point and they cannot do anything about it. Ironically, this is a good argument against code quality: if you manage to become large enough to be a monopoly, you can afford to fix tech debt later. In reality, there is one such unicorn for every ten thousand startups that crumbled under their own technical debt.

      2 replies →

    • Of course a bug is negative business value. Perhaps the benefit of shipping faster was worth the cost of introducing bugs, but that doesn't make it not a cost.

      5 replies →

  • Perhaps this was never actually true. Did anyone do an A/B test with messy code vs beautiful code?

Well, it takes time to assess and adapt, and large organizations need more time than smaller ones. We will see.

In my experience the limiting factor is making the right choices. I've got a customer with the usual backlog of features. There are some very important issues in the backlog that stay there and never get picked for a sprint. We're doing small bug fixes, but not the big ones. We're building new features that are partly useless because of the outstanding bugs that prevent customers from fully using them. AI can make us code faster, but nobody is using it to sort issues by importance.

  • > nobody is using it to sort issues by importance

    True, and I'd add the reminder that AI doesn't care. When it makes mistakes it pretends to be sorry.

    Simulated emotion is dangerous IMHO, it can lead to undeserved trust. I always tell AI to never say my name, and never use exclamation points or simulated emotion. "Be the cold imperfect calculator that you are."

    When it was giving me compliments for noticing things it had failed to notice itself, I had to put a stop to that. Very dangerous. When business decisions or important technical decisions are made by an entity that is literally incapable of caring, but pretends to like a sociopath, that's when trouble brews.

  LLMs have been hollowing out the mid and lower end of engineering, but they haven't eroded the highest end. Otherwise the LLM companies wouldn't pay for talent; they'd just use their own models.

The talent isn't used for writing code anymore, though. They're used for directing, which an LLM isn't very good at, since it has limited real-world experience, limited interaction with other humans, and no goals of its own.

OpenAI has said they're slowing down hiring drastically because their models are making them that much more productive. Codex itself is being built by Codex. Same with Claude Code.

  • Source: Trust me, bro. A company selling an AI model telling others their AI model is so good that it's building itself. What could possibly motivate them to say that?

    Remember a few years ago when Sam Altman said we had to pause AI development for 6 months because otherwise we would have the singularity and it would end the world? Yeah, about that...

Based on my experience using Claude Opus 4.5, it doesn't really even get functionality correct. It'll get scaffolding right if you tell it exactly what you want, but as soon as you ask it to do testing and features, it ranges from mediocre to worse than useless.