Comment by petersellers

6 days ago

I'm not so sure that would work well in practice. How would the inexperienced developer know that the code created by the AI was correct? What if subtle bugs are introduced that the inexperienced developer doesn't catch until they reach production? What if the developer doesn't even know how to debug those problems? Would they know whether the code they're producing is maintainable and extensible, or are they just going to generate a new layer of code on top of the old one any time they need a new feature?

> I'm not so sure that would work well in practice. How would the inexperienced developer know that the code created by the AI was correct?

Not a problem. The industry has evolved to tolerate buggy code that barely works; in some circles, that's already the baseline expectation. LLMs change nothing in this regard. If anything, they arguably improve matters, because it becomes trivial to generate extensive automated test suites.
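
For example, here's the kind of suite an LLM will happily churn out in seconds (a toy sketch I made up for illustration; slugify is a stand-in for whatever function is actually under test):

    # toy pytest-style suite: plain asserts in test_* functions
    def slugify(title: str) -> str:
        # trivial stand-in for the real function under test
        return "-".join(title.lower().split())

    def test_slugify_basic():
        assert slugify("Hello World") == "hello-world"

    def test_slugify_collapses_whitespace():
        assert slugify("  a   b ") == "a-b"

    def test_slugify_empty_string():
        assert slugify("") == ""

Run it with pytest and each case is a one-line assert, so the marginal cost of adding more tests drops to nearly zero.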

> What if subtle bugs are introduced that the inexperienced developer doesn't catch until they reach production?

That's what is happening in the real world without LLMs entering the picture.

  • I disagree strongly with this conclusion.

I've seen firsthand what happens to large software projects that collapse under the weight of their own tech debt. The software literally could not function as intended: customers were lost, and the product went under. Low quality being "expected" (which isn't true in my experience, either) is irrelevant when the software doesn't work at all.

    The chances of all of that happening are a lot higher with a lone inexperienced engineer at the wheel. You still need experienced engineers to maintain your software, period.

    > That's what is happening in the real world without LLMs entering the picture.

    The difference is that most firms have experienced software engineers to fix those defects.

    • > Low quality being "expected" (which isn't true in my experience, either) is irrelevant when the software doesn't work at all.

Yep, fully agree. We're going through this ourselves at $CURRENT_JOB: the instability of the platform and product, caused by immensely bad decisions in the project's past, is driving massive churn from every customer except the smallest ones, which make us no money anyway.

And it's not just the customers; the devs are feeling it too. There are constant fires and breakages all over the place because management won't give us any time to focus on quality. People (myself included) are getting tired of having to read through some 10kLOC monstrosity that not even God Himself could understand, and it's made worse by clueless management asking "Have you tried having AI find the bugs for you?" like a bunch of brainless sheep hooked up to that sweet ol' VC hype machine.

Sure, people will put up with some bugs from time to time, and I'm not claiming I make perfect choices myself. But there are only so many times people will put up with a broken experience before they cut ties and quit. In this vibe-coded hallucination world we're entering, are people really going to be okay with the products they use day in, day out drastically changing behavior every single day based on whatever the AI decided to hallucinate this time around to "fix" that one persistent bug that can't seem to die?