Comment by bccdee

12 days ago

> Everyone knows that debugging is twice as hard as writing a program in the first place. So if you're as clever as you can be when you write it, how will you ever debug it? — The Elements of Programming Style, 2nd edition, chapter 2

If you weren't even "clever enough" to write the program yourself (or, more precisely, if you never cultivated a sufficiently deep knowledge of the tools & domain you were working with), how do you expect to fix it when things go wrong? Chatbots can do a lot, but they're ultimately just bots, and they get stuck & give up in ways that professionals cannot afford to. You do still need to develop domain knowledge and "get stronger" to keep pace with your product.

Big codebases decay and become difficult to work with all too easily. In the hands-off vibe-coded projects I've seen, that decay was dramatically accelerated. I think it will prove easy for people to get out over their skis with coding agents in the long run.

• I think this goes for many different kinds of projects. Take React, for example, or jQuery, or a multitude of other frameworks and libraries. They abstract away a lot of complexity and make it easier to build things! But we've also seen that with ease of building comes ease of slop (I saw plenty of sloppy React code even before LLMs). Then React introduced hooks, hopefully to reduce the slop, and somehow it just got sloppy in other ways.
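  A hypothetical sketch of the hooks-era flavor of slop I have in mind (the component and its details are made up for illustration): the code below compiles and looks tidy, but the empty dependency array silently freezes `intervalMs` at whatever value it had on first render.

  ```tsx
  // Illustrative only: a common post-hooks bug that clean-looking code hides.
  import { useEffect, useState } from "react";

  function Ticker({ intervalMs }: { intervalMs: number }) {
    const [count, setCount] = useState(0);

    useEffect(() => {
      // The timer is created once and never re-created, because the
      // dependency array below is empty. If the parent later passes a new
      // intervalMs, this effect never sees it -- a stale-closure bug that
      // reads fine at a glance.
      const id = setInterval(() => setCount((c) => c + 1), intervalMs);
      return () => clearInterval(id);
    }, []); // should be [intervalMs]

    return <span>{count}</span>;
  }
  ```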

  That's kinda how I see vibe coding: it's extremely easy to get stuff done, but also extremely easy to write slop. Except now 10x more code is being generated, and thus 10x more slop.

  Learning how to get quality, robust code is part of the learning curve of AI. It really is an emerging field, changing every day.

  • Yeah I think that's an interesting point of comparison. There's definitely a phenomenon where people can take their abstractions for granted and back themselves into corners because they have no deeper understanding of what their framework does under the hood.

    The key difference with LLMs is that React was written very intentionally by smart engineers who provided a wealth of documentation for anyone who needs to peek under the hood of their framework. If your LLM has written something you don't understand, though, chances are nobody does, and there's nowhere to turn.

    If (as Peter Naur famously argued) programming is theory building, then an abstraction like a framework lets you borrow someone else's theory. You skip developing an understanding of the underlying code and hope either that you'll never need to touch it or that, if you do, you can internalize the required theory later, as needed. LLM-generated code has no theory; you either need to supervise it closely enough to impose your own, or treat it as disposable.

    • > LLM-generated code has no theory; you either need to supervise it closely enough to impose your own, or treat it as disposable.

      Agreed! And I think that's what I'm getting at. Adding what they're now calling "skills," or writing your own, is becoming crucial to LLM-assisted development. If the LLM is writing too much slop, it probably wasn't given enough guidance to keep that slop from being written.
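      For a concrete sketch of what I mean by guidance (loosely modeled on the SKILL.md convention; the file name, frontmatter fields, and rules here are invented for illustration):

      ```markdown
      ---
      name: project-code-standards
      description: Conventions the agent must follow when writing code in this repo.
      ---

      # Project code standards

      - Prefer small, pure functions; avoid module-level mutable state.
      - Every exported function ships with a unit test in the same change.
      - Do not add a dependency without calling it out explicitly.
      - When requirements are ambiguous, stop and ask instead of guessing.
      ```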

      The first step, of course, is to actually check whether the generated code is indeed slop, which is where many people miss the mark.