Comment by tptacek · 6 days ago

No, what's happening here is that you're using a different definition of "boilerplate" than the adopters are using. To you, "boilerplate" is literally a chunk of code you copy and paste to repeatedly solve a problem (btw: I flip my shit when people do this on codebases I work on). To them, "boilerplate" represents a common set of rote solutions to isomorphic problems. The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.

Coming at this from a computer-science or PLT perspective, this idea of an "abstract, repeatable meta-boilerplate" is exactly the payoff we expect from language features like strong type systems. Part of the point of rigorous languages is to create these kinds of patterns. You had total expressiveness back in assembly language! Repeatable rigor is most of the point of modern languages.
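
For a concrete (if toy) picture of that payoff, here is a minimal TypeScript sketch; the names and the Result shape are invented for illustration:

    // A rote solution the type system makes repeatable: every fallible
    // operation returns the same discriminated union, and the compiler
    // forces every caller to handle both arms.
    type Result<T, E> =
      | { ok: true; value: T }
      | { ok: false; error: E };

    function parsePort(raw: string): Result<number, string> {
      const n = Number(raw);
      if (!Number.isInteger(n) || n < 1 || n > 65535) {
        return { ok: false, error: `invalid port: ${raw}` };
      }
      return { ok: true, value: n };
    }

    const port = parsePort(process.env.PORT ?? "8080");
    if (port.ok) {
      console.log(`listening on ${port.value}`);
    } else {
      console.error(port.error); // the compiler won't let you skip this arm
    }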

> To them, "boilerplate" represents a common set of rote solutions to isomorphic problems.

That's what libraries and frameworks are here for. And that's why no experienced engineer considers those an issue. What's truly important is the business logic; you find a set of libraries that solves the common use cases and write the rest. Sometimes you're in a novel space that doesn't have libraries (a new programming language, say), but you still have specs and reference implementations to help you out.

The actual boilerplate is when you have to write code twice because the language's ecosystem doesn't have good macros à la Lisp that would let you invent some metastuff for the problem at hand (think writing routers for express.js).
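
To make the express.js example concrete, here is a TypeScript sketch (the resource names and handler bodies are invented for illustration). Without real macros, the closest available move is a runtime helper, which every project ends up rewriting:

    import express from "express";

    const app = express();

    // The rote version repeats the same three lines per resource:
    //   app.get("/users", listUsers);
    //   app.get("/users/:id", getUser);
    //   app.post("/users", createUser);
    // ...and again for /orders, /invoices, and so on.

    // Lacking macros, the best you can do is a runtime helper:
    interface Handlers {
      list: express.RequestHandler;
      get: express.RequestHandler;
      create: express.RequestHandler;
    }

    function mountResource(path: string, h: Handlers): void {
      app.get(path, h.list);
      app.get(`${path}/:id`, h.get);
      app.post(path, h.create);
    }

    mountResource("/users", {
      list: (_req, res) => res.json([]),
      get: (req, res) => res.json({ id: req.params.id }),
      create: (_req, res) => res.status(201).end(),
    });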

  • People keep saying this. LLMs (not even agents; LLMs themselves, intrinsically) use frameworks. They're quite good at them. Frameworks make programs more legible to LLMs, not less.

    • > LLMs (not even agents; LLMs themselves, intrinsically) use frameworks.

      That's not what I see the parent comment saying. They're not saying that LLMs can't use frameworks, they're saying that if you have rote solutions that you are being forced to write over and over and over again, you shouldn't be using an LLM to automate it, you should use a framework and get that code out of your project.

      And at that point, you won't have a ton of boilerplate to write.

      The two sides I see online are the people who think we need a way to automate boilerplate and setup code, and the people who want to eliminate boilerplate entirely (not just the copy-paste kind, but also the "ugh, I've got to do this thing again that I've done 20 times" kind).

      Ideally:

      > a common set of rote solutions to isomorphic problems

      Should not be a thing you have to write very often (or if it is, you should have tools that make it as quick to implement as it would be to type a prompt into an LLM). If that kind of rote repetitive problem solving is a huge part of your job, then to borrow your phrasing: the language or the tools you're using have let you down.

    • LLMs are _really_ good at React for example, just because there's so much of it everywhere for them to learn from.

Copy-pasting code that could be abstracted is not a usage of "boilerplate" I've ever encountered; usually it's a reference to certain verbose languages where you have to write a bunch of repetitive, low-entropy stuff to get anywhere, like getters and setters in Java classes.
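
Concretely, the sort of verbose, low-entropy stuff in question, sketched here in TypeScript (the Java version has the same shape, just with more ceremony):

    // One private field plus two rote accessors, repeated per field.
    class Account {
      private _owner = "";
      private _balance = 0;

      getOwner(): string { return this._owner; }
      setOwner(owner: string): void { this._owner = owner; }

      getBalance(): number { return this._balance; }
      setBalance(balance: number): void { this._balance = balance; }
      // ...and again for every remaining field.
    }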

  • Getters and setters definitely fall into "copy/paste + macro" territory for me. Just copy/paste your field list and run a macro that turns each field into a getter and setter. Or use an IDE shortcut. Lombok obviates all this anyway, of course.

> The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.

Assuming something like "a REST endpoint which takes a few request parameters, makes a DB query, and returns the response" fits what you're describing, you can absolutely copy/paste a similar endpoint, change the parameters and the database query, and rename a couple of variables, all of which takes a matter of moments.
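
Something in this shape, say (the route, table, and query are invented for the example, assuming an express + Postgres stack):

    import express from "express";
    import { Pool } from "pg";

    const app = express();
    const db = new Pool();

    // Read a few request parameters, run one query, return the rows.
    app.get("/orders", async (req, res) => {
      const { customerId, status } = req.query;
      const { rows } = await db.query(
        "SELECT id, status, total FROM orders WHERE customer_id = $1 AND status = $2",
        [customerId, status]
      );
      res.json(rows);
    });

    // The sibling endpoint is this block copy-pasted with a different
    // path, different parameters, and a different query.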

Naturally, code that's being copy-pasted wholesale with few changes is ripe to be abstracted away, but patterns are still going to show up no matter what.

  • But the LLM will write that pretty much instantly after you've given it one example to extrapolate from.

    It'll even write basic unit tests for your CRUD API while it's at it.
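
    Something along these lines, say, assuming a Jest/supertest setup and a hypothetical /orders endpoint like the one discussed above:

        import request from "supertest";
        import { app } from "./app"; // the express app under test

        // The kind of basic test an LLM will happily generate: one
        // happy-path request, asserting status and response shape.
        describe("GET /orders", () => {
          it("returns 200 and a JSON array", async () => {
            const res = await request(app)
              .get("/orders")
              .query({ customerId: "42", status: "open" });
            expect(res.status).toBe(200);
            expect(Array.isArray(res.body)).toBe(true);
          });
        });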

    • Sure, but I can also write it pretty much instantly, with some judicious copy/pasting.

      And the less instantly I can write it, the more petty nuances there are to deal with (things like non-trivial validation, a new database query function, a header I need to access), and the more ways an LLM will get it subtly wrong.

      If I treat it as more than a fancy autocomplete, I have to spend all my time cleaning up after it. And if I do treat it as fancy autocomplete, it doesn't save that much time over judicious copy/pasting.

> solutions to isomorphic problems

“Isomorphic” is a word that describes a mapping (or a transformation) that preserves some properties that we believe to be important.

The word you’re looking for is probably “similar” not “isomorphic”. It sure as hell doesn’t sound as fancy though.

But what do you make of the parent’s second paragraph? This is the way I feel as well - I would rather not spend my time coaxing an AI into getting something right that I could just do myself.

I bit the bullet last week and tried to force myself to use a solution built end to end by AI. By the time I’d finished asking it to make changes (about 25 in total), I would’ve had a much nicer time doing it myself.

The thing in question was admittedly only partially specified. It was a YAML-based testing tool for running some scenarios involving load tests before and after injecting faults into the application. I gave it the YAML schema up front, and it did a sensible job as a first pass. But then I was in the position of reading what it wrote, spotting implicit requirements I hadn’t specified, and asking for those.

Had I written it myself from the start, those implicit requirements would’ve been more natural to think about while iterating on the tool. But in this workflow I just couldn’t get into a flow state - the process felt very unnatural, not unlike asking a junior to do it and going through 25 rounds of code review. That has always been a miserable task, one it’s difficult to force oneself to stay engaged with. By the end I was much happier making manual tweaks, and I wish I’d just written it myself from the start.

I'm firmly convinced at this point that there is just no arguing with the haters. At the same time, it feels like this transition is as inevitable as the transition to mobile phones. LLMs are everywhere, and there's no escaping it no matter how much you might want to.

There are always some people who will resist to the bitter end, but I expect them to be few and far between.

  • It's not really a matter of wanting to escape them. I've used them. They're not good for writing code. They feel zippy, but you have to constantly clean up after them. It's as slow and frustrating as trying to walk a junior engineer through a problem they don't fully understand. I'd rather do it myself.

    If the AI agent future is so inevitable, then why do people waste so much oxygen insisting upon its inevitability? Just wait for it in silence. It certainly isn't here yet.

  • > there's no escaping it no matter how much you might want to

    And if we accept that inevitability, it becomes a self-fulfilling prophecy. The fact that some people _want_ us to give in is a reason to keep resisting.

Your article comes across like you think every developer is exactly the same as you; it's a very egocentric piece.

Not everyone is just cranking out hacked together MVPs for startups

Do you not realize there are many many other fields and domains of programming?

Not everyone has the same use case as you

  • > Not everyone is just cranking out hacked together MVPs for startups

    Now here’s the fun part: in a really restrictive enterprise environment where you’ve got unit tests with 85% code coverage requirements, linters, and static typing, these AI programming assistants actually perform even better than they do when given a more “greenfield” MVP-ish assignment with lots of room for misinterpretation. The constant “slamming into guardrails” keeps them from hallucinating, and when they do hallucinate, it forces them to correct course.

    The more annoying boxes your job makes you tick, and the more parts of the process that make you go “ugh, right, that”, the more AI programming assistants can help you.
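
    For a concrete picture of one such guardrail, assuming a Jest-based suite (the 85% figure mirrors the requirement above), the coverage gate the assistant keeps slamming into might look like:

        // jest.config.ts -- the build fails if coverage dips below 85%
        import type { Config } from "jest";

        const config: Config = {
          collectCoverage: true,
          coverageThreshold: {
            global: { lines: 85, branches: 85, functions: 85, statements: 85 },
          },
        };

        export default config;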

    • Unfortunately, high code coverage is poorly aligned with high-quality code.

      If one copy-pastes a routine to make a modified version (that’s actually used), code coverage goes UP. Sounds like a win-win for many…

      Later, someone consolidates the two near-identical routines during a proper refactoring. They can even add unit tests. Guess what? Code coverage goes DOWN!

      Sure, having untested, unexecuted code is a truly horrible thing. But focusing on coverage can be worse…
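
      To put made-up numbers on it: say 800 of 1,000 lines are covered (80%). Copy-paste a fully covered 100-line routine and coverage rises to 900 of 1,100 (about 82%). Consolidate the two copies later and it falls back to 800 of 1,000 (80%), so the cleanup registers as a coverage regression.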

    • A human might see a gap in the guardrails and avoid it, or upon seeing unexpected behavior they might be able to tell that a guardrail was breached and have some intuition of where. An LLM will happily burst through a gap in the guardrails, claim it has solved the problem, and require just as much human effort to fix, plus even more long-term maintenance because of reduced familiarity with the code.

    • I’m not entirely sure this is the silver lining you seem to think it is

  • Which part of his article is specific to his use case? It all looks fairly general to me.

  • And not everyone working in things that aren’t “hacked together MVPs” has your experience. You can read any number of reports about code generated at FAANG, incident response tooling that gets to RCA faster, etc.

    There are obviously still things it can’t do. But the gap between “I haven’t been able to get a tool to work” and “you’re wrong about the tool being useful” is large.