Comment by bccdee
6 days ago
To quote an excellent article from last week:
> The AI has suggested a solution, but the added code is arguably useless or wrong. There is a huge decision space to consider, but the AI tool has picked one set of decisions, without any rationale for this decision.
> [...]
> Programming is about lots of decisions, large and small. Architecture decisions. Data validation decisions. Button color decisions.
> Some decisions are inconsequential and can be safely outsourced. There is indeed a ton of boilerplate involved in software development, and writing boilerplate-heavy code involves near zero decisions.
> But other decisions do matter.
(from https://lukasatkinson.de/2025/net-negative-cursor/)
Proponents of AI coding often talk about boilerplate as if that's what we spend most of our time on, but boilerplate is a cinch. You copy/paste, change a few fields, and maybe run a macro on it. Or you abstract it away entirely. As for the "agent" thing, typing git fetch, git commit, git rebase takes up even less of my time than boilerplate.
Most of what we write is not highly creative, but it is load-bearing, and it's full of choices. Most of our time is spent making those choices, not typing out the words. The problem isn't hallucination, it's the plain bad code that I'm going to have to rewrite. Why not just write it right myself the first time? People say "it's like a junior developer," but do they have any idea how much time I've spent trying to coax junior developers into doing things the right way rather than just doing them myself? I don't want to waste time mentoring my tools.
No, what's happening here is that you're using a different definition of "boilerplate" than the adopters are using. To you, "boilerplate" is literally a chunk of code you copy and paste to repeatedly solve a problem (btw: I flip my shit when people do this on codebases I work on). To them, "boilerplate" represents a common set of rote solutions to isomorphic problems. The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.
Coming at this from a computer-science or PLT perspective, this idea of an "abstract, repeatable meta-boilerplate" is exactly the payoff we expect from language features like strong type systems. Part of the point of rigorous languages is to create these kinds of patterns. You had total expressiveness back in assembly language! Repeatable rigor is most of the point of modern languages.
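To make "repeatable rigor" concrete, here's a minimal TypeScript sketch (all names invented) of a rote pattern captured once, generically, with the type system holding every use site to the same shape:

```typescript
// A rote pattern ("look an item up by id") written once, generically.
// The compiler forces every concrete use into the same shape.
interface HasId {
  id: string;
}

function findById<T extends HasId>(items: T[], id: string): T | undefined {
  return items.find((item) => item.id === id);
}

type User = { id: string; name: string };
type Order = { id: string; total: number };

const user = findById<User>([{ id: "u1", name: "Ada" }], "u1");
const order = findById<Order>([{ id: "o1", total: 42 }], "o1");
```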
> To them, "boilerplate" represents a common set of rote solutions to isomorphic problems.
That's what libraries and frameworks are here for, and that's why no experienced engineer considers those an issue. What's truly important is the business logic; you find a set of libraries that solves the common use cases and write the rest. Sometimes you're in a novel space that doesn't have libraries (a new programming language), but even then you have specs and reference implementations to help you out.
The actual boilerplate is when you have to write code twice because the language ecosystem doesn't have good macros à la lisp that would let you invent some metastuff for the problem at hand (think writing routers for express.js).
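For a concrete picture of the express.js case, a rough sketch (the services are hypothetical stubs, so it stands alone):

```typescript
import express from "express";

// Hypothetical services, stubbed so the sketch is self-contained.
const userService = { getById: async (id: string) => ({ id, kind: "user" }) };
const orderService = { getById: async (id: string) => ({ id, kind: "order" }) };

const app = express();

// The same shape restated by hand for every resource: read a path param,
// call a service, serialize the result. A lisp-style macro could stamp
// these out; without one, each route is written again.
app.get("/users/:id", async (req, res) => {
  res.json(await userService.getById(req.params.id));
});

app.get("/orders/:id", async (req, res) => {
  res.json(await orderService.getById(req.params.id));
});

app.listen(3000);
```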
People keep saying this. LLMs (not even agents; LLMs themselves, intrinsically) use frameworks. They're quite good at them. Frameworks make programs more legible to LLMs, not less.
Copy-pasting code that could be abstracted is not a usage of "boilerplate" I've ever encountered. Usually it's a reference to certain verbose languages where you have to write a bunch of repetitive, low-entropy stuff to get anywhere, like getters and setters in Java classes.
Getters and setters definitely fall into "copy/paste + macro" territory for me. Just copy/paste your field list and run a macro that turns each field into a getter and setter. Or use an IDE shortcut. Lombok obviates all this anyway of course.
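In TypeScript terms (the thread's example is Java, but the shape is identical; class invented for illustration):

```typescript
// The rote accessor pattern: each private field gets a near-identical
// getter/setter pair, which is exactly the text an editor macro (or,
// in Java, Lombok) can generate straight from the field list.
class Account {
  private _owner = "";
  private _balance = 0;

  get owner(): string { return this._owner; }
  set owner(value: string) { this._owner = value; }

  get balance(): number { return this._balance; }
  set balance(value: number) { this._balance = value; }
}

const acct = new Account();
acct.owner = "Ada";
acct.balance = 100;
```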
lol shadcn
> The actual lines of code might be quite different, but the approach is the same. That's not necessarily something you can copy-paste.
Assuming something like "a REST endpoint which takes a few request parameters, makes a DB query, and returns the response" fits what you're describing, you can absolutely copy/paste a similar endpoint, change the parameters and the database query, and rename a couple variables—all of which takes a matter of moments.
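Concretely, a sketch of that copy/paste flow (the tables are hypothetical and the db helper is a stub standing in for a real client):

```typescript
import express from "express";

const app = express();

// Hypothetical query helper standing in for a real DB client.
const db = {
  query: async (sql: string, params: unknown[]): Promise<unknown[]> => [],
};

// Endpoint #1, written by hand:
app.get("/products", async (req, res) => {
  const rows = await db.query(
    "SELECT * FROM products WHERE category = $1 LIMIT $2",
    [req.query.category, req.query.limit ?? 20],
  );
  res.json(rows);
});

// Endpoint #2 is #1 pasted, with the path, table, and parameter renamed:
app.get("/suppliers", async (req, res) => {
  const rows = await db.query(
    "SELECT * FROM suppliers WHERE region = $1 LIMIT $2",
    [req.query.region, req.query.limit ?? 20],
  );
  res.json(rows);
});
```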
Naturally code that is being copy-pasted wholesale with few changes is ripe to be abstracted away, but patterns are still going to show up no matter what.
But the LLM will write that pretty much instantly after you've given it one example to extrapolate from.
It'll even write basic unit tests for your CRUD API while it's at it.
> solutions to isomorphic problems
“Isomorphic” is a word that describes a mapping (or a transformation) that preserves some properties that we believe to be important.
The word you’re looking for is probably “similar” not “isomorphic”. It sure as hell doesn’t sound as fancy though.
... yes, that is why I chose the word? Literally: preservation of structural similarity. Not simply "similarity", which could mean anything.
But what do you make of the parent’s second paragraph? This is the way I feel as well - I would rather not spend my time asking AI to do something right that I could just do myself.
I bit the bullet last week and tried to force myself to use a solution built end to end by AI. By the time I'd finished asking it to make changes (about 25 in total), it was clear I would've had a much nicer time doing it myself.
The thing in question was admittedly only partially specified. It was a yaml-based testing tool for running some scenarios involving load tests before and after injecting some faults into the application. I gave it the yaml schema up front, and it did a sensible job as a first pass. But then I was in the position of reading what it wrote, seeing some implicit requirements I'd not specified, and asking for those.
Had I written it myself from the start, those implicit requirements would've been more natural to think about while iterating on the tool. But in this workflow I just couldn't get into a flow state; the process felt very unnatural, not unlike asking a junior to do it and taking them through 25 rounds of code review. And that has always been a miserable task, difficult to force oneself to stay engaged with. By the end I was much happier making manual tweaks, and I wish I'd written it myself from the start.
I'm firmly convinced at this point that there is just no arguing with the haters. At the same time, it feels like this transition is as inevitable as the transition to mobile phones. LLMs are everywhere, and there's no escaping it no matter how much you might want to.
There's always some people that will resist to the bitter end, but I expect them to be few and far between.
It's not really a matter of wanting to escape them. I've used them. They're not good for writing code. They feel zippy, but you have to constantly clean up after them. It's as slow and frustrating as trying to walk a junior engineer through a problem they don't fully understand. I'd rather do it myself.
If the AI agent future is so inevitable, then why do people waste so much oxygen insisting upon its inevitability? Just wait for it in silence. It certainly isn't here yet.
> there's no escaping it no matter how much you might want to
And if we accept that inevitability, it becomes a self-fulfilling prophecy. The fact that some people _want_ us to give in is a reason to keep resisting.
There is absolutely no inevitability as long as there is a willingness to contemplate what is happening.
Your article comes across like you think every developer is exactly the same as you; it's a very egocentric piece.
Not everyone is just cranking out hacked together MVPs for startups
Do you not realize there are many many other fields and domains of programming?
Not everyone has the same use case as you
> Not everyone is just cranking out hacked together MVPs for startups
Now here’s the fun part: In a really restrictive enterprise environment where you’ve got unit tests with 85% code coverage requirements, linters and static typing, these AI programming assistants actually perform even better than they do when given a more “greenfield” MVP-ish assignment with lots of room for misinterpretation. The constant “slamming into guardrails” keeps them from hallucinating and causes them to correct themselves when they do.
The more annoying boxes your job makes you tick, the more parts of the process that make you go “ugh, right, that”, the more AI programming assistants can help you.
Which part of his article is specific to his use case? It all looks fairly general to me.
And not everyone working in things that aren’t “hacked together MVPs” has your experience. You can read any number of reports about code generated at FAANG, incident response tooling that gets to RCA faster, etc.
There are obviously still things it can’t do. But the gap between “I haven’t been able to get a tool to work” and “you’re wrong about the tool being useful” is large.
At least when you mentor an actual junior developer, they often learn, and you can take satisfaction in aiding the growth of a human being. All the time and effort spent coaxing an LLM to "do better" either disappears in a puff of smoke the next time it goes schizoid and needs its context cleared or, at best, is recorded to help a for-profit company train its next generation of products.
Like everything else about the "GenAI" fad, it boils down to extractively exploiting goodwill and despoiling the commons in order to convert VC dollars into penny-shavings.
Boilerplate is a cinch when you already know what to do.
I work in finance; I have for almost 20 years now. There are things in finance you do once every 5 years, like setting up a data source like Bloomberg in a new programming language. Now you know from the last time you did it that it's a pain: you need to use a very low-level API, handling all the tiny messages yourself and building up the response as it arrives from the source in unordered packets. It's asynchronous, there is a message queue, and what I specialize in is maths.
Now I could spend hours reading documents, putting crap together, and finally come up with some half-baked code that ignores most possible error points.
Or I could use ChatGPT and leverage the fact that hundreds of implementations of the same module exist out there. And make something that just works.
That is the first ever coding question I asked an LLM and it literally saved me days of trial and error for something where my added value is next to zero.
Similarly I use LLMs a lot for small tasks that are in fact fairly difficult, and that don’t add any value to the solution. Things like converting data structures in an efficient way using Python idioms, or JavaScript 2023 features, that there is no way I can keep up with.
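On the JavaScript side, for instance, a minimal sketch of the ES2023 additions I mean (invented data):

```typescript
// ES2023 array methods: non-mutating variants that are easy to miss
// if you aren't tracking the spec year to year.
const readings = [3, 9, 4, 7];

const sorted = readings.toSorted((a, b) => a - b); // [3, 4, 7, 9]; original untouched
const lastSmall = readings.findLast((x) => x < 5); // 4
const bumped = readings.with(1, 10);               // [3, 10, 4, 7]

console.log(sorted, lastSmall, bumped);
console.log(readings);                             // still [3, 9, 4, 7]
```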
The thing that makes an agent special is making some kind of effort to gather the relevant context before generating. The quality of the edits from the "agent" panel in Cursor/Copilot/etc is quite a bit higher than the type-ahead suggestions or the inline prompt stuff.
Bizarrely, though, it seems to be limited to grep for the moment; it doesn't work with LSP yet.
OP: https://fly.io/blog/youre-all-nuts/#but-its-bad-at-rust
> (from https://lukasatkinson.de/2025/net-negative-cursor/)
looks inside
complaining about Rust code
Plus, it looks like it just hard-coded values. I see this happen a lot with AI code; even when I try to get it not to, it still tends to do it.
Issues like that are simple, but they create debt. Sure, it "works" now, but who writes code without knowing that things will change next week or next month? That's the whole reason we use objects and functions in the first place!
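A tiny invented TypeScript sketch of the difference:

```typescript
// What the AI tends to emit: the magic number inlined at the use site.
function isValidPort(port: number): boolean {
  return port > 0 && port <= 65535;
}

// The same check with the value named once, so next month's change
// (or next reader's question) touches a single line.
const MAX_PORT = 65535; // largest unsigned 16-bit value
function isValidPortNamed(port: number): boolean {
  return port > 0 && port <= MAX_PORT;
}
```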
Yeah, only in Rust is the maximum value of an unsigned 16-bit integer 65535.
These aren't Rust-specific syntax foibles. It's not a borrow-checker mistake or anything. These are basic CS fundamentals that it's thoughtlessly fumbling.
TBH the critique is completely valid given that Cursor advertised shitty code on their homepage.
The Rust code in question is the example on the Cursor landing page, though.
The current image on the landing page might be even worse. It just updates Message to MessageV1. Why would you pay money for what amounts to a string replacement?
The comment on the right says it'll help the user with protocol versioning. This is not how you do that...
Adding this comment to my HN bookmarks! Well said
> Most of what we write is not highly creative, but it is load-bearing, and it's full of choices.
The idea that you can't specify the load-bearing pillars of your structure to the AI, or that it couldn't figure them out if you specified the right requirements/constraints, will not age well.
> The idea that you can't specify the load-bearing pillars of your structure to the AI
But English is a subjective and fuzzy language, and the AI typically can't intuit the more subtle points of what you need. In my experience a model's output always needs further prompting. If only there were a formal, rigorous language to express business logic in! Some sort of "programming language."
> But English is a subjective and fuzzy language, and the AI typically can't intuit the more subtle points of what you need.
I disagree on the "can't". LLMs seem no better or worse than humans at making assumptions when given a description of needs, which shouldn't be surprising since they infer such things from examples of humans doing the same thing. In principle, there's nothing preventing a targeted programming system from asking clarifying questions.
> In my experience a model's output always needs further prompting.
Yes, and the early days of all tooling were crude. Don't underestimate the march of progress.
What have you written with AI that has made you or your business money?
> What have you written with AI that has made you or your business money?
I use R a little more than I should, given the simplicity of my work. Claude writes better R quicker than I can. I double check what it's doing. But it's easier to double check that it used twang correctly than to spend five minutes trying to remember how to use the weird package that does propensity scoring [1].
I'm sure data analysis will still sort of be a thing. But for most commercial applications at sub-enterprise scale, it's just not as valuable anymore when done by a human.
[1] https://cran.r-project.org/web/packages/twang/index.html