Comment by atonse

25 days ago

> LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code).

This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.

One viewpoint isn’t necessarily more valid, just a difference of wiring.

I noticed the same thing, but wasn't able to put it into words before reading that. Been experimenting with LLM-based coding just so I can understand it and talk intelligently about it (instead of just being that grouchy curmudgeon), and the thought in the back of my mind while using Claude Code is always:

"I got into programming because I like programming, not whatever this is..."

Yes, I'm building stupid things faster, but I didn't get into programming because I wanted to build tons of things. I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

If I was intellectually excited about telling something to do this for me, I'd have gotten into management.

  • Same same. Writing the actual code is always a huge motivator behind my side projects. Yes, producing the outcome is important, but the journey taken to get there is a lot of fun for me.

    I used Claude Code to implement an OpenAI 4o-vision-powered receipt-scanning feature in an expense tracking tool I wrote by hand four years ago. It did it in two or three shots while taking my codebase into account.

    It was very neat, and it works great [^0], but I can't latch onto the idea of writing code this way. Powering through bugs while implementing a new library or learning how to optimize my test suite in a new language is thrilling.

    Unfortunately (for me), it's not hard at all to see how the "builders" that see code as a means to an end would LOVE this, and businesses want builders, not crafters.

    In effect, knowing the fundamentals is getting devalued at a rate I've never seen before.

    [^0] Before I used Claude to implement this feature, my workflow for processing receipts looked like this: tap an iOS Shortcut, enter the amount, snap a pic of the receipt, type up the merchant, amount, and description for the expense, then have the Shortcut POST that to my expense-tracking toolkit, which then POSTs it into a Google Sheet. This feature eliminated the need for me to enter the merchant and amount. Unfortunately, it often took more time to confirm that the merchant, amount, and date details OpenAI provided were correct (and to correct them when they were wrong, which was most of the time) than it did to type out those details manually, so I just went back to my manual workflow. However, the temptation to just glance at the details and tap "This looks correct" was extremely high, even when the info it generated was completely wrong! It's the perfect analogue to what I've been witnessing throughout the rise of the LLMs.

  • Same. This kind of coding feels like it got rid of the building aspect of programming that always felt nice, and it replaced it entirely with business logic concerns, product requirements, code reviews, etc. All the stuff I can generally take or leave. It's like I'm always in a meeting.

    >If I was intellectually excited about telling something to do this for me, I'd have gotten into management.

    Exactly this. This is the simplest and tersest way of explaining it yet.

  • This gets at the heart of the quality-of-results issues a lot of people are talking about elsewhere here. Right now, if you treat these systems as something you can simply tell what you want and it will do it for you, you're building a sandcastle. If, instead, you also describe the correct data structures and appropriate algorithms to use against them, as well as the particulars of how you want the problem solved, it's a different situation altogether. Like most systems, the quality of output is in some way determined by the quality of input.

    There is a strange insistence, in the subtext to this question a lot of the time, on not helping the LLM arrive at the best outcome. I feel like we are living through the John Henry legend in real time.

  • > I got into it for the thrill of defining a problem in terms of data structures and instructions a computer could understand, entering those instructions into the computer, and then watching victoriously while those instructions were executed.

    You can still do that with Claude Code. In fact, Claude Code works best the more granular your instructions get.

    • > Claude Code works best the more granular your instructions get.

      So best feed it machine code?

  • Funny you say that, because I have never enjoyed management as much as being hands on and directly solving problems.

    So maybe our common ground is that we are direct problem solvers. :-)

    • For some reason this makes me think of a jigsaw puzzle. People usually complete these puzzles because they enjoy the process, where at the end you get a picture that you can frame if you want to. Some people just seem to want the resulting picture, with no interest in the process at all.

      I guess those are the same people who went to all those coding camps during their heyday because they heard about software engineering salaries. They just want the money.


IMO, this isn't entirely a "new world" either; it is just a new domain where the conversation amplifies the opinions even more (weird how that is happening in a lot of places).

What I mean by that: you had compiled vs interpreted languages, you had typed vs untyped, testing strategies; all of that, at least in some part, was a conversation about the tradeoffs between moving fast/shipping and maintainability.

But it isn't just tech, it is also in methodologies and the words we use, from "move fast and break things" and "yagni" to "design patterns" and "abstractions".

As you say, it is a different viewpoint... but my biggest concern with where we are as an industry is that these are not just "equally valid" viewpoints on how to build software... they are quite literally different stages of software that, AFAICT, pretty much all successful software has to go through.

Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)

  • > Much of my career has been spent in teams at companies with products that are undergoing the transition from "hip app built by scrappy team" to "profitable, reliable software" and it is painful. Going from something where you have 5 people who know all the ins and outs and can fix serious bugs or ship features in a few days to something that has easy clean boundaries to scale to 100 engineers of a wide range of familiarities with the tech, the problem domain, skill levels, and opinions is just really hard. I am not convinced yet that AI will solve the problem, and I am also unsure it doesn't risk making it worse (at least in the short term)

    This perspective is crucial. Scale is the great equalizer / demoralizer: scale of the org and scale of the systems. Systems become complex quickly, and verifiability of correctness and function becomes harder. For companies that built from day one with AI and have AI influencing them as they scale, where does complexity begin to run up against the limitations of AI and cause regression? Or, if all goes well, amplification?

But how can you be a responsible builder if you don't trust the LLMs to do the "right thing"? Suppose you're the head of a software team where you've picked the best candidates for a given project; in that scenario I can see how one is able to trust the team members to orchestrate the implementation of your ideas and intentions without you being intimately familiar with the details. Can we place the same trust in LLM agents? I'm not sure. Even if one could somehow prove that LLMs are very reliable, the fact that AI agents aren't accountable beings renders the whole situation vastly different from the human equivalent.

  • Trust but verify:

    I test all of the code I produce via LLMs, usually doing fairly tight cycles. I also review the unit test coverage manually, so that I have a decent sense that it really is testing things - the goal is less perfect unit tests and more just quickly catching regressions. If I have a lot of complex workflows that need testing, I'll have it write unit tests and spell out the specific edge cases I'm worried about, or setup cheat codes I can invoke to test those workflows out in the UI/CLI.

    Trust comes from using them often - you get a feeling for what a model is good and bad at, and what LLMs in general are good and bad at. Most of them are a bit of a mess when it comes to UI design, for instance, but they can throw together a perfectly serviceable "About This" HTML page. Any long-form text they write (such as that About page) is probably trash, but that's super-easy to edit manually. You can often just edit down what they write: they're actually decent writers, just very verbose and unfocused.

    I find it similar to management: you have to learn how each employee works. Unless you're in the Top 1%, you can't rely on every employee giving 110% and always producing perfect PRs. Bugs happen, and even NASA-strictness doesn't bring that down to zero.

    And just like management, some models are going to be the wrong employee for you because they think your style guide is stupid and keep writing code how they think it should be written.

  • You don't simply put a body in a seat and get software. There are entire systems enabling this trust: college, resume, samples, referral, interviews, tests and CI, monitoring, mentoring, and performance feedback.

    And accountability can still exist? Is the engineer that created or reviewed a Pull Request using Claude Code less accountable than one that used PICO?

    • > And accountability can still exist? Is the engineer that created or reviewed a Pull Request using Claude Code less accountable than one that used PICO?

      The point is that in the human scenario, you can hold the human agents accountable. You cannot do that with AI. Of course, you as the orchestrator of agents will be accountable to someone, but you won't have the benefit of holding your "subordinates" accountable, which is what you do in a human team. IMO, this renders the whole situation vastly different (whether good or bad I'm not sure).

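The "trust but verify" loop described in this thread can be sketched concretely. Everything below is hypothetical (a tiny parser of the sort an LLM might produce, plus hand-reviewed regression tests spelling out the edge cases worth worrying about); it illustrates the practice, not anyone's actual code:

```python
# Hypothetical: a small helper an LLM might generate for a receipt-scanning
# feature, extracting a dollar amount from OCR'd text.
def parse_amount(text: str) -> float:
    """Parse strings like '$12.50' or ' 12,50 ' into a float amount."""
    cleaned = text.strip().lstrip("$").replace(",", ".")
    return round(float(cleaned), 2)

# The hand-written regression tests: each edge case is spelled out
# explicitly, so any later LLM edit that breaks one fails fast.
assert parse_amount("$12.50") == 12.50
assert parse_amount(" 12,50 ") == 12.50  # European decimal comma
assert parse_amount("$0.99") == 0.99
```

The goal, as the comment above puts it, is less perfect coverage and more a cheap tripwire for regressions.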

I think he's really getting at something there. I've been thinking about this a lot (in the context of trying to understand the persistent-on-HN skepticism about LLMs), and the framing I came up with[1] is top-down vs. bottom-up dev styles, aka architecting code and then filling in implementations, vs. writing code and having architecture evolve.

[1] https://www.klio.org/theory-of-llm-dev-skepticism/

I remember leaving university and going into my first engineering job, thinking "Where is all the engineering? All the problem solving and building of complex systems? All the math and science? Have I been demoted to a lowly programmer?"

Took me a few years to realize that this wasn't a universal feeling, and that many others found the programming tasks more fulfilling than any challenging engineering. I suppose this is merely another manifestation of the same phenomenon.

Maybe there's an intermediate category: people who like designing software? I personally find system design more engaging than coding (even though I enjoy coding as well). That's different from just producing an opaque artifact that seems to solve my problem.

So far I haven't seen it actually be effective at "building" in a work context with any complexity, and this despite some on our team desperately trying to make that the case.

  • I have! You have to be realistic about the projects. The more irreducible local context it needs, the less useful it will be. Great for greenfield code, one-shots, and write-once, read-once code that runs for months.

  • Agreed. I don’t care for engineering or coding, and would gladly give them up the moment I can. I’m also running a one-man business where every hour counts (and where I’m responsible for maintaining every feature).

    The fact of the matter is that LLMs produce lower quality, at higher volume, in more time than it would take to write it myself, and I’m a very mediocre engineer.

    I find this separation of “coding” vs “building” so offensive. It’s basically just saying some people are only concerned with “inputs” while others only with “outputs”. This kind of rhetoric is so toxic.

    It’s like saying LLM art is separating people into people who like to scribble, and people who like to make art.

    • Would you accept 'people who like to make art, and people who like to commission somebody to make art and give them lots of notes in the process'?


I think the division is more likely tied to writing. You have to fundamentally change how you do your job, from writing a formal language for a compiler to writing natural language for a junior-goldfish-memory-allstar-developer, closer to management than to contributor.

This distinction, to me, separates the two primary camps.

> > LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

> I’ve always said I’m a builder even though I’ve also enjoyed programming (but for an outcome, never for the sake of the code)

> This perfectly sums up what I’ve been observing between people like me (builders) who are ecstatic about this new world and programmers who talk about the craft of programming, sometimes butting heads.

That's one take, sure, but it's a specially crafted one to make you feel good about your position in this argument.

The counter-argument is that LLM coding splits up engineers based on those who primarily like engineering and those who like managing.

You're obviously one of the latter. I, OTOH, prefer engineering.

  • I prefer engineering too, I tried management and I hated it.

    It's just the level of engineering we're split on. I like the type of engineering where I figure out the flow of data, maybe the data structures and how they move through the system.

    Writing the code to do that is the most boring part of my job. The LLM does it now. I _know_ how to do it, I just don't want to.

    It all boils down to communication in a way. Can you communicate what you want in a way others (in this case a language model) understand? And the parts you can't communicate in a human language, can you use tools to define them (linters, formatters, editorconfig)?

    I've done all that with actual humans for ... a decade? So applying the exact same thing to a machine is weirdly more efficient, it doesn't complain about the way I like to have my curly braces - it just copies the defined style. With humans I've found out that using impersonal tooling to inspect code style and flaws has a lot less friction than complaining about it in PR reviews. If the CI computer says no, people don't complain, they fix it.

> > LLM coding will split up engineers based on those who primarily liked coding and those who primarily liked building.

This is much less significant than the fact that LLMs split engineers into those who primarily like quality vs. those who primarily like speed.

I feel like this is the core issue that will actually stall LLM coding tools short of actually replacing coding work at large.

'Coders' make 'builders' keep the source code good enough so that 'builders' can continue building without breaking what they built.

If 'builders' become 10x productive and 'coders' become unable to keep up with the insurmountable pile of unmaintainable mess that 'builders' proudly churn out, 'builders' will start running into the impossibility of building further without starting over and over again, hoping that agents will be able to get it right this time.

  • "Coders" can code tools that programmatically define quality. We have like 80% of those already.

    Then force the builders to use those tools to constrain their output.
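A minimal sketch of such a programmatic quality gate, using only the Python standard library (the 30-line limit and all function names here are invented for illustration):

```python
# Hypothetical quality gate: reject any function longer than a set limit,
# the kind of constraint a "coder" could enforce on "builder" output in CI.
import ast

MAX_FUNC_LINES = 30

def violations(source: str) -> list[str]:
    """Return the names of functions that exceed MAX_FUNC_LINES."""
    bad = []
    for node in ast.walk(ast.parse(source)):
        if isinstance(node, (ast.FunctionDef, ast.AsyncFunctionDef)):
            # end_lineno/lineno give the function's span in the source.
            if node.end_lineno - node.lineno + 1 > MAX_FUNC_LINES:
                bad.append(node.name)
    return bad

ok_code = "def small():\n    return 1\n"
big_code = "def huge():\n" + "\n".join(f"    x{i} = {i}" for i in range(40))
assert violations(ok_code) == []
assert violations(big_code) == ["huge"]
```

In practice you would reach for existing linters and complexity checkers rather than rolling your own, which is the point of the comment above: most of these tools already exist.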

Yeah, I think this is a bit of insight I had not realized / been able to word correctly yet. There are developers who can let Claude go at it and be fearless about it, like me (though I mostly do it for side projects, but WOW), and then there are developers who will use it like a hammer or axe to help cut down or mold whatever is in their path.

I think both approaches are okay. The biggest thing for me is that the former needs to test way more and review the code more; as developers we don't read code enough, and with the "prompt and forget" approach we have a lot of free time we could spend reading the code and asking the model to refactor and refine it. I am shocked when I hear about hundreds of thousands of lines in some projects. I've rebuilt Beads from the ground up and I'm under 10 lines of code.

So we're going to have various levels of AI Code Builders, if you will: Junior, Mid, Senior, Architect. I don't know if models will pick up the slack for Juniors any time soon. We would need massive context windows for models, and who will pay for that? We need a major AI breakthrough that drives the cost down drastically before that becomes profitable.

I think there's a place for both.

We have services deployed globally serving millions of customers where rigor is really important.

And we have internal users who're building browser extensions with AI that provide valuable information about the interface they're looking at including links to the internal record management, and key metadata that's affecting content placement.

These tools could be handed out on Zip drives in the street and they would just show our users some of the metadata already being served up to them. But it's amazing to strip out 75% of the process for certain things and just have our user build out these tools (in this case it's one user driving all of this, so it does take some technical inclination). They save our editors so much time, when doing this before would have been months and months of discovery, coordination, and designs that probably wouldn't have been as useful in the end, after the wants of the user were diluted through 18 layers of process.

I like building, but I don't fool myself into thinking it can be done by taking shortcuts. You could build something that looks like a house for half the cost but it won't be structurally sound. That's why I care about the details. Someone has to.

The new LLM centered workflow is really just a management job now.

Managers and project managers are valuable roles and have important skill sets. But there's really very little connection with the role of software development that used to exist.

It's a bit odd to me to include both of these roles under a single label of "builders", as they have so little in common.

EDIT: this goes into more detail about how coding (and soon other kinds of knowledge work) is just a management task now: https://www.oneusefulthing.org/p/management-as-ai-superpower...

  • I don't disagree. At some point LLMs might become good enough that we wouldn't need exact technical expertise.

I enjoy both and have ended up using AI a lot differently than vibe coders. I rarely use it for generating implementations, but I use it extensively for helping me understand docs/APIs and, more importantly, for debugging. AI saves me so much time trying to figure out why things aren’t working, and in code review.

I deliberately avoid full vibe coding since I think doing so will rust my skills as a programmer. It also really doesn’t save much time in my experience. Once I have a design in mind, implementation is not the hard part.

There's more to it than just coding vs building though.

For a long time in my career now I've been in a situation where I'd be able to build more if I was willing to abstract myself and become a slide-merchant/coalition-builder. I don't want to do this though.

Yet, I'm still quite an enthusiastic vibe-coder.

I think it's less about coding vs building and more about tolerance for abstraction and politics. And I don't think there are that many people who are so intolerant of abstraction that they won't let agents write a bunch of code for them.

I’ve heard something similar: “there are people who enjoy the process, and people who enjoy the outcome”. I think this saying comes more from artistic circles.

I’ve always considered myself a “process” person; I would even get hung up on certain projects because I enjoyed crafting them so much.

LLM’s have taken a bit of that “process” enjoyment from me, but I think have also forced some more “outcome” thinking into my head, which I’m taking as a positive.

To me this is similar to car enthusiasts. Some people absolutely love building their project car; it's a major part of the hobby for them. Others just love the experience of driving, so they buy ready-made cars or just pay someone to work on the car.

Agree completely. I used to be (and still would love to be) a process person, enjoying hand-writing bulletproof artisanal code. Switching to startups many years ago gave me a whole new perspective, and the struggle between writing code and shipping has been interesting, especially when you don't know how long the code you are writing will actually live. LLMs are fantastic in that space.

Makes sense if you are a data scientist, where people need to be boxed into tidy little categories. But some people probably fall into both categories.

> I enjoy both and have ended up using AI a lot differently than vibe coders. I rarely use it for generating implementations, but I use it extensively for helping me understand docs/apis and more importantly, for debugging. AI saves me so much time trying to figure out why things aren’t working and in code review.

I had felt like this, and still do, but man, at some point the management churn feels real and I just feel like I'm suffering from a new problem.

Suppose I actually end up having services literally deployed from a single prompt, nothing else. Earlier I used to have AI write code, but I was interested in the deployment and everything around it; now there are services which do that really neatly for you. (I also really didn't give in to the agent hype and mostly used LLMs in the browser.)

Like, on one hand you feel more free to build projects, but the whole joy of the project got completely reduced.

I mean, I guess I am one of the junior devs, so to me AI writing code on topics I didn't know / prototyping felt awesome.

I mean, I was still involved in, say, copy-pasting or looking at the code it generates, seeing the errors, and sometimes trying things out myself. If AI is doing all that too, idk.

For some reason, I have recently been uninterested in AI. I have used it quite a lot for prototyping, but this completely out-of-the-loop programming with recent services just feels very off to me.

I also feel like there is this sense that if I pay for some AI thing, I have to maximally extract "value" out of it.

I guess the issue could be that I can give it vague terms, or a very small text file as input (like "just do an X alternative in Y lang"), and then I'm unable to understand the architectural decisions, and I'm overwhelmed by it.

It's probably gonna take either spec-driven development, where I clearly define the architecture, or something I saw Primeagen do recently, where the AI only manipulates the code of one particular function (I imagine it for a file as well). Somehow I feel like that's something I could enjoy more, because right now it feels like I don't know what I have built at times.

When I prototype single-file projects in, say, the browser for funsies or any idea, I get some idea of what the code uses, its dependencies, and the function names from start to end, even if I didn't look at the middle.

A bit of a ramble, I guess, but the thing that is making me feel this way: I was talking to somebody and showcasing some service where AI + a server is there, and they asked for something in a prompt and I wrote it. Then I let it do its job, but I was also thinking about how I would architect it (it was "detect food and then find BMR", and I was thinking first to use some API, but then I thought, meh, that might be hard; why not use AI vision models? Okay, what's the best? Gemini seems good/cheap).

And I went to the coding thing to see what it did, and it actually went even beyond by using the free tier of Gemini (which I guess didn't end up working; it could be some rate limit on my own key, but honestly it would've been the thing I would've tried too).

So, like, I used to pride myself on the architectural decisions I make, even if AI could write code faster, but now that is taken away as well.

I really don't want to read AI code, so honestly, at this point I might as well write code myself and learn hands-on. But I have a problem with the build-fast-in-public-like attitude that I have, and I'm just not finding it fun.

I feel like I should take a more active role in my projects, and I am really just figuring out the perfect way to use AI in such contexts, and when to use how much.

Thoughts?