Comment by gyomu

21 hours ago

This March 2025 post from Aral Balkan stuck with me:

https://mastodon.ar.al/@aral/114160190826192080

"Coding is like taking a lump of clay and slowly working it into the thing you want it to become. It is this process, and your intimacy with the medium and the materials you’re shaping, that teaches you about what you’re making – its qualities, tolerances, and limits – even as you make it. You know the least about what you’re making the moment before you actually start making it. That’s when you think you know what you want to make. The process, which is an iterative one, is what leads you towards understanding what you actually want to make, whether you were aware of it or not at the beginning. Design is not merely about solving problems; it’s about discovering what the right problem to solve is and then solving it. Too often we fail not because we didn’t solve a problem well but because we solved the wrong problem.

When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make. Being handed a baked and glazed artefact that approximates what you thought you wanted to make removes the very human element of discovery and learning that’s at the heart of any authentic practice of creation. Where you know everything about the thing you shaped into being from when it was just a lump of clay, you know nothing about the image of the thing you received for your penny from the vending machine."

And when programming with agentic tools, you need to actively push to keep the idea from regressing to the most obvious/average version. The amount of effort you need to expend pushing an idea that deviates from the 'norm' (because it's novel) is actually comparable to the effort it takes to type something out by hand. Just two completely different types of effort.

There's an upside to this sort of effort too, though. You actually need to make it crystal clear what your idea is and what it is not, because of the continuous pushback from the agentic programming tool. The moment you stop pushing back is the moment the LLM rolls over your project and more than likely destroys what was unique about your thing in the first place.

  • You just described the burden of outsourcing programming.

    • Outsourcing development and vibe coding are incredibly similar processes.

      If you just chuck ideas at the external coding team/tool you often get rubbish back.

      If you're good at managing the requirements and defining things well you can achieve very good things with much less cost.

    • With the basic and enormous difference that the feedback loop is 100x or even 1000x faster. Which changes the type of game completely, although other issues will probably arise as we try this new path.

      13 replies →

    • YES!

      AI assistance in programming is a service, not a tool. You are commissioning Anthropic, OpenAI, etc. to write the program for you.

  • Fair enough but I am a programmer because I like programming. If I wanted to be a product manager I could have made that transition with or without LLMs.

    • Agreed. The higher-ups at my company are, like most places, breathlessly talking about how AI has changed the profession - how we no longer need to code, but merely describe the desired outcome. They say this as though it’s a good thing.

      They’re destroying the only thing I like about my job - figuring problems out. I have a fundamental impedance mismatch with my company’s desires, because if someone hands me a weird problem, I will happily spend all day or longer on that problem. Think, hypothesize, test, iterate. When I’m done, I write it up in great detail so others can learn. Generally, this is well-received by the engineer who handed the problem to me, but I suspect it’s mostly because I solved their problem, not because they enjoyed reading the accompanying document.

      8 replies →

    • I’m a programmer (well half my job) because I was a short (still short) fat (I got better) kid with a computer in the 80s.

      Now, the only reason I code - and have been coding since the week I graduated from college - is to support my insatiable addictions to food and shelter.

      While I like seeing my ideas come to fruition, over the last decade my ideas were a lot larger than I could reasonably do over 40 hours without having other people working on projects I lead. Until the last year and a half where I could do it myself using LLMs.

      Seeing my carefully designed spec, including all of the cloud architecture, get done in a couple of days - with my hands on the wheel - when it would have taken at least a week of me doing some of the work while juggling a couple of other people, is life-changing.

      2 replies →

    • I became an auto mechanic because I love machining heads, and dropping oil pans to inspect, and fitting crankshafts in just right, and checking fuel filters, and adjusting alternators.

      If I wanted to work on electric power systems I would have become an electrician.

      (The transition is happening.)

  • I can't help but imagine training horses vs training cats. One of them is rewarding, a pleasure, beautiful to see; the other is frustrating, leaves you with a lot of scratches, and ultimately ends with both of you "agreeing" on a marginal compromise.

    • Right now vibe coding is more like training cats. You are constantly pushing against the model's tendency to produce its default outputs regardless of your directions. When those default outputs are what you want - which they are in many simple cases of effectively English-to-code translation with memorized lookup - it's great. When they are not, you might as well write the code yourself and at least be able to understand the code you've generated.

      2 replies →

  • This is why people think less of artists like Damien Hirst and Jeff Koons: their hands have never once touched the art. They have no connection to the effort. To the process. To the trial and error. To the suffering. They've outsourced it, monetized it, and made it as efficient as possible. It's also soulless.

  • To me it feels a bit like literate programming: it forces you to form a much more accurate idea of your project before you start. Not a bad thing, but it can also be wasteful when you eventually realise after the fact that the idea was actually not that good :)

    • Yeah, it's why I don't like trying to write up a comprehensive design before coding in the first place. You don't know what you've gotten wrong until the rubber meets the road. I try to get a prototype/v1 of whatever I'm working on going as soon as possible, so I can root out those problems as early as possible. And of course, that's on top of the "you don't really know what you're building until you start building it" problem.

  • > need to make it crystal clear

    That's not an upside unique to LLMs versus human-written code. When writing it yourself, you also need to make it crystal clear. You do that in the language of implementation.

    • And programming languages are designed for clarifying the implementation details of abstract processes; while human language is this undocumented, half grandfathered in, half adversarially designed instrument for making apes get along (as in, move in the same general direction) without excessive stench.

      The humane and the machinic need to meet halfway - any computing endeavor involves not only specifying something clearly enough for a computer to execute it, but also communicating to humans how to benefit from the process thus specified. And that's the proper domain not only of software engineering, but the set of related disciplines (such as the various non-coding roles you'd have in a project team - if you have any luck, that is).

      But considering the incentive misalignments which easily come to dominate in this space even when multiple supposedly conscious humans are ostensibly keeping their eyes on the ball, no matter how good the language machines get at doing the job of any of those roles, I will still intuitively mistrust them exactly as I mistrust any human or organization with responsibly wielding the kind of pre-LLM power required for coordinating humans well enough to produce industrial-scale LLMs in the first place.

      What's said upthread about the wordbox continually trying to revert you to the mean as you're trying to prod it with the cowtool of English into outputting something novel, rings very true to me. It's not an LLM-specific selection pressure, but one that LLMs are very likely to have 10x-1000xed as the culmination of a multigenerational gambit of sorts; one whose outset I'd place with the ever-improving immersive simulations that got the GPU supply chain going.

  • I think harder while using agents, just not about the same things. Just because we all got super powers doesn't make the problems go away; they just move, and we still have our full brains to solve them.

    It isn't all great - skills that feel important have already started atrophying, but other skills have been strengthened. The hardest part is being able to pace oneself, as well as figuring out how to start cracking certain problems.

  • Uniqueness is not the aim. Who cares if something is uniquely bad? But in any case, yes, if you use LLMs uncritically, as a substitute for reasoning, then you obviously aren't doing any reasoning and your brain will atrophy.

    But it is also true that most programming is tedious and hardly enriching for the mind. In those cases, LLMs can be a benefit. When you have identified the pattern or principle behind a tedious change, an LLM can work like a junior assistant, allowing you to focus on the essentials. You still need to issue detailed and clear instructions, and you still need to verify the work.

    Of course, the utility of LLMs is a signal that either the industry is bad at abstracting, or that there's some practical limit.

  • Yet another example of "comments that are only sort of true because high temperature sampling isn't allowed".

    If you use LLMs at very high temperature with samplers which correctly keep your writing coherent (e.g. min_p, or better options like top-h, P-less decoding, etc.), then "regression to the mean" literally DOES NOT HAPPEN!!!!
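
    For readers unfamiliar with it, here is a minimal sketch of the min_p idea being referenced (illustrative NumPy code and parameter values, not any particular library's API): the sampling cutoff is tied to the probability of the top token, so a high temperature widens the pool of plausible continuations without admitting the incoherent tail.

    ```python
    import numpy as np

    def sample_min_p(logits, temperature=2.0, min_p=0.1):
        """Sketch of min-p sampling: temperature-scale the logits, then keep
        only tokens whose probability is at least min_p times the probability
        of the most likely token."""
        probs = np.exp((logits - logits.max()) / temperature)  # stable softmax at high T
        probs /= probs.sum()
        keep = probs >= min_p * probs.max()    # dynamic cutoff scales with the top token
        filtered = np.where(keep, probs, 0.0)
        filtered /= filtered.sum()             # renormalise over the surviving tokens
        return np.random.choice(len(logits), p=filtered)
    ```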

    • Have you actually tried high temperature values for coding? Because I don’t think it’s going to do what you claim it will.

      LLMs don’t “reason” the same way humans do. They follow text predictions based on statistical relevance. So raising the temperature will more likely increase the likelihood of unexecutable pseudocode than it would create a valid but more esoteric implementation of a problem.

      4 replies →

To me it's all abstraction. I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them but I'm happy to work on the new thing that uses what's already there.

This is no different than many things. I could grow a tree and cut it into wood but I don't. I could buy wood and nails and brackets and make furniture but I don't. I instead just fill my house/apartment with stuff already made and still feel like it's mine. I made it. I decided what's in it. I didn't have to make it all from scratch.

For me, lots of programming is the same. I just want to assemble the pieces.

> When you skip the process of creation you trade the thing you could have learned to make for the simulacrum of the thing you thought you wanted to make

No, your favorite movie is not crap because the creators didn't grind their own lens. Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok) or their own game engine (plenty of great games use Unreal or Unity).

  • When I read discussions about this sort of thing, I often find that folks look hard for similarities and patterns, but once they succeed, they ignore the differences. AI in particular is so full of this "pattern matching" style of thinking that the real significance of this tech, i.e. how absolutely new and different it is, just sort of goes ignored. Or even worse, machines get "pattern matched" into humans and folks argue from that point of view, lol - witness all the "new musicians" who vibe code disco hits; I'll invariably see the argument that AIs train on existing music just like humans do, so what's the big deal?

    But these arguments and the OP's article do reinforce that AI rots brains. Even my sparing use of Google's Gemini and my interaction with the bots here have really dinged my ability to do simple math.

  • OS and compilers have a deterministic public interface. They obey a specification developers know, so you can rely on them when writing correct software that depends on them, even without knowing their internal behavior. Generative AI does not have those properties.

    • Yes, but developers don't have a deterministic interface either. I still had to be careful about writing out my specs and making sure they were followed. At least I don't have to watch my tone when my two mid-level ticket-taking developers - Claude and Codex - do something stupid. They also do it a lot faster.

    • But the code you’re writing is guard railed by your oversight, the tests you decide on and the type checking.

      So whether you're writing the spec code out by hand or asking an LLM to do it is beside the point if the code is considered a means to an end, which is what the post above yours was getting at.

      1 reply →

    • > They obey a specification developers know

      Which spec? Is there a spec that says if you use a particular set of libraries you’d get less than 10 millisecond response? You can’t even know that for sure if you roll your own code, with no 3rd party libraries.

      Bugs are, by definition, issues that arise when developers expect their code to do one thing but it does another, because of an unforeseen combination of factors. Yet we are all OK with that. That's why we accept AI code. It works well enough.

      5 replies →

  • > I didn't write my own OS. I didn't write my own compiler. I didn't write the standard library. I just use them. I could write them

    Maybe, but beware assuming you could do something you haven't actually tried to do.

    Everything is easy in the abstract.

  • > No, your favorite movie is not crap because the creators didn't grind their own lens.

    But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

    • > But Pulp Fiction would not have been a masterpiece if Tarantino just typed “Write a gangster movie.” into a prompt field.

      Doesn’t that prove the point? You could do that right now, and it would be absolute trash. Just like how right now we are nowhere close to being able to make great software with a single prompt.

      I’ve been vibecoding a side project and it has been three months of ideating, iterating, refining and testing. It would have taken me immeasurably longer without these tools, but the end result is still 100% my vision, and it has been a tremendous amount of work.

    • And if he did, why would I prefer using his prompt instead of mine?

      "Write a gangster movie that I like", instead of "...a movie this other guy likes".

      But because this is not the case, we appreciate Tarantino more than we appreciate gangster movies. It is about the process.

      4 replies →

  • The creative process is not dependent on the abstraction.

    > For me, lots of programming is the same. I just want to assemble the pieces

    How did those pieces come to be? By someone assembling other pieces, or by someone crafting them out of nothing because nobody else had written them at the time?

    Of course you reuse other parts and abstractions for whatever you're not working on, but each time you do something that hasn't been done before you can't help but engage the creative process, even if you're sitting on top of 50 years' worth of abstractions.

    In other words, what a programmer essentially has is a playfield. And whether the playfield is a stack of transistors or coding agents, when you program you create something new even if it's defined and built in terms of the playfield.

  • >I instead just fill my house/apartment with stuff already made and still feel like it's mine.

    I'm starting to wonder if we lose something in all this convenience. Perhaps my life is better because I cook my own food, wash my own dishes, chop my own firewood, drive my own car, write my own software. Outwardly the results look better the more I outsource but inwardly I'm not so sure.

    On the subject of furnishing your house the IKEA effect seems to confirm this.

    https://en.wikipedia.org/wiki/IKEA_effect

  • I really appreciate this sentiment. The pace at which new tools and AI protocols are being released feels absolutely overwhelming, leaving a feeling of constantly falling behind. But approaching it from the other end, I can just make the things I come up with, and explore the new protocols only if I can't do the thing with what I've already grasped.

  • There are two stages to becoming a decent programmer: first you learn to use abstraction, then you learn when not to use abstraction.

    Trying to find the right level is the art. Once you learn the tools of the trade and can do abstraction, it's natural to want to abstract everything. Most programmers go through such a phase. But sometimes things really are distinct and trying to find an abstraction that does both will never be satisfactory.

    When building a house there are generally a few distinct trades that do the work: bricklayers, joiners, plumbers, electricians etc. You could try to abstract them all: it's all just joining stuff together isn't it? But something would be lost. The dangers of working with electricity are completely different to working with bricks. On the other hand, if people were too specialised it wouldn't work either. You wouldn't expect a whole gang of electricians, one who can only do lighting, one who can only do sockets, one who can only do wiring etc. After centuries of experience we've found a few trades that work well together.

    So, yes, it's all just abstraction, but you can go too far.

    • Well said, great analogy. Sometimes the level of abstraction feels arbitrary - you have to understand the circumstances that led there to see why it's not.

    • In higher-end work they do have specialized lighting, branch power, and feeder electricians. And among feeder electricians, even special ones for medium voltage, etc.

  • > No, your favorite movie is not crap because the creators didn't grind their own lens.

    One of the reasons Barry Lyndon is over 50 years old and still looks like no other movie today is because Kubrick tracked down a few lenses originally designed for NASA and had custom mounts built for them to use with cinema cameras.

    https://neiloseman.com/barry-lyndon-the-full-story-of-the-fa...

    > Popular and highly acclaimed games are not crap because they didn't write their own physics engine (Zelda uses Havok)

    Super Mario Bros is known for having a surprisingly subtle and complex physics system that made the game feel both challenging and fair, even for players very new to consoles. Celeste, a newer game also famous for being very difficult yet not feeling punishing, does something similar:

    https://maddymakesgames.com/articles/celeste_and_towerfall_p...

    > or their own game engine (Plenty of great games use Unreal or Unity)

    And Minecraft doesn't, which is why few other games at the time of its release felt and played like it.

    You're correct that no one builds everything from scratch all the time. However, if all you ever do is cobble a few pre-made things together, I think you'll discover that nothing you make is ever that interesting or enduring in value. Sure, it can be useful, and satisfying. But the kinds of things that really leave a mark on people, that affect them deeply, always have at least some aspect where the creator got obsessive and went off the deep end and did their own thing from scratch.

    Further, you'll never learn what a transformative experience it can be to be that creator who gets obsessive about a thing. You'll miss out on discovering the weird parts of your own soul that are more fascinated by some corner of the universe than anyone else is.

    I have a lot of regrets in my life, but I don't regret the various times I've decided to dig deep into something and do it from scratch. Often, that has turned out later to be some of the most long-term useful things I've done, even though it seemed like a selfish indulgence at the time.

    Of course, it's your life. But consider that there may be a hidden cost to always skimming along across the tops of the stacks of things that already exist out there. There is growth in the depths.

  • Did you not read the post? You're talking from the space of the Builder while neglecting the Thinker. That's fine for some people, but not for others.

In 30 years across 10 jobs, the companies I’ve worked for have not paid me to “code”. They’ve paid me to use my experience to add more business value than the total cost of employing me.

I’m no less proud of what I built in the last three weeks using three terminal sessions - one with codex, one with Claude, and one testing everything from carefully designed specs - than I was when I first booted a computer, did “call -151” to get to the assembly language prompt on my Apple //e in 1986.

The goal then was to see my ideas come to life. The goal now is to keep my customers happy, get projects done on time, on budget, and meeting requirements, and continue to have my employer put cash in my account twice a month - and formerly put AMZN stock in my brokerage account at vesting.

But you can move a layer up.

Instead of pouring all of your efforts into making one single static object with no moving parts, you can simply specify the individual parts, have the machine make them for you, and pour your heart and soul into making a machine that is composed of thousands of parts, that you could never hope to make if you had to craft each one by hand from clay.

We used to have a way to do this before LLMs, of course: we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

Even the person making an object from clay is (probably) not refining his own clay or making his own oven.

  • > we had companies that employed many people, so that the top level of the company could simply specify what they wanted, and the lower levels only had to focus on making individual parts.

    I think this makes a perfect counter-example. Because this structure is an important reason for YC to exist and what the HN crowd often rallies against.

    Such large companies - generally - don't make good products this way. Most, today, just buy companies that built something in the GP's cited vein: a creative process, with pivots, learnings, more pivots, failures, or - when successful - most often successful in an entirely different form or area than originally envisioned. Even the large tech monopolies of today originated like that. Zuckerberg never envisioned VR worlds, photo-sharing apps, or chat apps when he started the campus photo-book website. Bezos did not have some 5D-chess blueprint that included building the largest internet-infrastructure-for-hire business when he started selling books online.

    If anything, this only strengthens the point you are arguing against: a business that operates by a "head" "specifying what they want" and having "something" figure out how to build the parts, is historically a very bad and inefficient way to build things.

  • And therein lies the crux: some people love to craft each part themselves, whereas others love to orchestrate but not manufacture each part.

    With LLMs and engineers often being forced by management to use them, everyone is pushed to become like the second group, even though it goes against their nature. The former group see the part as a means, whereas the latter view it as the end.

    Some people love the craft itself and that is either taken away or hollowed out.

  • This is really what it’s about.

    As someone that started with Machine Code, I'm grateful for compiled -even interpreted- languages. I can’t imagine doing the kind of work that I do, nowadays, in Machine Code.

    I’m finding it quite interesting, using LLM-assisted development. I still need to keep an eye on things (for example, the LLM tends to suggest crazy complex solutions, like writing an entire control from scratch, when a simple subclass, and five lines of code, will work much better), but it’s actually been a great boon.

    I find that I learn a lot, using an LLM, and I love to learn.

    • But we become watchers instead of makers.

      There is a difference between cooking and putting a ready meal into the microwave.

      Both satisfy your hunger but only one can give some kind of pride.

      8 replies →

  • Yes, but bad ingredients do not make a yummy pudding.

    Or, it's like trying to make a MacBook Pro by buying electronics boards from AliExpress and wiring them together.

  • It's more like the chess.com vs lichess example in my mind. On the one hand you have a big org, dozens of devs, on the other you have one guy doing a better job.

    It's amazing what one competent developer can do, and it's amazing how little a hundred devs end up actually doing when weighed down by bureaucracy. And let's not pretend even half of them qualify as competent, not to mention they probably don't care either. They get to work and have a 45 min coffee break, move some stuff around in the Kanban board, have another coffee break, then lunch, then foosball, etc. And when they actually write some code, it's ass.

    And sure, for those guys maybe LLMs represent a huge productivity boost. For me it's usually faster to do the work myself than to coax the bot into creating something acceptable.

    • Agreed. Most people don't do anything and this might actually get them to produce code at an acceptable rate. I find that I often know what I need to do and just hitting the LLM until it does what I want is more work than writing the damn code (the latter also being a better way to be convinced that it works, since you actually know what it does and how). People are very bad code reviewers, especially those people who don't do anything, so making them full time code reviewers always seemed very odd to me.

Supposedly, when Michelangelo was asked how he created the statue of David, he said "I just chipped away everything that wasn't David."

Your work is influenced by the medium by which you work. I used to be able to tell very quickly if a website was developed in Ruby on Rails, because some approaches to solve a problem are easy and some contain dragons.

If you are coding in clay, the problem is getting turned into a problem solvable in clay.

The challenge if you are directing others (people or agents) to do the work is that you don't know if they are taking into account the properties of the clay. That may be the difference between clean code - and something which barely works and is unmaintainable.

I'd say in both cases of delegation, you are responsible for making sure the work is done correctly. And, in both cases, if you do not have personal experiences in the medium you may not be prepared to judge the work.

This is an amazing quote - thank you. This is also my argument for why I can't use LLMs for writing (proofreading is OK) - what I write is not produced as a side-effect of thinking through a problem, writing is how I think through a problem.

  • Counterpoint (more devil's advocate): I'd argue it's better that an LLM writes something (e.g. the solution or the thinking-through of a problem) than nothing at all.

    Counterpoint to my own counterpoint, will anyone actually (want to) read it?

    Counterpoint to the third degree, to loop it back around: an LLM might, and I'd even argue an LLM is better at reading and ingesting long text (I'm thinking architectural documentation etc.) than humans are. Speaking for myself, I struggle to read attentively through e.g. a document; I quickly lose interest and scan-read or just focus on what I need instead.

    • I kinda saw this happen in realtime on reddit yesterday. Someone asked for advice on how to deal with a team that was in over their heads shipping slop. The crux of their question was fair, but they used a different LLM to translate their original thoughts from their native language into English. The prompt was "translate this to english for a reddit post" - nothing else.

      The LLM added a bunch of extra formatting for emphasis and structure to what might originally have been a bit of a ramble, but one that was obviously human-written. The comments absolutely lambasted this OP for being a hypocrite: complaining about their team using AI, but then seeing little problem with posting what is obviously an AI-generated question, because the OP didn't deem their English skills good enough to ask the question directly.

      I'm not going to pass judgement on this scenario, but I did think the entire encounter was a "fun" anecdote in addition to your comments.

      Edit: wrods

      2 replies →

  • Writing is how I think through a problem too, but that also applies to writing and communicating with an AI coding agent. I don't need to write the code per se to do the thinking.

    • You could write pseudocode as well. But for someone who is familiar with a programming language, it's just faster to use the latter. And if you're really familiar with the language, you start thinking in it.

I personally have found success with an approach that's the inverse of how agents are being used generally.

I don't allow my agent to write any code. I ask it for guidance on algorithms, and to supply the domain knowledge that I might be missing. When using it for game dev for example, I ask it to explain in general terms how to apply noise algorithms for procedural generation, how to do UV mapping etc, but the actual implementation in my language of choice is all by hand.

Honestly, I think this is a sweet spot. The amount of time I save getting explanations of concepts that would otherwise get a bit of digging to get is huge, but I'm still entirely in control of my codebase.

  • Yep, this is the sweet spot. Though I still let it type code a lot - boilerplate stuff I'd be bored out of my mind typing. And I've found it has an extremely high success rate typing that code; on top of that, it's very easy for me to review. No friction at all. Granted, this is often no larger than 100 lines or so (across various files).

    If it takes you more than a few seconds or so to understand code an agent generated you’re going to make mistakes. You should know exactly what it’s going to produce before it produces it.

Coding is not at all like working a lump of clay unless you’re still writing assembly.

You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs. The artificial syntax that you use to move the bricks around is the thing you call coding.

The human element of discovery is still there if a robot stacks the bricks based on a different set of syntax (Natural Language), nothing about that precludes authenticity or the human element of creation.

  • It depends on what you're doing, not really on what you do it with.

    I can do some CRUD apps where it's just data input to data store to output, with little shaping needed. Or I can do apps with lots of filters, actions, and logic that happen based on what's inputted, which require some thought to ensure they actually solve the problem they're proposed for.

    "Shaping the clay" isn't about the clay, it's about the shaping. If you have to make a ball of clay and also have to make a bridge of Lego a 175kg human can stand on, you'll learn more about Lego and building it than you will about clay.

    Get someone to give you a Lego instruction sheet and you'll learn far less, because you're not shaping anymore.

  • > You’re taking a bunch of pre-built abstractions written by other people on top of what the computer is actually doing and plugging them together like LEGOs.

    Correct. However, you will probably notice that your solution to the problem doesn't feel right, when the bricks that are available to you, don't compose well. The AI will just happily smash together bricks and at first glance it might seem that the task is done.

    Choosing the right abstraction (bricks) is part of finding the right solution. And understanding that choice often requires exploration and contemplation. AI can't give you that.

    • Not yet, anyway; I do trust LLMs for writing snippets or features at this point, but I don't trust them for setting up new applications, technology choices, architectures, etc.

      The other day people were talking about metrics - the number of lines of code people vs LLMs could output in a given time, or the lines of code in an LLM-assisted application - using LOC as a metric for productivity.

      But would an LLM ever suggest using a utility or library, or re-architecting an application, over writing its own code?

      I've got a fairly simple application that renders a table (and in future some charts) with metrics. At the moment all of that is done "by hand"; the last features were things like filtering and sorting the data. But that kind of thing can also be done by a "data table" library. Or the whole application could be thrown out in favor of a workbook (one of those data analysis tools - I'm not at home in that area at all). That'd save hundreds of lines of code plus maintenance burden.

      2 replies →

    • Unless you limit your scope of problem solving to only what you can do yourself, you are going to have to delegate work - your abstraction is going to be specs, delegating work to other people, and ensuring it works well together and follows the specs - just like working with an LLM.

  • Exactly, and that's why I find AI coding solves this well: I find it tedious to put the bricks together for the umpteenth time when I can just have an AI do it (and of course I verify the code when it's done; I'm not advocating for vibe coding here).

    This actually leaves me with a lot more time to think, about what I want the UI to look like, how I'll market my software, and so on.

  • > Coding is not at all like working a lump of clay unless you’re still writing assembly.

    Isn't the analogy apt? You can't make a working car using a lump of clay, just a car statue; a lump of clay is already an abstraction of objects you can make in reality.

  • Lego boxes include a set of instructions that implies there's only one way to assemble the contents, but that's sometimes an injustice to the creative space that Legos are built to provide. There can be a joy in algorithmically building the thing some other designers worked to make look nice, but there's a creative space outside the instructions, too.

    The risk of LLMs laying more of these bricks isn't just loss of authenticity and less human elements of discovery and creation, it's further down the path of "there's only one instruction manual in the Lego box, and that's all the robots know and build for you". It's an increased commodification of a few legacy designers' worth of work over a larger creative space than at first seems apparent.

  • I think the analogy to high level programming languages misunderstands the value of abstraction and notation. You can’t reason about the behavior of an English prompt because English is underspecified. The value of code is that it has a fairly strong semantic correlation to machine operations, and reasoning about high level code is equivalent to reasoning about machine code. That’s why even with all this advancement we continue to check in code to our repositories and leave the sloppy English in our chat history.

    • Yep. Any statement in Python or other languages can be mapped to something that the machine will do. And it will be the same thing every single time (concurrency and race issues aside). There's no English sentence that can be as clear.

      We’ve created formal notation to shorten writing. And computation is formal notation that is actually useful. Why write pages of specs when I could write a few lines of code?
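
      For instance, a hypothetical snippet (names and data invented purely for illustration) of the kind of one-liner this comment has in mind, next to the English it would take to specify the same behavior unambiguously:

      ```python
      from collections import namedtuple

      User = namedtuple("User", ["first_name", "last_name"])
      users = [User("Ada", "Lovelace"), User("Alan", "Turing"), User("Grace", "Hopper")]

      # The English spec: "produce a new list of users ordered by last name,
      # breaking ties by first name, without modifying the original list."
      # The code says the same thing in one line, and means it every time:
      ranked = sorted(users, key=lambda u: (u.last_name, u.first_name))
      print(ranked)
      ```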

      2 replies →

  • > plugging them together like LEGOs

    Aren't Legos known for their ability to enable creativity and endless possibilities? It doesn't feel that different from the clay analogy, except a bit coarser grained.

  • You’re both right. It just depends on the problems you’re solving and the languages you use.

    I find languages like JavaScript promote the idea of "Lego programming" because you're encouraged to use a module for everything.

    But when you start exploring ideas that haven't been thoroughly explored already, and particularly in systems languages which are less zealous about DRY (don't repeat yourself) methodologies, then you can feel a lot more like a sculptor.

    Likewise if you’re building frameworks rather than reusing them.

    So it really depends on the problems you’re solving.

    For general day-to-day coding for your average 9-to-5 software engineering job, I can definitely relate to why people might think coding is basically “LEGO engineering”.

  • changing "clay" for "legos" doesn't change the core argument. The tactile feel you get for the medium as you work it with your hands and the "artificial syntax" imposed by the medium.

> Being handed a baked and glazed artefact that approximates what you thought you wanted to make

Isn't this also an overstatement, and the problem is worse? That is, the code being handed back is a great prototype, but it needs polishing/finishing and is ignorant of obvious implicit edge cases unless you explicitly enumerate all of them in your prompts.

For me, the state of things reminds me of a bad job I had years ago.

Worked with a well-regarded, long-tenured but truculent senior engineer who was immune to feedback due to his seniority. He committed code that either didn't run, didn't pass tests, or implemented only the most obvious happy path, a robotically literal interpretation of requirements.

He was however, very very fast... underbidding teammates on time estimates by 10x.

He would hand back the broken prototype and we'd then spend the 10x time making his code actually something you can run in production.

Management kept pushing this because he had a great reputation, promised great things, and every once in a while did actually deliver stuff fast. It took years for management to come around to the fact that this was not working.

For me it’s a related but different worry. If I’m no longer thinking deeply, then maybe my thinking skills will simply atrophy and die. Then when I really need it, I won’t have it. I’ll be reduced to yanking the lever on the AI slot machine, hoping it comes up with something that’s good enough.

But at that point, will I even have the ability to distinguish a good solution from a bad one? How would I know, if I’ve been relying on AI to evaluate if ideas are good or not? I’d just be pushing mediocre solutions off as my own, without even realising that they’re mediocre.

I relate to this. But also, isn't it just that every human endeavor goes through an evolution from craft to commodity, which is sad for the craftsmen but good for everyone else, and that we happen to be the ones living through that for software?

For instance, I think about the pervasive interstate overpass bridge. There was a time long ago when building bridges was a craft. But now I see like ten of these bridges every day, each of which is better - in the sense of how much load they can support and durability and reliability - than the best that those craftsmen of yore could make.

This doesn't mean I'm in any way immune to nostalgia. But I try to keep perspective, that things can be both sad and ultimately good.

  • If you're only building things that have been built before, then sure, though I'd argue we already had solutions for that before LLMs.

  • There is a presumption that the models we are using today are 'good enough'. By models I mean things like linkers and package managers, microservices and cluster management tools.

    I personally think that we're not done evolving, really, and to call it quits today would leave a lot of efficiency and productivity on the table.

While there is still a market for artisanal furniture, dishes and clothes most people buy mass-produced dishes, clothes and furniture.

I wonder if software creation will be in a similar place. There still might be a small market for handmade software, but the majority of it will be mass-produced. (That is, produced by LLM - or even software itself will mostly go away, and people will get their work done via LLM instead of "apps".)

  • As with furniture, it's supply vs demand, and it's a discussion that goes back decades at this point.

    Very few people (even before LLM coding tools) actually did low level "artisanal" coding; I'd argue the vast majority of software development goes into implementing features in b2b / b2c software, building screens, logins, overviews, detail pages, etc. That requires (required?) software engineers too, and skill / experience / etc, but it was more assembling existing parts and connecting them.

    Years ago there was already a feeling that a lot of software development boiled down to taping libraries together.

    Or from another perspective, replace "LLM" with "outsourcing".

  • I would argue the opposite.

    What you get right now is mass-replicated software, just another copy of SAP/Office/Spotify/whatever.

    That software is not made individually for you; you get a copy like millions of other people, and there is nearly no market anymore for individual software.

    LLMs might change that: we have a bunch of internal apps now for small annoying things.

    They all have their quirks, but they are only accessible internally and make life a little bit easier for the people working for us.

    Most of them are one-shot LLM things: throw them away if you do not need them anymore, or just one-shot them again.

    • The question is whether that's a good thing or not; software adages like "Not Invented Here" aren't going to go away. For personal tools / experiments it's probably fine, just like hacking together something in your spare time, but it can become a risk if you, others, or a business start to depend on it (just like spare time hacked tools).

      I'd argue that in most cases it's better to do some research and find out if a tool already exists, and if it isn't exactly how you want it... to get used to it, like one did with all other tools they used.

      1 reply →

  • Acceptance of mass production came only after the establishment of quality control.

    Skipping over that step results in a world of knock offs and product failures.

    People buy Zara or H&M because they can offload the work of verifying quality to the brand.

    This was a major hurdle that mass manufacturing had to overcome to achieve dominance.

    • >Acceptance of mass production is only post establishment of quality control.

      Hence why a lot of software development is gluing libraries together these days.

This makes no sense to me. There are plenty of artists out there (e.g. El Anatsui), not to mention whole professions such as architects, who do not interact directly with what they are building, and yet can have profound relationship with the final product.

Discovering the right problem to solve is not necessarily coupled to being "hands on" with the "materials you're shaping".

  • In my company, [enterprise IT] architects are separated into two kinds. People with a CV longer than my arm, who know/anticipate everything that could fail and have reached a level of understanding that I personally call "wisdom". And theorists, who read books and norms, who focus mostly on the nominal case, and have no idea of [and no interest in] how the real world will be a hard brick wall that challenges each and every idea you invent.

    Not being hands-on, and more importantly not LISTENING to the hands-on people and learning from them, is a massive issue in my surroundings.

    So thinking hard on something is cool. But making it real is a whole different story.

    Note: as Steve used to say, "real artists ship".

  • You think El Anatsui would concur that they didn't interact directly with what they were building? "Hands on", "the material you're shaping" - it's a metaphor.

    • I don't see why his involvement, explaining to his team how exactly to build a piece, is any different from a developer explaining to an LLM how to build a certain feature, when it comes to the level of "being hands on".

      Obviously I am not comparing his final product with my code, I am simply pointing out how this metaphor is flawed. Having "workers" shape the material according to your plans does not reduce your agency.

      2 replies →

Having a background in fine art (and having known Aral many years ago!), I find this prose resonates heavily with me.

Most of the OP article also resonated with me, as I bounce back and forth between learning (consuming, thinking, pulling, integrating new information) and building (creating, planning, doing) every few weeks or months. I find that when I'm feeling distressed or unhappy, I've lingered in one mode or the other a little too long. Unlike the OP, I haven't found these modes to be disrupted by AI at all; in fact, it feels like AI is supporting both in ways that I find exhilarating.

I'm not sure OP is missing anything because of AI per se, it might just be that they are ready to move their focus to broader or different problem domains that are separate from typing code into an IDE?

For me, AI has allowed me to probe into areas that I would have shied away from in the past. I feel like I'm being pulled upward into domains that were previously inaccessible.

I use Claude on a daily basis, but still find myself frequently hand-writing code as Claude just doesn't deliver the same results when creating out of whole cloth.

Claude does tend to make my coarse implementations tighter and more robust.

I admittedly did make the transition from software only to robotics ~6 years ago, so the breadth of my ignorance is still quite thrilling.

>> Coding is like

That description is NOT coding, coding is a subset of that.

Coding comes once you know what you need to build; coding is the process of expressing that in a programming language, and as you do so you apply all your knowledge, experience, and crucially your taste, to arrive at an implementation which does what's required (functionally and non-functionally) AND is open to the possibility of change in future.

Someone else here wrote a great comment about this the other day, along the lines of: take that week of work described in the GP's comment, and on the Friday afternoon delete all the code that was checked in. Coding is the part needed to recreate the check-in, which would take a lot less than a week!

All the other time was spent turning you into the developer who could understand why to write that code in the first place.

These tools do not allow you to skip the process of creation. They allow you to skip aspects of coding. If you choose to, they can also elide your tastes, but that's not a requirement of using them; they respond well to examples of code and other directions that guide them toward your tastes. The functional and non-functional parts they're pretty good at without much steering now, but I always steer for my tastes because, e.g., Opus 4.5 defaults to a more verbose style than I care for.

  • It's all individual. That's like saying writing only happens when you know exactly the story you want to tell. I love opening a blank project with a vague idea of what I want to do, and then just starting to explore while I'm coding.

    • I'm sure some coding works this way, but I'd be surprised if it's more than a small percentage of it.

I get what he's pointing at: building teaches you things the spec can't, and iteration often reveals the real problem.

That said, the framing feels a bit too poetic for engineering. Software isn't only craft, it's also operations, risk, time, budget, compliance, incident response, and maintenance by people who weren't in the room for the "lump of clay" moment. Those constraints don't make the work less human; they just mean "authentic creation" isn't the goal by itself.

For me the takeaway is: pursue excellence, but treat learning as a means to reliability and outcomes. Tools (including LLMs) are fine with guardrails, clear constraints up front and rigorous review/testing after, so we ship systems we can reason about, operate, and evolve (not just artefacts that feel handcrafted).

  • > That said, the framing feels a bit too poetic for engineering.

    I wholeheartedly disagree but I tend to believe that's going to be highly dependent on what type of developer a person is. One who leans towards the craftsmanship side or one who leans towards the deliverables side. It will also be impacted by the type of development they are exposed to. Are they in an environment where they can even have a "lump of clay" moment or is all their time spent on systems that are too old/archaic/complex/whatever to ever really absorb the essence of the problem the code is addressing?

    The OP's quote is exactly how I feel about software. I often don't know exactly what I'm going to build. I start with a general idea and it morphs towards excellence through iteration. My idea changes, and is sharpened, as it repeatedly runs into reality. And by that I mean it's sharpened as I write and refactor the code.

    I personally don't have the same ability to do that with code review because the amount of time I spend reviewing/absorbing the solution isn't sufficient to really get to know the problem space or the code.

"The muse visits during the act of creation, not before. Start alone."

That has actually been a major problem for me in the past where my core idea is too simple, and I don't give "the muse" enough time to visit because it doesn't take me long enough to build it. Anytime I have given the muse time to visit, they always have.

The best analogy, I think, is this: if you just take Stack Overflow code solutions, smoosh them over your code, hit compile/build, and move on without ever looking at "why it works", you're really not using your skills to the best of your ability, and it could introduce bugs you didn't expect or completely unnecessary dependencies. With Stack Overflow, at least, you can have other people pointing out the issues with the accepted answer and giving you better options.

  • This keeps coming up again and again and again, but how many times were you actually able to copy-paste an SO solution wholesale and just have it work? Other than for THE most simple cases (usually CSS), there would always have to be some understanding involved. Of course you don't always learn deeply every time, but the whole "copy paste off of Stack Overflow" thing was always an exaggeration that is now being used in seeming earnest.

It's very similar now: you have to surf a swell of selective ignorance that is (feels?) less reliable than the ignorance one adopts when using a dependency one hasn't read and understood the source code for.

One must be conversant in abstractions that are themselves ephemeral and half hallucinated. It's a question of what to cling to, what to elevate beyond possibly hallucinated rubbish. At some level it's a much faster version of the meatspace process, and it can be extremely emotionally uncomfortable and anarchic to many.

Sometimes you want an artistic vase that captures some essential element of beauty, culture, or emotion.

Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

The materials and rough process for each can be very similar. One takes a master craftsman and a lot of time to make and costs a lot of money. The other can be made on a production line and the cost is tiny.

Both are desirable, for different people, for different purposes.

With software, it's similar. A true master knows when to get it done quick and dirty and when to take the time to ponder and think.

  • > Sometimes you want a utilitarian teapot to reliably pour a cup of tea.

    If you pardon the analogy, watch how the Japanese make a utilitarian teapot which reliably pours a cup of tea.

    It's more complicated and skill-intensive than it looks.

    In both realms, making an artistic vase can be simpler than making a simple utilitarian tool.

    AI is good at making (arguably poor quality) artistic vases via its stochastic output, not highly refined, reliable tools. Tolerances on the latter are tighter.

    • There is a whole range of variants in between those two "artistic vs utilitarian" points. Additionally, there is a ton of variance around "artistic" vs "utilitarian".

      Artisans in Japan might go to incredible lengths to create utilitarian teapots. Artisans who graduated last week from a 4-week pottery workshop will produce a different kind of quality, albeit artisanal. $5.00 teapots from an East Asian mass-production factory will be very different from high-quality, mass-produced upmarket teapots at a higher price. I have things in my house that fall into each of those categories (not all teapots, but different kinds of wares).

      Sometimes commercial manufacturing produces worse tolerances than hand-crafting. Sometimes, commercial manufacturing is the only way to get humanly unachievable tolerances.

      You can't simplify it into "always" and "never" absolutes. Artisan is not always nicer than commercial. Commercial is not always cheaper than artisan. _____ is not always _____ than ____.

      If we bring it back to AI, I've seen it produce crap, and I've also seen it produce code that honestly impressed me (my opinion is based on 24 years of coding and engineering management experience). I am reluctant to make a call where it falls on that axis that we've sketched out in this message thread.

This is very insightful, thanks. I had a similar thought regarding data science in particular. Writing those pandas expressions by hand during exploration means you get to know the data intimately. Getting AI to write them for you limits you to a superficial knowledge of said data (at least in my case).

Thanks for the quote, it definitely resonates. Distressing to see many people who can't relate to this, taking it literally and arguing that there is nothing lost the more removed they are from the process.

Honestly this sounds like a Luddite mindset (and I mean that descriptively, not to be insulting). This mindset holds us back.

You can imagine the artisans who made shirts saying the exact same thing as the first textile factories became operational.

Humans have been coders in the sense we mean for a matter of decades at most - a blip in our existence. We’re capable of far more, and this is yet another task we should cast into the machine of automation and let physical laws do the work for us.

We're capable of manipulating the universe into doing our bidding, including making rocks we've converted into silicon think on our behalf. Making shirts and making code: we're capable of so much more.

Yes - maybe this is why I prefer to jump directly to coding, instead of using Canva to draw the GUI and stuff. I would not know what to draw, because the involvement is not so deep... or something.

This is cute, but it is true for ALL activities in life. I have to constantly remind my brother that his job is not unique, and that if he took a few moments he might realize flipping burgers is also molding lumps of clay.

I think the biggest beef I have with engineers is that for decades they more or less reduced the value of other people's lumps of clay, and now they want to throw up their arms when it's theirs.

Yeah? And then you continue prompting and developing, and go through a very similar iterative process, except now it's faster and you get to tackle more abstract, higher level problems.

"Most developers don't know the assembly code of what they're creating. When you skip assembly you trade the very thing you could have learned to fully understand the application you were trying to make. The end result is a sad simulacrum of the memory efficiency you could have had."

This level of purity-testing is shallow and boring.

  • I don't think this comparison holds up. With a higher-level language, the material you're building with is a formal description of the software, which can be fed back into a compiler to get a deterministic outcome.

    With an LLM, you put in a high-level description, and then check in the "machine code" (generated code).

This is beautifully written, but as a point against agentic AI coding, I just don't really get it.

It seems to assume that vibe coding, or whatever you call the Gas Town model of programming, is the only option, but you don't have to do that. You don't have to specify upfront what you want and then never change or develop it as you go through the process of building, and you don't have to accept whatever the AI gives you on the other end as final.

You can explore the affordances of the technologies you're using and modify your design and vision for what you're building as you go; if anything, I've found AI coding makes it far easier to change and evolve my direction, because it can update all the various parts of the code that need updating when I want to change course, as well as keeping the tests, specification, and documentation in sync, easily and quickly.

You also don't need to take the final product as a given, a "simulacrum delivered from a vending machine": build, and then once you've gotten something working, look at it, decide it's not really what you want, and continue to iterate and change and develop it. Again, with AI coding, I've found this easier than ever because it's easier to iterate on things. The process is a bit faster for not having to move the text around and look up API documentation myself, even though I'm directly dictating the architecture, organization, and algorithms, and even where code should go, most of the time.

And with the method I'm describing, where you're in the code just as much as the AI is, using it only to do the text/API/code munging, you can even let the affordances of not just the technologies, but the source code and programming language itself, affect how you do this: if you care about the quality, clarity, and organization of the code the AI is generating, you'll see when it's trying to brute-force its way past technical limitations and can redirect it to follow the grain instead. It just becomes easier and more fluid to do that.

If anything, AI coding in general makes it easier than before to have a conversation with the machine, its affordances, and your design vision, because it becomes easier to update everything and move everything around as your ideas change.

And nothing about it means that you need to be ignorant of what's going on; ostensibly you're reviewing literally every line of code it creates and deciding which libraries and languages it uses, as well as the architecture, organization, and algorithms. You are, aren't you? So you should know everything you need to know. In fact, I've learned several libraries and a language just from watching it work, enough that I can work with them without looking anything up, even new syntax and constructs that would have been very unfamiliar back in my manual-coding days.

I dunno, when you've made about 10,000 clay pots it's kinda nice to skip to the end result; you're probably not going to learn a ton from clay pot #10,001. You can probably come up with some pretty interesting ideas for what you want the end result to look like from the outset.

I find myself being able to reach for the things that my normal pragmatist code monkey self would consider out of scope - these are often not user facing things at all but things that absolutely improve code maintenance, scalability, testing/testability, or reduce side effects.

  • Depends on the problem. If the complexity of what you are solving is in the business logic, or is generally low, you are absolutely right. Manually coding signup flow #875 is not my idea of fun either. But if the complexity is in the implementation, it’s different. Doing complex cryptography, performance optimization, or near-hardware stuff is just a different class of problems.

    • > If the complexity of what you are solving is in the business logic or, generally low, you are absolutely right.

      The problem is rather that programmers who work on business logic often hate programmers who are actually capable of seeing (often mathematical) patterns in the business logic that could be abstracted away; in other words: many business logic programmers hate abstract mathematical stuff.

      So, in my opinion/experience this is a very self-inflicted problem that arises from the whole culture around business logic and business logic programming.

    • Coding signup flow #875 should be as easy as using a snippet tool or a code generator. Everyone who explains why using an LLM is a good idea sounds like they're living in the stone age of programming. There are already industrial-grade tools to get things done faster. Often so fast that I feel time is being wasted describing it in English.

      1 reply →

    • In my experience AI is pretty good at performance optimizations as long as you know what to ask for.

      Can't speak to firmware code or complex cryptography, but my hunch is that if it's in its training dataset and you know enough to guide it, it's generally pretty useful.

      5 replies →

  • > you're probably not going to learn a ton with clay pot #10,001

    Why not just use a library at that point? We already have support for abstractions in programming.

Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

  • Ironic. The frequency and predictability of this type of response — “This criticism of new technology is invalid because someone was wrong once in the past about unrelated technology” — means there might as well be an LLM posting these replies to every applicable article. It’s boring and no one learns anything.

    It would be a lot more interesting to point out the differences and similarities yourself. But then if you wanted an interesting discussion you wouldn’t be posting trite flamebait in the first place, would you?

    • Note that we still have not solved cameras or even cars.

      The biggest lesson I am learning recently is that technologists will bend over backwards to gaslight the public to excuse their own myopia.

  • Interesting comparison. I remember watching a video on that. Landscape painting, portraiture, etc., are arts that have taken an enormous nosedive. We, as humans, have missed out on a lot of art because of the invention of the camera. On the other hand, the benefits of the camera need no elaboration. Currently AI has a lot of footguns, though, which I don't believe the camera had. I hope AI gets to that point too.

    • >We, as humans, have missed out on a lot of art because of the invention of the camera.

      I doubt this so severely that I'd say the statement is false.

      The further back you go, the more expensive and rare art was. Higher-quality landscapes and portraits were exceptionally rare and really only commissioned by those with money, which again was a smaller portion of the population in the time before cameras. It's likely there are more high-quality paintings per capita now than there ever were in the past, and the issue is not production but exposure to the high-quality ones. Maybe this is what you mean by 'miss out'?

      In addition, the general increase in wealth, coupled with the dropping cost of art supplies, opens up massive room for lower-quality art to fill the gap. In the past, canvas was typically more expensive, so sucky pictures would get painted over.

    • The footgun cameras had was exposure time.

      1826 - The Heliograph - 8+ hours

      1839 - The Daguerreotype - 15–30 Mins

      1841 - The Calotype - 1–2 Mins

      1851 - Wet Plate Collodion - 2–20 Secs

      1871 - The Dry Plate - < 1 Second.

      So it took 45 years to perfect the process so you could take an instant image. Yet we complain after 4 years of LLMs that they're not good enough.

  • > Eloquent, moving, and more-or-less exactly what people said when cameras first hit the scene.

    This is a non sequitur. Cameras have not replaced paintings, assuming that is the inference. Instead, they serve only as an additional medium subject to the same concerns quoted:

      The process, which is an iterative one, is what leads you 
      towards understanding what you actually want to make, 
      whether you were aware of it or not at the beginning.
    

    Just as this applies to refining a software solution captured in code, and just as a painter discards unsatisfactory paintings and tries again, so too it applies when people say, "that picture didn't come out the way I like, let's take another one."

    • Photography’s rapid commercialisation [21] meant that many painters – or prospective painters – were tempted to take up photography instead of, or in addition to, their painting careers. Most of these new photographers produced portraits. As these were far cheaper and easier to produce than painted portraits, portraits ceased to be the privilege of the well-off and, in a sense, became democratised [22].

      Some commentators dismissed this trend towards photography as simply a beneficial weeding out of second-raters. For example, the writer Louis Figuier commented that photography did art a service by putting mediocre artists out of business, for their only goal was exact imitation. Similarly, Baudelaire described photography as the “refuge of failed painters with too little talent”. In his view, art was derived from imagination, judgment and feeling but photography was mere reproduction which cheapened the products of the beautiful [23].

      https://www.artinsociety.com/pt-1-initial-impacts.html#:~:te...

    • > Cameras have not replaced paintings, assuming this is the inference.

      You wouldn't have known that, going by all the bellyaching and whining from the artists of the day.

      Guess what, they got over it. You will too.

      15 replies →

  • Source?

    • Art history. It's how we ended up with Impressionism, for instance.

      People felt (wrongly) that traditional representational forms like portraiture were threatened by photography. Happily, instead of killing any existing genres, we got some interesting new ones.