
Comment by flumpcakes

18 hours ago

> There's a new programmable layer of abstraction to master (in addition to the usual layers below) involving agents, subagents, their prompts, contexts, memory, modes, permissions, tools, plugins, skills, hooks, MCP, LSP, slash commands, workflows, IDE integrations, and ...

This sounds unbearable. It doesn't sound like software development, it sounds like spending a thousand hours tinkering with your vim config. It reminds me of the insane patchwork of sprawl you often get in DevOps - but now brought to your local machine.

I honestly don't see the upside, or how it's supposed to make any programmer worth their weight in salt 10x better.

> This sounds unbearable.

I can't see the original post because my browser settings break Twitter (I also haven't liked much of Karpathy's output), but I agree. I call this style of software development 'meeting-based programming,' because that seems to be the mental model that the designers of the tools are pursuing. This probably explains, in part, why c-suite/MBA types are so excited about the tools: meetings are how they think and work.

In a way LLMs/chatbots and 'agents' are just the latest phase of a trend that the internet has been encouraging for decades: the elimination of mental privacy. I don't mean 'privacy' in an everyday sense -- i.e. things I keep to myself and don't share. I mean 'privacy' in a more basic sense: private experience -- sitting by oneself; having a mental space that doesn't include anybody else; simply spending time with one's own thoughts.

The internet encourages us to direct our thoughts and questions outward: look things up; find out what others have said; go to Wikipedia; etc. This is, I think, horribly corrosive to the very essence of being a thinking, sentient being. It's also unsurprising, I guess. Humans are social animals. We're going to find ourselves easily seduced by anything that lets us replace private experience with social experience. I suppose it was only a matter of time until someone did this with programming tools, too.

> ... or how it's supposed to make any programmer worth their weight in salt 10x better.

It doesn't. The only people I've seen claim such speedups are either not generally fluent in programming or stand to benefit financially from reinforcing this meme.

  • The speedup from AI is in the exponent.

    Just the other day ChatGPT implemented something that would have taken me a week of research to figure out: in 10 minutes. What do you call that speedup? It's a lot more than 10x.

    On other days I barely touch AI because I can write easy code faster than I can write prompts for easy code, though the autocomplete definitely helps me type faster.

    The "10x" is just a placeholder for averaging over a series of stochastic exponents. It's a way of saying "somewhere between 1 and infinity."
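
    A rough illustration of what averaging stochastic exponents looks like in practice; the per-task multipliers below are made up, not measurements:

        # Made-up per-task speedup multipliers: one huge win, some ordinary
        # days, and one day where the AI slowed me down.
        import math

        speedups = [100.0, 1.2, 3.0, 0.5]

        # Averaging multiplicative factors means taking a geometric mean.
        geo_mean = math.exp(sum(math.log(s) for s in speedups) / len(speedups))
        print(f"overall speedup ~ {geo_mean:.1f}x")  # ~3.7x for these numbers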

    • > Just the other day ChatGPT implemented something that would have taken me a week of research to figure out: in 10 minutes. What do you call that speedup? It's a lot more than 10x.

      Can you share what exactly this was? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me, so I find it hard to imagine what this could be.

      Instead of AI companies talking about their products, I think the thing that would really sell it for me would be an 8-hour video of an extremely proficient programmer using AI to build something that would have taken them a very long time unassisted.


  • For every conspicuous vibecoding influencer there are a bunch of experienced software engineers using these tools to get things done. The newest generation of models is actually pretty decent at following instructions and using existing code as a template. Building line-of-business apps is much quicker with Claude Code because, once you've scaffolded everything nicely, you can just tell it to build stuff and it'll do so the same way you would have, in a fraction of the time. You can also use it to research alternatives to the architectural approaches and tooling you come up with, so you don't paint yourself into a corner just because you hadn't heard of some semi-niche tool that fits your use case perfectly.

    Of course I wouldn't use an LLM to #yolo some Next.js monstrosity with a flavor-of-the-week ORM and random Tailwind. I have, however, had it build numerous parts of my apps after telling it up front about the mise targets, tests, and architecture I came up with. In a way it vindicates my approach to software engineering, because it's able to use the tools available to it to (reasonably) ensure correctness before it says it's done.

  • I am a professional engineer with around 10 years of experience, and I use AI to work about 5x faster on a site I personally maintain (~100 DAU, so not huge, but also not nothing). I don’t work in AI, so I get no financial benefit from “reinforcing this meme”.

    • Same position, different results. I'm maybe 20% faster. Writing the code is rarely the bottleneck for me, so there's limited potential in that regard. When I am writing the code, things I'd find easy and fast are a little faster (or I can leave the AI doing them). Things that are hard and slow are nearly as hard and nearly as slow with AI; I still need to hold most of the code in my head, just as I would without AI, because it'll get things wrong so quickly.

      I think what you're working on has a huge impact on how useful AI is. If you're working on things that are conceptually simple and simple to implement, AI will do very well (including handling edge cases). If it's a hard concept but a simple execution, you can use AI to do only the execution and still get a pretty good speed boost, though not a transformational one. If it's a hard concept and a hard execution (as my latest project has been), then AI is really just not very good at it.

    • Oh, well if it can generate some simple code for your personal website, surely it can also be the "next level of abstraction" for the entirety of software engineering.


    • > either not generally fluent in programming or stand to benefit financially from reinforcing this meme

      Then figure out which one of the two you are. Years of experience have never equated to competence.

  • Our ops guy has thrown together several buggy dashboards using AI tools. They're passable but impossible to maintain.

    • I personally think that everyone knows AI produces subpar code, and that the infallible humans are just passing it along because they don't understand or don't care. We're starting to see the gaslighting now: it's not that AI makes you better, it's that AI makes you ship faster, and shipping faster (with more bugs) is now more important because "tech debt is an appreciating asset" in a world where AI tools can pump out features 10x faster (with the commensurate bugs/issues). We're entering the era of "move fast and break stuff" on steroids. I miss the era of software that worked.


  • Practically every post on HN that mentions AI now ends up with a thread that is "I get 100X speed-up using LLMs" vs. "It made me slower and I've never met a single person in real life who has worked faster with AI."

    I'm a half-decent developer with 40 years of experience. AI regularly gives me somewhere in the range of a 10-100x speedup in development. I don't benefit from a meme; I do benefit from better code delivered faster.

    Sometimes AI is a piece of crap and I work at 0.5x for an hour flogging a dead horse. But those sessions are rarer these days.

    • I've posted this verbatim on another comment that was similar to yours, so apologies for the copy and paste:

      Can you share what exactly this was (that got you the 10-100x speedup)? Perhaps I don't do anything exciting or challenging, but personally this hasn't happened to me, so I find it hard to imagine what this could be.

      Instead of AI companies talking about their products, I think the thing that would really sell it for me would be an 8-hour video of an extremely proficient programmer using AI to build something that would have taken them a very long time unassisted.

As far as I can tell as a heavy coding-agent user: you don’t need to know any of this, and that’s a testament to how good coding-agent TUIs have become. All I do to be productive with a coding agent is tell it to break a problem down into tasks, store them in beads, and then make sure each step is approved by me. I also add a TDD requirement: it needs to write tests that fail first and eventually pass.

Everything else I’ve used has been over-engineered and far less impactful. What I just said above is already what many of us do anyway.
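
To make the TDD step concrete, here’s a minimal sketch of the kind of red-first test I ask the agent to write before it implements anything (the module and function names are hypothetical):

    # The agent writes this first; it fails until `parse_duration` exists
    # and behaves as specified, and the implementation then makes it pass.
    import pytest

    from myapp.durations import parse_duration  # hypothetical module


    def test_parses_minutes_and_seconds():
        assert parse_duration("1m30s") == 90


    def test_rejects_garbage():
        with pytest.raises(ValueError):
            parse_duration("not-a-duration")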

  • This sounds like my complete and utter nightmare. No art or finesse in building the thing - only an exercise in torturing language to someone who, at a fundamental level, doesn't understand a thing.

    • Nothing stopping you from hand sculpting software like we did in the before times.

      Mass production, however, won’t stop; it barely started literally a couple of months ago, and it’s the slowest and worst it’ll ever be.


    • I don’t really understand how you got that from my post. I can and do drop in to refactor or work on the interesting parts of a project. At every checkpoint where I require a review, I can and do make modifications by hand.

      Are you complaining about code formatters or auto-fix linters? What about codegen based on API specs? A coding agent can do all of those and more. It can do all the boring parts while I get to focus on the interesting bits. It’s great.

      Here’s another fantastic use case: have an agent generate the code, think about the prototype, delete it, and then rewrite it. I did that on a project with huge success: https://github.com/neurosnap/zmx

    • Not at all like this, really; it’s more like being a tech lead for a team of savants who are simultaneously great at some parts of software engineering and limited at others. Though that latter category is slimmer than it was a year ago…

      The point is, you can get lots of quality work out of this team if you learn to manage them well.

      If that sounds like a “complete and utter nightmare”, then don’t use AI. Hopefully you can keep up without it in the long run.

> This sounds unbearable. It doesn't sound like software development, it sounds like spending a thousand hours tinkering with your vim config

Before LLM programming, at least 30-50% of my programming time was spent exactly like this: fixing one config and build issue after another. Now I can spend way more time thinking about more interesting things.