Not a day goes by that a fellow engineer doesn't text me a screenshot of something stupid an AI did in their codebase. But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
The catch about the "guided" piece is that it requires an already-good engineer. I work with engineers around the world and the skill level varies a lot - AI has not been able to bridge the gap. I am generalizing, but I can see how AI can 10x the work of the typical engineer working at startups in California. Even your comment about curiosity highlights this. It's the beginning of an even more K-shaped engineering workforce.
Even people who were previously not great engineers, if they are curious and always enjoyed the learning part, are now supercharged to learn new ways of building, and they can try things out and learn from their mistakes at an accelerated pace.
Unfortunately, this group, the curious ones, IMHO is a minority.
I am going to try to put this kindly: it is very glib, and people will find it offensive and obnoxious, to implicitly round off all resistance or skepticism to incuriosity. Perhaps to alienate AI critics even further is the goal, in which case - carry on.
But if you are genuinely confused by the attitudes of your peers, try asking not "what do I have that they lack" ("curiosity"?) but "what do they see that I don't" or "what do they care about that I don't"? Is it possible that they are not enthusiastic for the change in the nature of the work? Is it possible they are concerned about "automation complacency" setting in, precisely _because_ of the ratio of "hundreds of times" writing decent code to the one time writing "something stupid", and fear that every once in a while that "something stupid" will slip past them in a way that wipes the entire net gain of AI use? Is it possible that they _don't_ feel that the typical code is "better than most engineers can write"? Is it possible they feel that the "learning" is mostly ephemera - how much "prompt engineering" advice from a year ago still holds today?
You have a choice, and it's easy to label them (us?) as Luddites clinging to the old ways out of fear, stupidity, or "incuriosity". If you really want to understand, or even change some minds, though, please try to ask these people what they're really thinking, and listen.
My feeling is that the code it generates is locally ok, but globally kind of bad. What I mean is, in a diff it looks ok. But when you start comparing it to the surrounding code, there's a pretty big lack of coherency and it'll happily march down a very bad architectural path.
In fairness, this is true of many human developers too... but they're generally not doing it at a thousand miles per hour, and they theoretically get better at working with your codebase and learn. LLMs will always get worse as your codebase grows, and I just watched a video about how AGENTS.md usually results in worse outcomes, so it's not like you can just start treating MD files as memory and hope it works out.
I don't think that people who don't want to use these tools, or who cling to old ways, are incurious. But I think these developers should face the fact that the skills and ways of working they are reluctant to give up are more or less obviated at this point. Not in the future, but now. It's just that the adoption of these tools isn't evenly distributed yet.
I think there's a place for thoughtful dialogue around what this means for software engineering, but I don't think that's going to change anything at this point. If developers just don't want to participate in this new world, for whatever reason, I'm not judging them, but I also don't think the genie is going back in the bottle. There will be no movement to organize labor to protect us, and there will be no deus ex machina that is going to reverse course on this stuff.
I read the parent comment as calling the majority of AI users "incurious", and not referring to us who resist AI for whatever reasons. The curious AI users can obtain self-improvement; the incurious ones want money, or at least custom software without caring how it's made.
I don't want the means of production to be located inside companies that can only exist with a steady bubble of VC dollars. It's perfectly reasonable to try AI, or to use it sparingly but not embrace it, for reasons that can be articulated. Not relevant to the parent commenter's point, though. Maybe you are "replying" to the article?
Underlying this and similar arguments is the presumption that the "old way" was perfect. You and your colleagues weren't making one mistake per 100 successful commits. I have been in the industry for decades, and I can tell you that I do something stupid when writing code manually quite often. The same goes for the people I work with. So fear that the LLM will make mistakes can't really be the reason. Or if it is the reason, it isn't a reasonable objection.
you make it seem like ai hesitation is a misunderstood fringe position, but it's not. i don't think anyone is confused about why some people are uninterested in ai tooling, but we do think you're wrong, and the defensive posturing and lines in the sand come off as incredibly incurious.
> But if you are genuinely confused by the attitudes of your peers, try asking not "what do I have that they lack" ("curiosity"?) but "what do they see that I don't" or "what do they care about that I don't"?
I'd argue these are good questions to ask in general, about many topics, and that it's an essential skill of an engineer to ask these types of questions.
There are two critical mistakes that people often make: 1) thinking there's only one solution to any given problem, and 2) thinking that, were there an absolute optimum, they've converged into the optimal region. If you look carefully at many of the problems people routinely argue about, you'll find that they are often working under different sets of assumptions. It doesn't matter if it's AI vs non-AI coding (or what mix), Vim vs Emacs vs VSCode, Windows vs Mac vs Linux, or even various political issues (no examples, because we all know what will happen if I give any, which only illustrates my point). There are no objective answers to these questions, and global optima only have the potential to exist when the questions are highly constrained. The assumptions are understood by those you work closely with, but that breaks down quickly.
If your objective is to seek truth you have to understand the other side. You have to understand their assumptions and measures. And just like everyone else, these are often not explicitly stated. They're "so obvious" that people might not even know how to explicitly state them!
But if the goal is not to find truth but instead find community, then don't follow this advice. Don't question anything. Just follow and stay in a safe bubble.
We can all talk, but it gets confusing. Some people argue to lay out their case and let others attack it, seeking truth, updating their views as weaknesses are found. Others argue to social-signal and strengthen their own beliefs; changing is not an option. And some people argue just because they're addicted to arguing, for the thrill of "winning". Unfortunately these can often look the same, at least at the outset.
Personally, I think this all highlights a challenge with LLMs. One that only exacerbates the problem of giving everyone access to all human knowledge: it's difficult to distinguish fact from fiction. I think it's only harder when you have something that talks smoothly and loves to use jargon. People do their own research all the time and come to wildly wrong conclusions. Not because they didn't try, not because they didn't do hard work, and not because they're specifically dumb, but because it's actually difficult to find truth. It's why you have PhD-level domain experts disagreeing on things in their shared domain. That's usually over more nuanced questions, but that's also at a very high level of expertise.
I am solidly in this "curious" camp. I've read HN for the past 15(?) years. I dropped out of CS and got an art degree instead. My career is elsewhere, but along the way, understanding systems was a hobby.
I always kind of wanted to stop everything else and learn "real engineering," but I didn't. Instead, I just read hundreds (thousands?) of arcane articles about enterprise software architecture, programming language design, compiler optimization, and open source politics in my free time.
There are many bits of tacit knowledge I don't have. I know I don't have them, because I have that knowledge in other domains. I know that I don't know what I don't know about being a "real engineer."
But I also know what taste is. I know what questions to ask. I know the magic words, and where to look for answers.
For people like me, this feels like an insane golden age. I have no shortage of ideas, and now the only thing I have is a shortage of hands, eyes, and on a good week, tokens.
So from my perspective as a professional programmer, my feeling is good on you, like, you're empowered to make things and you're making them. It reminds me of people making PHP sites when the web was young and it was easier to do things.
I think where I get really irritated with the discourse is when people find something that works for them, kinda, and they're like "WELL THIS IS WHAT EVERYONE HAS TO DO NOW!" I wouldn't care if it were just a rando on the internet with a bad opinion; the reason this subject bothers me is that words do matter, and when enough people are thoughtlessly on the hype train, it starts creating a culture shift that creates a lot of harm. Eventually cooler heads prevail, but it can create a lot of problems in the meantime. (Look at the damage crypto did!)
But that knowledge was never hidden or out of reach. Why not read books, manuals, or take online classes? There is free access to all these things, the only cost is time and energy.
Everyone has tons of ideas. But every good engineer (and scientist) also knows that most of our ideas fall apart when we either think deeper or try to implement them (same thing, just mental or not). Those nuances and details don't go away. They don't matter any less. They only become less visible. But those things falling apart is also incredibly valuable. What doesn't break is the new foundation to begin again.
The bottleneck has never been a shortage of ideas, nor the hands to implement them. The bottleneck has always been complexity. As the world advances, so does the complexity needed to improve it.
You think you know what taste is. Have you been cranking on real systems all these years, or have you been on the sidelines armchairing the theoretics? I'm not trying to come across as rude, but it may be unavoidable to some degree when indirect criticism is involved. A laboring engineer has precious little choice in the type of systems available to work on. Fundamentally, it's all going to be some variant of a system that makes money for someone else somehow, or a system that burns money but ensures necessary work gets done somehow. That's it. That's the extent of the optimization function as defined by capitalism. Taste falls by the wayside; all that matters these days is whether you are in the good graces of the optimizers who matter, because they sit at the center of the capital centralization machine, making the primary decisions about where capital gets allocated. So you make what they want, or you don't get paid.

As an Arts person, you should understand that no matter how sublime the piece is to the artist, a rumbling belly is all that currently awaits you if your taste does not align with the holders of the fattest purses. I'm not speaking from a place of contempt here; I have a Philosophy background, and I'm reaching out as one individual of the Humanities to another. We've lost sight of why we do things and let ourselves become enslaved by the balance sheets. The economy was supposed to serve the people; it's now the other way around. All we do is feed more bodies to the wood chipper. Until we wake up from that, not even the desperate hope placed in taste will save us. We'll just keep following the capital gradient until we end up selling the world from under ourselves, because it's the only thing we have left, and the only buyers are the usual suspects.
Ok fella. But show me something then. This is all talk.
Personally I have been able to produce a very good output with Grok in relation to a video. However, it was insanely painful and very annoying to produce. In retrospect I would've much preferred to have hired humans.
Not to mention I used about 50 free-trial Grok accounts, so who knows what the costs involved were? Tens of thousands no doubt.
But that's the problem: something that can be so reliable at times can also fail miserably at others. I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore; you're thinking about what needs to be done, and then reviewing it as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
> I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load.
This needs more attention. There's a lot of inhumanity in the modern workplace and modern economy, and that needs to be addressed.
AI is being dumped into the society of 2026, which is about extracting as much wealth as possible for the already-wealthy shareholder class. Any wealth, comfort, or security anyone else gets is basically a glitch that "should" be fixed.
AI is an attempt to fix the glitch of having a well-compensated and comfortable knowledge worker class (which includes software engineers). They'd rather have what few they need running hot and burning out, and a mass of idle people ready to take their place for bottom-dollar.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
One issue is that developers have been trained for the past few decades to look for solutions to problems online by just dumping a few relevant keywords into Google. But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
You are correct. You absolutely must fill the token space with unambiguous requirements, or Claude will just get "creative". You don't want the AI to do creative things, in the same way you don't want an intern to.
That said, I have found that I can get a lot of economy from speaking in terms of jargon, computer science formalisms, well-documented patterns, and providing code snippets to guide the LLM. It's trained on all of that, and it greatly streamlines code generation and refactoring.
Amusingly, all of this turns the task of coding into (mostly) writing a robust requirements doc. And really, don't we all deserve one of those?
> But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
That's probably why spec driven development has taken off.
The developers who can't write prompts now get AI to help with their English, and with clarifying their thoughts, so that other AI can help write their code.
> the ability to formulate your thoughts in a clear manner, have become essential skills for engineering
<Insert astronauts meme “Always has been”>
The art of programming is the art of organizing complexity, of mastering multitude and avoiding its bastard chaos as effectively as possible.
Dijkstra (1970) "Notes On Structured Programming" (EWD249), Section 3 ("On The Reliability of Mechanisms"), p. 7.
And
Some people found error messages they couldn't ignore more annoying than wrong results, and, when judging the relative merits of programming languages, some still seem to equate "the ease of programming" with the ease of making undetected mistakes.
Dijkstra (1976-79) On the foolishness of "natural language programming" (EWD 667)
Engineers will go back in and fix it when they notice a problem. Or find someone who can. AI will send happy little emoji while it continues to trash your codebase and brings it to a state of total unmaintainability.
I agree on the curiosity part. I have a non-CS background, but I learned to program just out of curiosity. This led me to build production applications which companies actually use, and this was before the AI era.
Now, with AI I feel like I have an assistant engineer with me who can help me build exciting things.
I'm currently teaching a group of very curious non-technical content creators at one of the firms I consult at. I set up Codex for them, created the repo to have lots of hand-holding built in, and they took off. It's been 4 weeks and we already have 3 internal tools deployed, one of which eliminated the busy work of another team so much that they now have twice the capacity. These are all things 'real' engineers and product managers could have done, but just empowering people to solve their own problems is way faster. Today, several of them came to me and asked me to explain what APIs are (they want to use the Google Workspace APIs for something).
I wrote out a list of topics/keywords to ask AI about and teach themselves. I've already set up the integration in an example app I will give them, and I literally have no idea what they are going to build next, but I'm... thrilled. Today was the first moment I realized: maybe these are the junior engineers of the future. The fact that they have non-technical backgrounds is a huge bonus - one has a PhD in Biology, one a masters in writing - they bring so much to the process that a typical engineering team lacks. I'm thinking of writing up this case study/experience because it's been a highlight of my career.
> But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Your experience is the exact opposite of mine. I have people constantly telling me how LLMs are perfectly one shotting things. I see it from friend groups, coworkers, and even here on HN. It's also what the big tech companies are often saying too.
I'm sorry, but to say that nobody is talking about success and just concentrating on failure is entirely disingenuous. You claim the group is a minority, yet all evidence points otherwise. The LLM companies wouldn't be so successful if people didn't believe it was useful.
This is my experience too. Also, the ones not striving for simplicity and not architecting end up with giant monsters that are very unstable and very difficult to update or make robust. They usually then look for another engineer to solve their mess. Usually, the easy way for the new engineer is just to architect and then turbo-build with Claude Code. But they are stuck in sunk cost prison with their mess and can't let it go :(
The K-shaped workforce point is sharp and I think you're right. The curious ones are a minority, but they've always been the ones who moved things forward. AI just made the gap more visible :)
Your Codex case study with the content creators is fascinating. A PhD in Biology and a masters in writing building internal tools... that's exactly the kind of thing I meant by "you can learn anything now." I'm surrounded by PhDs and professors at my workplace and I'm genuinely positive about how things are progressing. These are people with deep domain expertise who can now build the tools they need. It's an interesting time. Please write that up...
Maybe. The reality of software engineering is that there are a lot of mediocre developers on the market and a lot of mediocre code being written; that's part of the industry, and the job of engineers working with other engineers and/or LLMs is quality control, through e.g. static analysis, code reviews, teaching, studying, etc.
The "most engineers" not "most engineers we've hired".
But also "most engineers" aren't very good. AIs know tricks that the average "I write code for my dayjob" person doesn't know or frankly won't bother to learn.
>But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Are you serious? I've been hearing this constantly since mid-2025.
The gaslighting over AI is really something else.
I've also never before seen jobs advertised whose purpose was to lobby skeptical engineers about how to engage in technical work. This is entirely new. There is a priesthood developing around this.
I wrote code by hand for 20 years. Now I use AI for nearly all code. I just can’t compete in speed and thoroughness. As the post says, you must guide the AI still. But if you think you can continue working without AI in a competitive industry, I am absolutely sure you will eventually have a very bad time.
Their story is clearly fake. No one is getting screenshots of broken code texted to them so often that it's daily, and if they did, everyone must hate them.
> I enjoy writing code. Let me get that out of the way first.
> I haven’t written a boilerplate handler by hand in months. I haven’t manually scaffolded a CLI in I don’t know how long. I don’t miss any of it.
Sounds like the author is confused or trying too hard to please the audience. I feel software engineering has higher expectations to move faster now, which makes it more difficult as a discipline.
I personally code data structures and algorithms for 1-2 hrs a day, because I enjoy it. I find it also helps keep me sharp and prevents me from building up too much cognitive debt with AI-generated code.
I find most AI generated code is over engineered and needs a thorough review before being deployed into production. I feel you still have to do some of it yourself to maintain an edge. Or at least I do at my skill level.
They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
That is actually the case with a lot of bakeries these days. But the one major difference is that the baker can rely, with almost 100% reliability, on the form, shape, and ingredients used being exact to the rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies on how to "best use the ovens", they don't claim to "vibe-bake" 10x more than what they used to bake before, etc. The semi-automated ovens just effing work!
Now show me an LLM that even remotely provides this kind of experience.
Eh, accuracy and reliability are a different topic, hashed out many times on HN. This thread is about productivity. I’m a staff engineer and I don’t know a single person not using AI. My senior engineers are estimating 40% gains in productivity.
A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact that's exactly what I would expect to happen at a tech company, ceteris paribus.
This is what I find interesting - the response from most companies is "we will need fewer engineers because of AI", not "we can build more things because of AI".
What is driving companies to want to get rid of people, rather than do more? Is it just short-term investor-driven thinking?
On another note, if you had 100 engineers and you lay almost all of them off and keep 5 super-AI-accelerated engineers, and your competitor keeps 50 of such engineers, your competitor is still able to iterate 10x as fast. So you still lay people off at the risk of falling behind.
Writing software isn't like a small bakery with fixed demand. There are always more features to build and improvements to do than capacity allows. For better or worse software products are never finished.
"You can learn anything now. I mean anything." This was true before before LLMs. What's changed is how much work it is to get an "answer". If the LLM hands you that answer, you've foregone learning that you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.
I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, they were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites that appeared on the fifth page of search results. Whatever the externalities of that are, in the short term, that part may be a net positive for learners.
On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.
I had multiple interactions like that. Someone asked an LLM for an ELI5 and tried to leverage that in a conversation, and... the abstraction they came back with feels profound to them, but is useless and wrong.
This. I feel this all the time. I love 3Blue1Brown's videos and when I watch them I feel like I really get a concept. But I don't retain it as well as I do things I learned in school.
It's possible my brain is not as elastic now in my 40s. Or maybe there's no substitute for doing something yourself (practice problems) and that's the missing part.
One factor in favor of the use of LLM as a learning tool is the poor quality of documentation. It seems we've forgotten how to write usable explanations that help readers to build a coherent model of the topic at hand.
> On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you.
The other benefit is that LLMs, for superficial topics, are the most patient teachers ever.
I can ask it to explain a concept multiple times, hoping that it'll eventually click for me, and not be worried that I'd look stupid, or that it'll be annoyed or lose patience.
It always comes down to economics and then the person and their attitude towards themselves.
Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for.
I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself.
BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education.
That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves."
Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem.
So I've actually been putting more effort into deliberate practice since I started using AI in programming.
I've been a fan of Zed Shaw's method for years, of typing out interesting programs by hand. But I've been appreciating it even more now, as a way to stave off the feeling of my brain melting :)
The gross feeling I have if I go too long without doing cardio is similar to the feeling when I go too long without actually writing a substantial amount of code myself.
I think that the feeling of making a sustained effort is itself something necessary and healthy, and rapidly disappearing from the world.
I've always liked the essential/accidental complexity split. It can be hard to find, but from a problem-solving perspective, it may define what's fun and what's a chore.
I've been reading the OpenBSD source lately, and it's quite nice how they've split the general OS concepts from the machine-dependent needs, and the general way they've separated interfaces and implementations.
I believe that once you've solved the essential problem, the rest becomes way easier because you've got a direction. But doing accidental problem-solving without having done the essential one is pure misery.
That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.
Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.
Yeah, it's a great way for me to reduce activation energy to get started on a specific topic. Certainly doesn't get me all the way home, but cracks it open enough to get started.
I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!
I've also wanted to play with C and Raylib for a long time and now I'm confident in coding by hand and struggling with it, I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.
I am beginning to disagree with this, or at least I am beginning to question its universal truth. For instance, there are so many times when "learning" is an exercise in applying wrong advice many times until something finally succeeds.
For instance, retrieving the absolute path an Angular app is running at in a way that is safe both on the client and in SSR contexts has a very clear answer, but there are a myriad of wrong ways people accomplish that task before they stumble upon the Location injectable.
In cases like the above, the LLM is often able to tell you not only the correct answer the first time (which means a lot less "noise" in the process trying to teach you wrong things) but also is often able to explain how the answer applies in a way that teaches me something I'd never have learned otherwise.
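To make that concrete, here's a minimal sketch of the pattern (assuming a recent Angular version with standalone components; the component name and template are made up for illustration). Angular's Location service from @angular/common is backed by a platform-appropriate implementation on both the browser and the server, so it avoids touching window directly:

```typescript
// Minimal sketch: read the current path in a way that is safe in both
// browser and SSR contexts by injecting Location instead of reading
// window.location. Component name and template are illustrative.
import { Component, inject } from '@angular/core';
import { Location } from '@angular/common';

@Component({
  selector: 'app-current-path',
  standalone: true,
  template: `<p>Current path: {{ currentPath }}</p>`,
})
export class CurrentPathComponent {
  private readonly location = inject(Location);

  // location.path() works during SSR because Location delegates to a
  // server-side PlatformLocation; window is never referenced.
  readonly currentPath = this.location.path();
}
```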
We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.
That said, there are also absolutely times (and sometimes it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long term/muscle memory or expands our understanding in a valuable way.
And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.
Is this maybe more about the quality of the documentation? I say this 'cause my thinking is that reading is reading, it takes the same time to read the information.
How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double check everything it says against the docs anyway
I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs.
For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you overindex on this idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems, which of course is not true. This is why we have politics, and why we have great politicians who hail from cultures that are underrepresented in high-level math study: getting elected, having popular ideas, and convincing people is the best way to solve far more of the problems people actually have than math is. This isn't to say that procedural thinking isn't valuable. It's just that, well, the joke's on you. ChatGPT will lose elections. But you can have it do procedural thinking pretty well, and what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude about, well, the math people destroying their own relevance.
All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and bad game.
Another way of stating this for the average CRUD developer is, most enterprise IT projects fail, so yeah, the learning didn't really matter anyway. It's not useful to learn how to deliver better failed enterprise IT project, other than to make money.
One more POV: the effortlessness of agentic programming makes me more sympathetic to anti-intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers, though many fewer in the academic category than, say, in the corporate world. If you told me a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning. Even in game development (cultural product development generally). It's better to wait for these tools to get more powerful than to learn meaningless arcana.
I do disagree with the notion that you have to slog through a problem to learn efficiently. The idea that it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way, and whom you can explore a topic with. That's extremely efficient and effective for learning. There is maybe a tradeoff somewhere in some things, but this idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want and about the things that you want_, which wasn't possible before. You had to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days.
I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently, and you'll learn. It acts as a guide, but you still do the work.
Offloading the hard work completely and just getting a summary isn't really learning.
I am running local offline small models in the old fashioned REPL style, without any agentic features. One prompt at a time.
Instead of asking for answers, I ask for specific files to read or specific command line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in Makefile).
I forbid the model from wandering around and giving me tons of irrelevant markdown text or generated scripts.
I ask straight questions and look for straight answers.
One line at a time, one file at a time.
This gives me plenty of room to think what I want and how I get what I want.
Learning what we want and what we need to do to achieve it is the precious learning experience that we don’t want to offload to the machine.
> I ask straight questions and look for straight answers. One line at a time, one file at a time.
I've also taken to using the Socratic Method when interrogating an LLM. No loaded questions, squeaky clean session/context, no language that is easy to misinterpret. This has worked well for me. The information I need is in there, I just need to coax it back out.
I did exactly this for an exercise a while back. I wanted to learn Rust while coding a project, and AI was invaluable for accelerating my learning. I needed to know completely off-the-wall things that involved translating idioms and practices from other languages. I also needed to know more about Rust idioms to solve specific problems and coding patterns. So I carefully asked these things, one at a time, rather than have it write the solution for me. I saved weeks if not months on that activity, and I'm at least dangerous with Rust now (still learning).
This. I'm also using an LLM very similarly and treat it like a knowledgeable co-worker I can ask for advice or to check something. I want to be the one applying changes to my codebase and then running the tests. OK, agents may improve efficiency, but it's a slippery slope. I don't want to sit here all day watching agents modify and re-modify my codebase; I want to do this myself, because it's still fun, though not as much fun as it was pre-AI.
It's kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running, doing work, and when you ask what the work is, you can never get a straight answer...
For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.
If execution no longer matters, then what possible ideas exist out there that both are highly valuable as well as only valuable to the first mover? If the second person to see the value in the idea can execute it in a weekend using AI tools, what value is there in the idea to begin with?
In fact the second mover advantage seems to me to be even larger than before. Let someone else get the first version out the door, then you just point your AI bot at the resulting product to copy it in a fraction of the time it took the original person to execute on it.
If anything, ideas seem to be even cheaper to me in this new world. It probably just moves what bits of execution matter even more towards sales and marketing and hype vs. executing on the actual product itself.
I think there might be some interesting spaces here opening up in the IP combined with "physical product" space. Where you need the idea as well as real-world practical manufacturing skills in order to execute. That will still be somewhat of a moat for a little while at least, but mostly at a scale where it's not worth an actual manufacturer from China to spin up a production line to compete with you at scale.
Eventually you will have to tell people what the idea is, even if it is at product launch. And then, if execution is as cheap and easy as they claim, then anyone can replicate the idea without having to engage with the person in the first place.
Fair enough. I know how that reads. But when anyone with a laptop and a subscription can ship production software in a weekend, the architecture and the idea start to matter a lot more. The technical details in the post are real. I just can't share the what yet. Take it or leave it.
This has been a fallacy for as long as businesses have been built, and it will still be a fallacy in the AI era.
Ideas are cheap and don't need to be protected. Your taste, execution, marketing, UX, support, and all the 1000 things that aren't the code still matter. The code will appear more quickly now: You still need to get people to use it or care about it.
I've found almost without fail that you have more to gain in sharing an idea and getting feedback (both positive and negative) before/while you build the thing than you do in protecting the idea with the fear that as soon as someone hears it they'll steal it and do it better than you.
(The exception I think is in highly competitive spaces where ideas have only a short lifetime -- eg High Frequency Trading / Wall Street in general. An idea for a trade can be worth $$ if done before someone else figures it out, and then it makes sense to protect the idea so you can make use of it first. But that's an extremely narrow domain.)
I don't think it's about ideas or even the code. It's about execution, marketing, talking to your customers and doing sales. This is something AI can't do...yet
I'm glad I am no longer in tech because I just don't want to do this.
This is not a dig at AI. If I take this article at face value, AI makes people more productive, assuming they have the taste and knowledge to steer their agents properly. And that's possibly a good thing even though it might have temporary negative side effects for the economy.
>But the AI is writing the traversal logic, the hashing layers, the watcher loops,
But unfortunately that's the stuff I like doing. And also I like communing with the computer: I don't want to delegate that to an agent (of course, like many engineers I put more and more layers between me and the computer, going from assembly to C to Java to Scala, but this seems like a bigger leap).
I'm a developer who was made redundant, and I'm now casting around for an entirely new job because, likewise, I have no interest in working with AI. It sounds boring, and the concept squicks me out, to be honest.
I work in tech, and I think the worst part is seeing all the pieces of catastrophe that have had to come together to make AI dominate.
There are several factors that are super depressing:
1. Economic productivity, and what it means for a company to be successful, have become detached from producing good, high-quality products. The stock market is the endgame now
2. AI is attempting to strongly reject the notion that developers understanding their code is good. This is objectively wrong, but it's an intangible skill that makes developers hard to replace, which is why management is so desperate for it
3. Developers had too much individual power, and AI feels like a modern attempt at busting the power of the workforce rather than a genuine attempt at a productivity increase
4. It has always been possible to trade long term productivity for short term gains. Being a senior developer means understanding this tradeoff, and resisting management pressure to push something out NOW that will screw you over later
5. The only way AI saves time in the long term is if you don't review its output to understand it as well as if you'd written it yourself. Understanding the code, and the large-scale architecture, is critical. It's a negative time savings if you want to write high-long-term-productivity code, because we've introduced an extra step
6. Many developers unfortunately simply do not care about writing good code; they just crank out any ol' crap. As long as you don't get fired, you're doing your job well enough. Who cares about making a product anymore? It doesn't matter. AI lets you do a bad job with much less effort than before
7. None of this is working. AI is not causing projects to get pushed out faster. There are no good, high-quality AI projects. The quality of code is going down, not up. Open source software is getting screwed
It's an extension of the culture where performance doesn't matter. Windows is all made of React components which are each individually a web browser, because the quality of the end product no longer matters. Software just becomes shittier, because none of these companies actually care about their products. AAA gaming is a good example of this, as are Windows, Discord, anything Google makes, IBM, Intel, AMD's software, etc.
A lot of this is a US problem, because of the economic conditions over there and the prevalence of insane venture capitalism and union busting. I have a feeling that as the EU gets more independent and starts to become a software competitor, the US tech market is going to absolutely implode
Right now I'm working two AI-jobs. I build agents for enterprises and I teach agent development at a university. So I'm probably too deep to see straight.
But I think the future of programming is English.
Agent frameworks are converging on a small set of core concepts: prompts, tools, RAG, agent-as-tool, agent handoff, and state/runcontext (an LLM-invisible KV store for sharing state across tools, sub-agents, and prompt templates).
These primitives, by themselves, can cover most low-UX application business use cases. And once your tooling can be one-shotted by a coding agent, you stop writing code entirely. The job becomes naming, describing, and instructing and then wiring those pieces together with something more akin to flow-chart programming.
So I think for most application development, the kind where you're solving a specific business problem, code stops being the relevant abstraction. Even Claude Code will feel too low-level for the median developer.
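To make those primitives concrete, here's a deliberately hypothetical sketch of how they tend to compose (every name and type below is illustrative, not any real framework's API):

```typescript
// Hypothetical sketch of the converging agent primitives; all names
// and types here are illustrative, not a real framework's API.
type RunContext = Map<string, unknown>; // LLM-invisible shared KV state

interface Tool {
  name: string;
  description: string; // what the model sees when deciding to call it
  run(args: Record<string, unknown>, ctx: RunContext): Promise<string>;
}

interface Agent {
  name: string;
  systemPrompt: string; // the "prompt" primitive
  tools: Tool[];        // including RAG exposed as a retrieval tool
  handoffs: Agent[];    // agents this one may transfer control to
}

// Agent-as-tool: wrap a whole agent so a parent agent can invoke it
// like any other tool, sharing the same LLM-invisible run context.
function asTool(
  agent: Agent,
  runAgent: (a: Agent, input: string, ctx: RunContext) => Promise<string>,
): Tool {
  return {
    name: agent.name,
    description: `Delegate a sub-task to the ${agent.name} agent.`,
    run: (args, ctx) => runAgent(agent, String(args["input"] ?? ""), ctx),
  };
}
```

Once you have these few pieces, "programming" the application layer really is mostly naming, describing, and wiring, which is the point.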
You think prompting is here to stay? SQL has survived a long period of time. Servlets haven't. We moved from assembly to higher-level languages. Flash couldn't make it. So I'm not sure for how long we will be prompting. Sure, it looks great right now (just like Flash, servlets, and assembly looked back then), but I think another technology will emerge that is perhaps based on prompts behind the curtains but doesn't look like current prompting.
I would say prompting is not here to stay. It’s just temporary “tech”
> The job becomes naming, describing, and instructing and then wiring those pieces together with something more akin to flow-chart programming.
That's precisely what people are bad at. If people don't grasp (even intuitively) the concept of a finite state machine and the difference between state and logic, LLMs are more like a wishing well (vibes) than a code generator (tooling for engineering).
Then there's the matter of technical knowledge. Software is layers of abstraction, and there's always another abstraction beneath. Not knowing those layers will limit your problem-solving capabilities.
Strangely we never hear gushing pieces on how great gcc is. If you have to advertise that much or recruit people with AI mania, perhaps your product isn't that great.
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance vs. too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and mental model for future projects, just like tons of regular devs have done in the past)?
I'm not asking an agent to build me a full-stack app. That's where you end up babysitting it like a kindergartener and honestly you'd be faster doing it yourself. The way I use agents is focused, context-driven, one small task at a time.
For example: I need a function that takes a dependency graph, topologically sorts it, and returns the affected nodes when a given node changes. That's well-scoped. The agent writes it, I review it, done.
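For instance, a rough sketch of what such a function might look like (illustrative only, not the actual reviewed code; the graph representation is an assumption):

```typescript
// Sketch: given a graph mapping each node to the nodes it depends on,
// return every node transitively affected by a change to `changed`
// (including `changed` itself), ordered so that each node appears
// before the nodes that depend on it. Assumes the graph is a DAG.
function affectedNodes(
  dependsOn: Map<string, string[]>,
  changed: string,
): string[] {
  // Invert the edges: for each node, record who depends on it.
  const dependents = new Map<string, string[]>();
  for (const [node, deps] of dependsOn) {
    for (const dep of deps) {
      if (!dependents.has(dep)) dependents.set(dep, []);
      dependents.get(dep)!.push(node);
    }
  }

  // DFS over dependents; post-order push plus a final reverse yields
  // a topological order of the affected subgraph.
  const visited = new Set<string>();
  const postOrder: string[] = [];
  const visit = (node: string): void => {
    if (visited.has(node)) return;
    visited.add(node);
    for (const d of dependents.get(node) ?? []) visit(d);
    postOrder.push(node);
  };
  visit(changed);
  return postOrder.reverse();
}

// Example: b depends on a, c depends on b.
// affectedNodes(new Map([["b", ["a"]], ["c", ["b"]]]), "a")
// → ["a", "b", "c"]
```

This is exactly the kind of task that's easy to verify by eye and by test, which is what makes it agent-friendly.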
But say I'm debugging a connection pool leak in Postgres where connections aren't being released back under load because a transaction is left open inside a retry loop. I'm not handing that to an agent. I already know our system. I know which service is misbehaving, I know the ORM layer, I know where the connection lifecycle is managed. The context needed to guide the agent properly would take longer to write than just opening the code and tracing it myself.
That's the line. If the context you'd need to provide is larger than the task itself, just do it. If the task is well-defined and the output is easy to verify, let the agent rip.
The muscle memory point is real though. I still hand-write code when I'm learning something new or exploring a space I don't understand yet. AI is terrible for building intuition in unfamiliar territory, because you can't evaluate output you don't understand. But for mundane scaffolding, boilerplate, things that repeat? I don't. Life's too short to hand-write your 50th REST handler.
I don't agree with the headline of "we're all AI engineers now", but I do agree that AI is more of a multiplier than anything. If you know what you're doing, you go faster, if you don't, you're just making a mess at a record pace.
I'm not sure how this sustains, though; like, I can't help but think this technology is going to dull a lot of people's skills, and other people just aren't going to develop skills in the first place. I have a feeling a couple of years from now this is going to be a disaster (I don't think AGI is going to happen, and I think the tools are going to get a lot more expensive when they start charging the true costs).
The issue is that you become lazy after a while and stop “leading the design”. And I think that’s ok because most of the code is just throwaway code.
You would rewrite your project/app several times by the time it's worth paying attention to "proper" architecture. I wish I had these AIs 10 years ago, so that I could have focused on everything I wanted to build instead of becoming a framework developer/engineer.
I agree. I've gotten lazier over time too. But the cost of creating code is so cheap... it's now less important to be perfect the first time the code hits prod (application dependent). It can be rewritten from scratch in no time. The bar for 'maintainability' is a lot lower now, because the AI has more capacity and persistence to maintain terrible code.
I'm sure plenty of people disagree with me. But I'm a good hand programmer, and I just don't feel the need to do that any more. I got into this to build things for other people, and AI is letting me do that more efficiently. Yes, I've had to give up a puritan approach to code quality.
I've been programming for literally my entire life. I love it, it's part of me, and there hasn't been more than a week in 30 years that I haven't written some code.
This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time.
But there's also the double edged sword. I am also having a really hard time moderating my working hours, which I naturally struggle with anyway, even more. Partly because I am having so much fun and being so productive. But also because it's just so tempting to add 1 more feature, fix one more bug.
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
I think this is coming, alongside professional licensure for "software engineers". Every public-facing project will need someone to put a literal stamp of approval on the code, and regardless whether Claude or Codex wrote the bulk of it, it'll be that person's head on a pike when something goes wrong.
This isn't what many of us probably would have wanted, but I think the public blowback when "AI-coded" systems start failing is going to drive us there. (Note to passing hype-men: I did not say they will fail at higher rates than human-coded systems! I happen to believe this, but it is not germane to the argument - only the public perception matters here.)
So far the issue for me is that you can generate more crap by far than you can keep an eye on.
Once you have your 50k line program that does X are you really going to go in there and deeply review everything? I think you're going to end up taking more and more on trust until the point where you're hostage to the AI.
I think this is what happens to managers of course - becoming hostage to developers - but which is worse? I'm not sure.
> I can still reverse a binary tree without an LLM. I can still reason about time complexity, debug a race condition by reading the code, trace a memory leak by thinking.
A 2026 AI Engineer is a 1996 Software Architect. I don't need to be the one manually implementing the individual widgets of a system, I can delegate their implementation to developers (agents).
I'm being a little facetious, but I don't think it's far off the mark from what TFA is saying, and it matches my experience over the past few months. The worst architects we ever worked with were the ones who couldn't actually implement anything from scratch. Like TFA says, if you've got the fundamentals down and you want to see how far you can go with these new tools, play the role of architect for a change and let the agents fly.
I've always designed systems along the classic path: requirements → use cases → schematization. With AI, I continue in the same spirit (structure precedes prompting), but now the foundational layer of my systems is axioms and constraints, and the architecture emerges through structured prompts. In this shift, AI is an aide in building systems that are logically grounded. This is where the "all of us as AI engineers" claim becomes subtle. Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning.
Saw the edit: I think that clarification was important.
The core point resonates with me personally. The shift isn't about writing less code, it's about where the real judgment lives. Knowing what to build, how to decompose a problem, which patterns to reach for - and critically, when the model is confidently wrong. Without that foundation you're not moving faster, you're just making bad decisions faster.
The scope point resonates too. Small, well-defined tasks with verifiable output is where agents actually shine.
Without writing some code, how will people really know what's right? I've supervised people before: one thinks one knows best and pontificates at them, and then, when one actually starts working in the codebase oneself, many issues become clear. If you never get your hands dirty, your decisions will tend off towards badness.
The perception seems to be that AI is only causing security vulnerabilities (see: openclaw injection in npm (Clinejection)). But the article's optimistic tone largely reflects my own, and if it were all bad, then nobody would be using AI. But it's mostly good, and, going by the benchmarks, it's a statistical fact that it helps more than it hurts. It's just math at a certain point.
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win.
It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
Right. What about K.I.S.S. (Keep It Simple, Stupid)? If I need a bunch of agents and various levels of orchestration simply to close a bunch of Jira tasks, then we have a problem. Also, what happens in a few years when this starts failing and human operators are no longer able to troubleshoot the issue, let alone fix it?
I get this. I don't think either of you is wrong. There's a real loss in not writing something from scratch and feeling it come together under your hands. I'm not dismissing that.
I have immense respect for the senior engineers who came before me. They built the systems and the thinking that everything I do now sits on top of. I learned from people. Not from AI. The engineers who reviewed my terrible pull requests, the ones who sat with me and explained why my approach was wrong. That's irreplaceable. The article is about where I think things are going, not about what everyone should enjoy.
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
Maybe to the people writing the invoices for the infra you're renting, sure. Or to the people who get paid to dig you out of the consequences you inevitably bring about. Remember, the faster the timescale, the worse we are wired to effectively handle it as human beings. We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
> We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
I suspect this has been said in one form or another since the discovery of fire itself.
Maybe I'm entirely out of the loop and a complete idiot, but I am really not sure at all what people mean when they talk about this stuff. I use AI agents every day, but people who say they spend 'most of my time writing agents and tools' must be living in an absolutely different world.
I don't understand how people are making anything that has any level of usefulness without a feedback loop with them at the center. My agents often can go off for a few minutes, maybe 10, and write some feature. Half of the time they will get it wrong, I realize I prompted wrong, and I will have to re-do it myself or re-do the prompt. A quarter of the time, they have no idea what they're doing, and I realize I can fix the issue that they're writing a thousand lines for with a single line change. The final quarter of the time I need to follow up and refine their solution either manually or through additional prompting.
That's also only a small portion of my time... The rest is curating data (which you've pretty much got to do manually), writing code by hand (gasp!), working on deployments, and discussing with actual people.
Maybe this is a limitation of the models, but I don't think so. To get to the vision in my head, there needs to be a feedback loop... Or are people just willing to abdicate that vision-making to the model? If you do that, how do you know you're solving the problem you actually want to?
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
FWIW I reported your post to the mods because it reads as completely AI-generated to me. My judgement was that it might have been slightly edited but is largely verbatim LLM output.
Some tells that you might want to look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of them, and it is IMO the most classic LLM writing tell at the moment. These are mostly variants of the "It's not X, but Y" theme:
- "Not whether they've adopted every tool, but whether they're curious"
- "I still drive the intuition. The agents just execute at a speed I never could alone."
- "The model doesn't save you from bad decisions. It just helps you make them faster."
- "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."
- "That's not prompting. That's engineering"
It is also telling that the reader basically can't take a breather; most of the sentences try to emphasize harder than the last one. There is no fluff though, no getting sidetracked. It reads as unnatural; humans do not usually think like this.
Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.
I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian solving other people's mistakes which I would never make, rather than architecting elegant greenfield solutions.
For example, we went from HTML being a declarative markup language to something imperative. Now, rather than designing websites as if we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on... forever. And I have, in countless comments on HN.
None of that matters now, because AI handles the implementation details. Now it's about executive function to orchestrate the work. An area I'm finding that I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.
So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.
Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.
Edit: I remembered my point. For kids these days immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. Much easier to label thinking outside the box as slop. In the age of tweets, I mean x's or whatever the heck they are now, long-form writing looks sus! Man I feel old.
“Hey Claude, you have a bunch of skills defined, some mcps, and memory filled with useful stuff. I want to use you on a machine accessible over SSH at <host>, can you clone yourself over?”
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal, is that I can considerably “up my game,” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.
That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.
The downside is the AI is always pushing me towards half-assed solutions that didn't solve the problem. Like just setting up distributed caching instead of compilation. It also keeps lying which requires me to redirect & audit its work. But I'm also learning much more than I ever could without AI.
> I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.
Dunning-Kruger as a service. Thank God software engineers are not in charge of building bridges.
I think he is absolutely right. But what if he is not right? Then he is also absolutely right. He is just always absolutely right, right? Even when he is not right? Yes, he is always absolutely right.
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested in the topic at all, and think developers are interchangeable.
The other problem is, how do you develop those instincts when you're starting up, now that AI is a better junior coder than most junior coders? This is something one needs to think about hard as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
The instincts can absolutely be developed faster with AI — if you set it up right. I work with an AI partner daily and one thing I've noticed is that it's a brutal mirror: it exposes gaps in your thinking immediately because it does exactly what you tell it, not what you meant.
That feedback loop, hundreds of times a day, compresses years of learning into months. The catch is you need guardrails — tests that fail when the AI drifts, review cycles you can't skip, architecture constraints it must respect.
That's what builds the instincts: not the AI doing the work for you, but the AI showing you where your understanding breaks down, fast enough that you actually learn from it. Just-In-Time Learning.
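To make that concrete, here is a minimal sketch of one such guardrail: a standalone Node/TypeScript check that fails CI when code under a UI layer imports the DB layer directly. The directory names and the import pattern it greps for are assumptions; adapt them to your codebase.

    // arch-check.ts - a guardrail the AI can't talk its way past:
    // fail the build if anything under src/ui imports from src/db.
    import { readdirSync, readFileSync } from "node:fs";
    import { join } from "node:path";

    // Recursively collect all .ts files under a directory.
    function tsFiles(dir: string): string[] {
      return readdirSync(dir, { withFileTypes: true }).flatMap((entry) =>
        entry.isDirectory() ? tsFiles(join(dir, entry.name))
          : entry.name.endsWith(".ts") ? [join(dir, entry.name)]
          : []
      );
    }

    // Any import whose path passes through a /db/ segment is an offense.
    const offenders = tsFiles("src/ui").filter((file) =>
      /from\s+["'][^"']*\/db\//.test(readFileSync(file, "utf8"))
    );

    if (offenders.length > 0) {
      console.error("UI code must not import the DB layer:", offenders);
      process.exit(1); // the review cycle you can't skip
    }

The point isn't this specific rule; it's that the constraint is executable, so a drifting agent gets a hard failure instead of a polite suggestion.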
This is the question I keep coming back to. I don't have a clean answer yet.
The foundation I built came from years of writing bad code and understanding why it was bad. I look at code I wrote 10 years ago and it's genuinely terrible. But that's the point. It took time, feedback, reading books, reviewing other people's work, failing, and slowly building the instinct for what good looks like. That process can't be skipped.
If AI shortens the path to output, educators have to double down on the fundamentals. Data structures, systems thinking, understanding why things break. Not because everyone needs to hand-write a linked list forever, but because without that foundation you can't tell when the AI is wrong. You can't course-correct what you don't understand.
Anyone can break into tech. That's a good thing. But if someone becomes a purely vibe-coding engineer with no depth, that's not on them. That's on the companies and institutions that didn't evaluate for the right things. We studied these fundamentals for a reason. That reason didn't go away just because the tools got better.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
I don't think the comparison (that's often made) between AI and compilers is valid though.
A compiler is deterministic. It's a function; it transforms input into output and validates it in the process. If the input is incorrect it simply throws an error.
AI doesn't validate anything, and transforms a vague input into a vague output, in a non-deterministic way.
A compiler can be declared bug-free, at least in theory.
But it doesn't mean anything to say that the chain 'prompt-LLM-code' is or isn't "correct". It's undecidable.
> People always learn the things they need to learn.
No, they don't. Which is why a huge % of people are functionally illiterate at the moment, know nothing about finance and statistics and such, and are making horrendous decisions for their future and their bottom line, and so on.
There is also such a thing as technical knowledge loss between generations.
I find it really sad how stubbornly people dismiss AI as a slop generator.
I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it is absolutely worth it.
What about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
I believe the orange site's consensus was that it's approximately one additional mini-fridge or dishwasher worth of consumption on average. You've got users who barely use 1k tokens per week with these tools. Assuming it's all batched, ideally that's like running an LED floodlight for a minute or so. The other end of the spectrum can be pretty extreme in consumption, but it's also rare. Most people just use the ad-hoc stuff.
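Back-of-envelope, with both numbers loudly assumed (published per-token energy estimates vary by an order of magnitude):

    // Sanity check of the "LED floodlight for a minute" comparison.
    const whPerThousandTokens = 0.3; // assumed inference energy in Wh - estimates vary widely
    const floodlightWatts = 20;      // a typical LED floodlight
    const minutes = (whPerThousandTokens / floodlightWatts) * 60;
    console.log(minutes.toFixed(1)); // ~0.9 minutes of floodlight per 1k tokens

So the comparison roughly holds for light users, and scales linearly from there for heavy ones.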
The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
And then add on top the environmental impact of all of the money that programmer gets from programming - travels around the world, buying large houses, ...
If you care about the environment, you should want AIs replacing humans at most jobs so that they can no longer afford traveling around the world and buying extravagant stuff.
Yes, the environmental impact of an AI agent performing a given task is lower. However, we will not simply replace every programmer with an agent: in the process we will use more agents, exceeding the previous environmental impact of the humans. This is the rebound effect [0].
Your reasoning could be effective if we bounded the computing resources usable by all AI in order to meet carbon reduction goals.
> The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
The programmer will continue to exist as a consumer of those things even if they get replaced by AI in their job.
The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.
Seems strange; for decades we allowed developers to use what made them comfortable. You like Notepad? Go ahead and use it. Don't want an LSP? That's fine, disable it.
So long as their productivity was on par with the rest of the team, there was no issue.
Suddenly, everyone needs to use this new tool (which we haven't proven to actually be effective), and if you don't, you don't belong in the industry.
> So long as their productivity was on par with the rest of the team there was no issue.
Emphasis added. And anyway, for most software dev in most shops it wasn't true; most development takes place in whatever IDE the group/organization standardized on for the task, to make sure everyone gets proper tooling and to make collaboration and information sharing easier. Think of all the Java enterprise software developed by legions of drones in the 2000s and 2010s. They all used Eclipse, because Eclipse is what they were given.
It's only with the emergence of whiny, persnickety Unix devs who refused to leave the comforting embrace of their editor of choice that shops in the internet/dotcom/startup tradition embraced a "use whatever tools you want" philosophy. They had uncharacteristically enormous leverage over the tech stack being deployed in such businesses and could force employers to make that concession. And anyway, what some of them could do with vi blew the boss's mind.
It is true that we don't have a whole lot of hard data from large organizations that show AI productivity improvements. But absence of evidence is not evidence of absence. Turns out, most large organizations just haven't adopted AI in the amount and ways that could make a big impact.
But we have enough anecdata from competent developers to suggest that the productivity gains are huge. So big, AI not only lets you do your normal tasks many times faster, it puts projects within reach that you would not have countenanced before because they were too complex or tedious to be worth the payoff.
So no. Refusing to use AI is just pure bloody-mindedness at this point - like insisting on using a keypunch while everyone around you discovers the virtues of CRT terminals and timesharing. There were people like this even in the 1970s when IBM finally came around and made timesharing available in their mainframes. Those people either got up to speed or moved on to a different profession. They couldn't keep working the way they'd been working because the productivity expectations changed with the availability of new technology.
I read the parent comment as calling the majority of AI users "incurious", not as referring to those of us who resist AI for whatever reasons. The curious AI users can obtain self-improvement; the incurious ones want money, or at least custom software, without caring how it's made.
I don't want the means of production to be located inside companies that can only exist on a steady bubble of VC dollars. It's perfectly reasonable to try AI or use it sparingly, but not to embrace it, for reasons that can be articulated. Not relevant to the parent commenter's point, though. Maybe you are "replying" to the article?
Underlying this and similar arguments is the presumption that the "old way" was perfect. You and your colleagues weren't making just one mistake per 100 successful commits. I have been in the industry for decades, and I can tell you that I do something stupid when writing code manually quite often. The same goes for the people I work with. So fear that the LLM will make mistakes can't really be the reason. Or if it is the reason, it isn't a reasonable objection.
you make it seem like ai hesitation is a misunderstood fringe position, but it's not. i don't think anyone is confused about why some people are uninterested in ai tooling, but we do think you're wrong and the defensive posturing lines in the sand come off as incredibly uncurious.
I'd argue these are good questions to ask in general, about many topics. That it's an essential skill of an engineer to ask these types of questions.
There are two critical mistakes that people often make: 1) thinking there's only one solution to any given problem, and 2) thinking that, were there an absolute optimum, they've converged into the optimal region. If you carefully look at many of the problems people routinely argue about, you'll find that they are often working under different sets of assumptions. It doesn't matter if it's AI vs non-AI coding (or what mix), Vim vs Emacs vs VSCode, Windows vs Mac vs Linux, or even various political issues (no examples, because we all know what will happen if I give any, which only illustrates my point). There are no objective answers to these questions, and global optima only have the potential to exist when the questions are highly constrained. The assumptions are understood by those you work closely with, but that breaks down quickly.
If your objective is to seek truth you have to understand the other side. You have to understand their assumptions and measures. And just like everyone else, these are often not explicitly stated. They're "so obvious" that people might not even know how to explicitly state them!
But if the goal is not to find truth but instead find community, then don't follow this advice. Don't question anything. Just follow and stay in a safe bubble.
We can all talk but it gets confusing. Some people argue to lay out their case and let others attack, seeking truth, updating their views as weaknesses are found. Others are arguing to social signal and strengthen their own beliefs, changing is not an option. And some people argue just because they're addicted to arguing, for the thrill of "winning". Unfortunately these can often look the same, at least from the onset.
Personally, I think this all highlights a challenge with LLMs, one that only exacerbates the problem of giving everyone access to all human knowledge: it's difficult to distinguish fact from fiction. I think it's only harder when you have something that talks smoothly and loves to use jargon. People do their own research all the time and come to wildly wrong conclusions. Not because they didn't try, not because they didn't do hard work, and not because they're specifically dumb, but because it's actually difficult to find truth. It's why you have PhD-level domain experts disagreeing on things in their shared domain. That's usually more nuanced, but that's also at a very high level of expertise.
I am solidly in this "curious" camp. I've read HN for the past 15(?) years. I dropped out of CS and got an art degree instead. My career is elsewhere, but along the way, understanding systems was a hobby.
I always kind of wanted to stop everything else and learn "real engineering," but I didn't. Instead, I just read hundreds (thousands?) of arcane articles about enterprise software architecture, programming language design, compiler optimization, and open source politics in my free time.
There are many bits of tacit knowledge I don't have. I know I don't have them, because I have that knowledge in other domains. I know that I don't know what I don't know about being a "real engineer."
But I also know what taste is. I know what questions to ask. I know the magic words, and where to look for answers.
For people like me, this feels like an insane golden age. I have no shortage of ideas, and now the only thing I have is a shortage of hands, eyes, and on a good week, tokens.
So from my perspective as a professional programmer, my feeling is good on you, like, you're empowered to make things and you're making them. It reminds me of people making PHP sites when the web was young and it was easier to do things.
I think where I get really irritated with the discourse is when people find something that works for them, kinda, and they're like "WELL THIS IS WHAT EVERYONE HAS TO DO NOW!" I wouldn't care if I felt like "oh, just a rando on the internet has a bad opinion", the reason this subject bothers me is words do matter and when enough people are thoughtlessly on the hype train it starts creating a culture shift that creates a lot of harm. And eventually cooler heads prevail, but it can create a lot of problems in the meantime. (Look at the damage crypto did!)
But that knowledge was never hidden or out of reach. Why not read books, manuals, or take online classes? There is free access to all these things; the only cost is time and energy.
Everyone has tons of ideas. But every good engineer (and scientist) also knows that most of our ideas fall apart when either thinking deeper or trying to implement it (same thing, just mental or not). Those nuances and details don't go away. They don't matter any less. They only become less visible. But those things falling apart is also incredibly valuable. What doesn't break is the new foundation to begin again.
The bottleneck has never been a shortage of ideas, nor the hands to implement them. The bottleneck has always been complexity. As the world advances, so does the complexity needed to improve it.
I don't mean to be rude, but you write like a chatbot. This makes sense, to be honest.
You think you know what taste is. Have you been cranking on real systems all these years, or have you been on the sidelines armchairing the theoretics? I'm not trying to come across as rude, but it may be unavoidable to some degree when indirect criticism is involved.
A laboring engineer has precious little choice in the type of systems available to work on. Fundamentally, it's all going to be some variant of a system that makes money for someone else somehow, or a system that burns money but ensures necessary work gets done somehow. That's it. That's the extent of the optimization function as defined by capitalism. Taste falls by the wayside; what matters these days is whether you're in the good graces of the optimizers at the center of the capital-centralization machine, the ones making the primary decisions about where capital gets allocated. So you make what they want or you don't get paid.
As an Arts person, you should understand that no matter how sublime the piece is to the artist, a rumbling belly is all that awaits you if your taste does not align with the holders of the fattest purses. I'm not speaking from a place of contempt here; I have a Philosophy background, and I'm reaching out as one person of the Humanities to another.
We've lost sight of why we do things and let ourselves become enslaved by the balance sheets. The economy was supposed to serve the people; it's now the other way around. All we do is feed more bodies to the wood chipper. Until we wake up from that, not even a desperate hope in the matter of taste will save us. We'll just keep following the capital gradient until we end up selling the world from under ourselves, because it's the only thing we have left, and there are only the usual suspects as buyers.
Ok fella. But show me something then. This is all talk.
Personally, I have been able to produce a very good output with Grok in relation to a video. However, it was insanely painful and very annoying to produce. In retrospect I would've much preferred to hire humans.
Not to mention I used about 50 free-trial Grok accounts, so who knows what the costs involved were? Tens of thousands no doubt.
Standard AI promotion talking points. Show us the frigging code - or, presumably, your failed slow website that looks like a bootcamp website from 2014.
But that's the problem. Something that can be so reliable at times, can also fail miserably at others. I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load. You're not just coding anymore, you're thinking about what needs to be done, and then reviewing it as if someone else wrote the code.
LLMs are great for rapid prototyping, boilerplate, that kind of thing. I myself use them daily. But the amount of mistakes Claude makes is not negligible in my experience.
> I've seen this in myself and colleagues of mine, where LLM use leads to faster burnout and higher cognitive load.
This needs more attention. There's a lot of inhumanity in the modern workplace and modern economy, and that needs to be addressed.
AI is being dumped into the society of 2026, which is about extracting as much wealth as possible for the already-wealthy shareholder class. Any wealth, comfort, or security anyone else gets is basically a glitch that "should" be fixed.
AI is an attempt to fix the glitch of having a well-compensated and comfortable knowledge worker class (which includes software engineers). They'd rather have what few they need running hot and burning out, and a mass of idle people ready to take their place for bottom-dollar.
This is a fair observation, and I think it actually reinforces the argument. The burnout you're describing comes from treating AI output as "your code that happens to need review." It's not. It's a hypothesis. Once you reframe it that way, the workflow shifts: you invest more in tests, validation scenarios, acceptance criteria, clear specs. Less time writing code, more time defining what correct looks like. That's not extra work on top of engineering. That is the engineering now. The teams I've seen adapt best are the ones that made this shift explicit: the deliverable isn't the code, it's the proof that the code is right.
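Concretely, "the proof that the code is right" can be as simple as executable acceptance criteria written before the agent starts. A sketch - the module and function are hypothetical stand-ins for whatever the agent is asked to implement:

    // accept.ts - the spec is the deliverable; the agent's code must satisfy it.
    import assert from "node:assert";
    import { parseDuration } from "./duration"; // hypothetical module the agent implements

    // Acceptance criteria, not implementation details:
    assert.strictEqual(parseDuration("90s"), 90_000);      // returns milliseconds
    assert.strictEqual(parseDuration("1.5m"), 90_000);     // handles fractional units
    assert.throws(() => parseDuration("ninety seconds"));  // must reject prose
    console.log("acceptance criteria hold");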
This is a fair point. The cognitive load is real. Reviewing AI output is a different kind of exhausting than writing code yourself.
Even when the output is "guided," I don't trust it. I still review every single line. Every statement. I need to understand what the hell is going on before it goes anywhere. That's non-negotiable. I think it gets better as you build tighter feedback loops and better testing around it, but I won't pretend it's effortless.
You are correct, but this is not a new role. AI effectively makes all of us tech leads.
Prototyping is a perfectly fine use of LLMs - it's easier to assess something that is closer to finished than something that is not.
But that won't generate the returns model producers need :) This is the issue. So they will keep pushing nonsense.
One issue is that developers have been trained for the past few decades to look for solutions to problems online by just dumping a few relevant keywords into Google. But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
You are correct. You absolutely must fill the token space with unambiguous requirements, or Claude will just get "creative". You don't want the AI to do creative things, in the same way you don't want an intern to.
That said, I have found that I can get a lot of economy from speaking in terms of jargon, computer science formalisms, well-documented patterns, and providing code snippets to guide the LLM. It's trained on all of that, and it greatly streamlines code generation and refactoring.
Amusingly, all of this turns the task of coding into (mostly) writing a robust requirements doc. And really, don't we all deserve one of those?
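To illustrate the difference (the helper and file names here are invented for the example):

    Keyword-dump prompt: "retry logic http typescript"

    Requirements-style prompt: "Add retry with exponential backoff to the
    fetchWithAuth helper. Constraints: at most 3 attempts; retry only on
    429 and 5xx responses; honor a Retry-After header when present; add
    jitter of up to 250 ms; never retry non-idempotent POSTs; follow the
    error-wrapping pattern in src/http/errors.ts; no new dependencies."

The second one reads exactly like a requirements doc, which is the point.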
> But to get the most out of AI you should really be prompting as if you were writing a formal letter to the British throne explaining the background of your request. Basic English writing skills, and the ability to formulate your thoughts in a clear manner, have become essential skills for engineering (and something many developers simply lack).
That's probably why spec driven development has taken off.
The developers who can't write prompts now get AI to help with their English, and with clarifying their thoughts, so that other AI can help write their code.
> the ability to formulate your thoughts in a clear manner, have become essential skills for engineering
<Insert astronauts meme “Always has been”>
Dijkstra (1970) "Notes On Structured Programming" (EWD249), Section 3 ("On The Reliability of Mechanisms"), p. 7.
And
Dijkstra (1976-79) On the foolishness of "natural language programming" (EWD 667)
Engineers will go back in and fix it when they notice a problem, or find someone who can. AI will send happy little emoji while it continues to trash your codebase and bring it to a state of total unmaintainability.
I agree on the curiosity part. I have a non-CS background, but I learned to program purely out of curiosity. This led me to build production applications that companies actually use - and this was before the AI era.
Now, with AI I feel like I have an assistant engineer with me who can help me build exciting things.
I'm currently teaching a group of very curious non-technical content creators at one of the firms I consult at. I set up Codex for them, created the repo to have lots of hand-holding built in - and they took off. It's been 4 weeks and we already have 3 internal tools deployed, one of which eliminated so much of another team's busywork that they now have twice the capacity. These are all things 'real' engineers and product managers could have done, but simply empowering people to solve their own problems is way faster. Today, several of them came to me and asked me to explain what APIs are (they want to use the Google Workspace APIs for something).
I wrote out a list of topics/key words to ask AI about and teach themselves. I've already set up the integration in an example app I will give them, and I literally have no idea what they are going to build next, but I'm .. thrilled. Today was the first moment I realized, maybe these are the junior engineers of the future. The fact that they have nontechnical backgrounds is a huge bonus - one has a PhD in Biology, one a masters in writing - they bring so much to the process that a typical engineering team lacks. Thinking of writing up this case study/experience because it's been a highlight of my career.
Your experience is the exact opposite of mine. I have people constantly telling me how LLMs are perfectly one shotting things. I see it from friend groups, coworkers, and even here on HN. It's also what the big tech companies are often saying too.
I'm sorry, but to say that nobody is talking about success and just concentrating on failure is entirely disingenuous. You claim the group is a minority, yet all evidence points otherwise. The LLM companies wouldn't be so successful if people didn't believe it was useful.
> But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Because the instances of this happening are a) random and b) rare?
This is my experience too. Also, the ones not striving for simplicity and not architecting end up with giant monsters that are very unstable and very difficult to update or make robust. They usually then look for another engineer to solve their mess. Usually, the easy way for the new engineer is just to architect and then turbo-build with Claude Code. But they are stuck in sunk cost prison with their mess and can't let it go :(
The K-shaped workforce point is sharp and I think you're right. The curious ones are a minority, but they've always been the ones who moved things forward. AI just made the gap more visible :)
Your Codex case study with the content creators is fascinating. A PhD in Biology and a masters in writing building internal tools... that's exactly the kind of thing I meant by "you can learn anything now." I'm surrounded by PhDs and professors at my workplace and I'm genuinely positive about how things are progressing. These are people with deep domain expertise who can now build the tools they need. It's an interesting time. Please write that up...
When AI screws up, it's "stupid." When AI succeeds, I'm smart.
It's some cousin of the Fundamental Attribution Error.
Quite frankly, if AI can write better code than most of your engineers "hundreds of times", then your hiring team is doing something terribly wrong.
Maybe. The reality of software engineering is that there's a lot of mediocre developers on the market and a lot of mediocre code being written; that's part of the industry, and the jobs of engineers working with other engineers and/or LLMs is that of quality control, through e.g. static analysis, code reviews, teaching, studying, etc.
The "most engineers" not "most engineers we've hired".
But also "most engineers" aren't very good. AIs know tricks that the average "I write code for my dayjob" person doesn't know or frankly won't bother to learn.
> But no one ever mentions the hundreds of times it quietly wrote code that is better than most engineers can write.
Are you serious? I've been hearing this constantly since mid-2025.
The gaslighting over AI is really something else.
I've also never seen jobs advertised before whose purpose was to lobby skeptical engineers about how to engage in technical work. This is entirely new. There is a priesthood developing around this.
I wrote code by hand for 20 years. Now I use AI for nearly all code. I just can’t compete in speed and thoroughness. As the post says, you must guide the AI still. But if you think you can continue working without AI in a competitive industry, I am absolutely sure you will eventually have a very bad time.
Their story is clearly fake. No one is getting screenshots of broken code texted to them so often that it's daily, and if they did, everyone must hate them.
you’ve been hearing that since mid 2025 bc that’s when it became true.
> I enjoy writing code. Let me get that out of the way first.
> I haven’t written a boilerplate handler by hand in months. I haven’t manually scaffolded a CLI in I don’t know how long. I don’t miss any of it.
Sounds like the author is confused, or trying too hard to please the audience. I feel software engineering now carries higher expectations to move faster, which makes it more difficult as a discipline.
I personally code data structures and algorithms for 1-2 hrs a day, because I enjoy it. I find it also helps keep me sharp and prevents me from building up too much cognitive debt with AI-generated code.
I find most AI generated code is over engineered and needs a thorough review before being deployed into production. I feel you still have to do some of it yourself to maintain an edge. Or at least I do at my skill level.
They will never admit it, but many are scared of losing their jobs.
This threat, while not yet realized, is very real from a strictly economic perspective.
AI or not, any tool that improves productivity can lead to workforce reduction.
Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
You have a choice: fire 5 people, or produce 2,000 loaves per month. But does the city really need that many loaves?
To make matters worse, all your competitors also have the same semi-automatic ovens...
> Consider this oversimplified example: You own a bakery. You have 10 people making 1,000 loaves of bread per month. Now, you have new semi-automatic ovens that allow you to make the same amount of bread with only 5 people.
That is actually the case with a lot of bakeries these days. But there is one major difference: the baker can rely, with almost 100% reliability, on the form, shape, and ingredients used being exact to within a rounding error. Each time. No matter how many times they use the oven. And they don't have to invent strategies for how to "best use the ovens"; they don't claim to "vibe-bake" 10x more than what they used to bake before, etc. The semi-automated ovens just effing work!
Now show me an LLM that even remotely provides this kind of experience.
Eh, accuracy and reliability are a different topic, hashed out many times on HN. This thread is about productivity. I’m a staff engineer and I don’t know a single person not using AI. My senior engineers are estimating 40% gains in productivity.
A bit simplistic. The bakery can just expand its product range or do various other things to add work. In fact that's exactly what I would expect to happen at a tech company, ceteris paribus.
This is what I find interesting - the response from most companies is "we will need fewer engineers because of AI", not "we can build more things because of AI".
What is driving companies to want to get rid of people, rather than do more? Is it just short-term investor-driven thinking?
A market has to exist for this expanded range and for the expanded ranges of every other bakery. Otherwise the bakery's just wasting flour.
Where is this expanded demand coming from?
On another note, if you had 100 engineers and you lay almost all of them off and keep 5 super-AI-accelerated engineers, and your competitor keeps 50 of such engineers, your competitor is still able to iterate 10x as fast. So you still lay people off at the risk of falling behind.
Writing software isn't like a small bakery with fixed demand. There are always more features to build and improvements to do than capacity allows. For better or worse software products are never finished.
Maybe the bakery expands to make more than just loaves of bread, maybe different cakes, sandwiches, maybe expand delivery to nearby towns.
I don't think it's valid to reduce the act of creating software to an assembly line, especially with Amdahl's law.
"You can learn anything now. I mean anything." This was true before before LLMs. What's changed is how much work it is to get an "answer". If the LLM hands you that answer, you've foregone learning that you might otherwise have gotten by (painfully) working out the answer yourself. There is a trade-off: getting an answer now versus learning for the future. I recently used an LLM to translate a Linux program to Windows because I wanted the program Right Now and decided that was more important than learning those Windows APIs. But I did give up a learning opportunity.
I'm conflicted about this. On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you. Sure, they were available before, but maybe in textbooks you needed to pay for (how quaint), or on websites that appeared on the fifth page of search results. Whatever the externalities of that, in the short term, that part may be a net positive for learners.
On the other hand, learning is doing; if it's not at least a tiny bit hard, it's probably not learning. This is not strictly an LLM problem; it's the same issue I have with YouTube educators. You can watch dazzling visualizations of problems in mathematics or physics, and it feels like you're learning, but you're probably not walking away from that any wiser because you have not flexed any problem-solving muscles and have not built that muscle memory.
I had multiple interactions like that. Someone asked an LLM for an ELI5 and tried to leverage that in a conversation, and... the abstraction they came back with feels profound to them, but is useless and wrong.
This. I feel this all the time. I love 3Blue1Brown's videos and when I watch them I feel like I really get a concept. But I don't retain it as well as I do things I learned in school.
It's possible my brain is not as elastic now in my 40s. Or maybe there's no substitute for doing something yourself (practice problems) and that's the missing part.
One factor in favor of the use of LLM as a learning tool is the poor quality of documentation. It seems we've forgotten how to write usable explanations that help readers to build a coherent model of the topic at hand.
> On one hand, I think LLMs make it easier to discover explanations that, at least superficially, "click" for you.
The other benefit is that LLMs, for superficial topics, are the most patient teachers ever.
I can ask it to explain a concept multiple times, hoping that it'll eventually click for me, and not be worried that I'd look stupid, or that it'll be annoyed or lose patience.
> learning is doing;
I could not agree more.
It always comes down to economics and then the person and their attitude towards themselves.
Some things are worth learning deeply, in other cases the easy / fast solution is what the situation calls for.
I've thought recently that some kinds of 'learning' with AI are not really that different from using Cliffs Notes back in the day. Sometimes getting the Cliffs Notes summary was the way to get a paper done OR a way to quickly get through a boring/challenging book (Scarlet Letter, amirite?). And in some cases reading the summary is actually better than the book itself.
BUT - I think everyone could agree that if you ONLY read Cliffs Notes, you're just cheating yourself out of an education.
That's a different and deeper issue because some people simply do not care to invest in themselves. They want to do minimum work for maximum money and then go "enjoy themselves."
Getting a person to take an interest in themselves, in their own growth and development, to invite curiosity, that's a timeless problem.
So I've actually been putting more effort into deliberate practice since I started using AI in programming.
I've been a fan of Zed Shaw's method for years, of typing out interesting programs by hand. But I've been appreciating it even more now, as a way to stave off the feeling of my brain melting :)
The gross feeling I have if I go for too long without doing cardio, is a similar feeling to when I go for too long without actually writing a substantial amount of code myself.
I think that the feeling of making a sustained effort is itself something necessary and healthy, and rapidly disappearing from the world.
I’ve always liked the essential/accidental complexity split. It can be hard to find, but from a problem-solving perspective, it may define what’s fun and what’s a chore.
I’ve been reading the OpenBSD source lately, and it’s quite nice how they’ve split the general OS concepts from the machine-dependent needs. And the general way they’ve separated interfaces and implementations.
I believe that once you’ve solved the essential problem, the rest becomes way easier because you have a direction. But doing accidental problem solving without having done the essential work is pure misery.
That's not what the author means. Multiple times a day, I have conversations with LLMs about specific code or general technologies. It is very similar to having the same conversation with a colleague. Yes, the LLM may be wrong. Which is why I'm constantly looking at the code myself to see if the explanation makes sense, or finding external docs to see if the concepts check out.
Importantly, the LLM is not writing code for me. It's explaining things, and I'm coming away with verifiable facts and conceptual frameworks I can apply to my work.
Yeah, it's a great way for me to reduce activation energy to get started on a specific topic. Certainly doesn't get me all the way home, but cracks it open enough to get started.
I kinda wonder to what extent grad students’ experience grading projects and homework will end up being a differentiating skill. 75% kidding.
My solution to this is to prioritize. There isn't enough time in a person's life to learn everything anyways.
Selectively pick and struggle through things you want to learn deeply. And let AI spoon-feed you for things you don't care as much about.
I've managed to go my whole career using regex and never fully grokking it, and now I finally feel free to never learn!
I've also wanted to play with C and Raylib for a long time and now I'm confident in coding by hand and struggling with it, I just use LLMs as a backstop for when I get frustrated, like a TA during lab hours.
I am beginning to disagree with this, or at least I am beginning to question its universal truth. For instance, there are many times when "learning" is an exercise in applying wrong advice over and over until something finally succeeds.
For instance, retrieving the absolute path an Angular app is running at in a way that is safe both on the client and in SSR contexts has a very clear answer, but there are a myriad of wrong ways people accomplish that task before they stumble upon the Location injectable.
In cases like the above, the LLM is often able to tell you not only the correct answer the first time (which means a lot less "noise" in the process trying to teach you wrong things) but also is often able to explain how the answer applies in a way that teaches me something I'd never have learned otherwise.
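For the curious, a minimal sketch of what that clear answer looks like, using Angular's Location service from @angular/common (it delegates to PlatformLocation, which has a server-side implementation, so it is safe where reading window.location is not); the component name is invented for the example:

    import { Component, inject } from "@angular/core";
    import { Location } from "@angular/common";

    @Component({
      selector: "app-where-am-i",
      standalone: true,
      template: "{{ path }}",
    })
    export class WhereAmIComponent {
      // Safe on the client and under SSR, unlike window.location.
      private readonly location = inject(Location);
      readonly path = this.location.path(); // path relative to the base href
    }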
We have spent the last 3 decades refining what it means to "learn" into buckets that held a lot of truth as long as the search engine was our interface to learning (and before that, reading textbooks). Some of this rhetoric begins to sound like "seniority" at a union job or some similar form of gatekeeping.
That said, there are also absolutely times (and sometimes it's not always clear that a particular example is one of those times!!) when learning something the "long" way builds our long term/muscle memory or expands our understanding in a valuable way.
And this is where using LLMs is still a difficult choice for me. I think it's less difficult a choice for those with more experience, since we can more confidently distinguish between the two, but I no longer think learning/accomplishing things via the LLM is always a self-damaging route.
AI gave you the option of making it happen without learning anything.
It also gives you an avenue to accelerate your learning if that is your goal.
I learn a lot faster now with LLMs.
You could learn the windows APIs much faster if you wanted to learn them
Is this maybe more about the quality of the documentation? I say this 'cause my thinking is that reading is reading, it takes the same time to read the information.
How is this faster than just reading the documentation? Given that LLMs hallucinate, you have to double check everything it says against the docs anyway
Reminds some of something a friend said towards the end of college: “it’s only like 12 thousand dollars a year to learn everything there is to know”
Take it with a grain of salt..
It is uncertain what will be valuable in the future at the rate things are changing.
Books are for the mentally enfeebled who can't memorize knowledge.
- Socrates
I can’t tell if this is a genuine quote or not. Can you provide a citation?
(I think something like this comes up in the Phaedrus)
Aren't books to communicate knowledge?
Written by Plato.
You're quoting the wrong person, but he did not foresee the benefit of leveraging the work of others to extend and build on top of it.
I don't know, most shit I learned programming (and subsequently get paid for) is meaningless arcana. For example, Kubernetes. And for you, it's Windows APIs.
For programming in general, most learning is worthless. This is where I disagree with you. If you belong to a certain set of cultures, you overindex on the idea that math (for example) is the best way to solve problems, that you must learn all this stuff by a certain pedagogy, and that the people who are best at this are the best at solving problems - which of course is not true. This is why we have politics, and why we have great politicians who hail from cultures underrepresented in high-level math study: getting elected, having popular ideas, and convincing people solves way more of the problems people actually have than math does. This isn't to say that procedural thinking isn't valuable. It's just that, well, the joke's on you: ChatGPT will lose elections, but you can have it do procedural thinking pretty well - so what does the learning and economic order look like now? I reject this form of generalization, but there is tremendous schadenfreude about the math people destroying their own relevance.
All that said, my actual expertise, people don't pay for. Nobody pays for good game design or art direction (my field). They pay because you know Unity and they don't. They can't tell (and do not pay for) the difference between a good and bad game.
Another way of stating this for the average CRUD developer is, most enterprise IT projects fail, so yeah, the learning didn't really matter anyway. It's not useful to learn how to deliver better failed enterprise IT project, other than to make money.
One more POV: the effortlessness of agentic programming makes me more sympathetic to anti-intellectualism. Most people do not want to learn anything, including people at fancy colleges, including your bosses and your customers - though many fewer in the academic category than, say, in the corporate world. If you told me a chatbot could achieve in hours what would take a world expert days or weeks, I would wisely spend more time playing with my kids and just wait. The waiters are winning. Even in game development (cultural product development generally), it's better to wait for these tools to get more powerful than to learn meaningless arcana.
Convincing / coercing a bunch of slaves to build a pyramid takes a leader.
But no amount of politics and charisma will calculate the motions of the planets or put satellites in orbit.
A nation needs more than just influencers and charlatans.
I do disagree with the notion that you have to slog through a problem to learn efficiently. That it's either "the easy way [bad, you don't learn]" or "the hard way [good, you do learn]" is a false dichotomy. Agents/LLMs are like having an always-on, highly adept teacher who can synthesize information in an intuitive way, and that you can explore a topic with. That's extremely efficient and effective for learning. There may be some tradeoff in some things, but this idea that LLMs make you not learn doesn't feel right; they allow you to learn _as much as you want and about the things that you want_, which wasn't possible before. You had to learn, inefficiently(!), a bunch of crap you didn't want to in order to learn the thing you _did_ want to. I will not miss those days.
I don't think you're saying the same thing. AI can help you get through the hard stuff efficiently and you'll learn. It acts as a guide, but you still do the work.
Completely offloading the hard work and just getting a summary isn't really learning.
> I’m shipping in hours what used to take days. Not prototypes. Real, structured, well-architected software.
> If I don’t understand what it’s doing, it doesn’t ship. That’s non-negotiable.
Holy LinkedIn
I am running local offline small models in the old fashioned REPL style, without any agentic features. One prompt at a time.
Instead of asking for answers, I ask for specific files to read or specific command line tools with specific options. I pipe the results to a file and then load it into the CLI session. Then I turn these commands into my own scripts and documentation (in Makefile).
I forbid the model from wandering off and giving me tons of irrelevant markdown text or generated scripts.
I ask straight questions and look for straight answers. One line at a time, one file at a time.
This gives me plenty of room to think what I want and how I get what I want.
Learning what we want and what we need to do to achieve it is the precious learning experience that we don’t want to offload to the machine.
> I ask straight questions and look for straight answers. One line at a time, one file at a time.
I've also taken to using the Socratic Method when interrogating an LLM. No loaded questions, squeaky clean session/context, no language that is easy to misinterpret. This has worked well for me. The information I need is in there, I just need to coax it back out.
I did exactly this for an exercise a while back. I wanted to learn Rust while coding a project and AI was invaluable for accelerating my learning. I needed to know completely off-the-wall things that involved translating idioms and practices from other languages. I also needed to know more about Rust idioms to solve specific problems and coding patterns. So I carefully asked these things, one at a time, rather than having it write the solution for me. I saved weeks if not months on that activity, and I'm at least dangerous at Rust now (still learning).
This. I'm also using an LLM very similarly and treat it like a knowledgeable co-worker I can ask for advice or to check something. I want to be the one applying changes to my codebase and then running the tests. OK, agents may improve the efficiency, but it's a slippery slope. I don't want to sit here all day watching agents modify and re-modify my codebase; I want to do this myself, because it's still fun, though not as much fun as it was pre-AI.
And you don't know what might trigger AI into overthinking. ;-)
https://gist.github.com/ontouchstart/bc301a60067f687b65dad64...
(This is an ongoing experiment, it doesn't matter what model I use.)
Lost me at "I’m building something right now. I won’t get into the details. You don’t give away the idea."
It’s kind of funny seeing all the AI hype guys talking about their 10 OpenClaw instances all running, doing work, and when you ask what it is, you can never get a straight answer.
For the record though, I love agentic coding. It deals with the accumulated cruft of software for me.
> It deals with the accumulated cruft of software for me.
And creates more at record speeds!
The work is mysterious and important.
Perhaps execution is cheap now and ideas aren't?
Personally I'm quite pleased with this inversion.
As someone else implied in their comment...
If execution no longer matters, then what possible ideas exist out there that both are highly valuable as well as only valuable to the first mover? If the second person to see the value in the idea can execute it in a weekend using AI tools, what value is there in the idea to begin with?
In fact the second-mover advantage seems to me to be even larger than before. Let someone else get the first version out the door, then you just point your AI bot at the resulting product to copy it in a fraction of the time it took the original person to execute on it.
If anything, ideas seem to be even cheaper to me in this new world. It probably just moves what bits of execution matter even more towards sales and marketing and hype vs. executing on the actual product itself.
I think there might be some interesting spaces opening up here where IP is combined with a physical product, where you need the idea as well as real-world practical manufacturing skills in order to execute. That will still be somewhat of a moat for a little while at least, but mostly at a scale where it's not worth it for an actual manufacturer in China to spin up a production line to compete with you.
Ideas are always cheap.
Eventually you will have to tell people what the idea is, even if it is at product launch. And then, if execution is as cheap and easy as they claim, anyone can replicate the idea without having to engage with the person in the first place.
Ideas will never not be cheap.
Fair enough. I know how that reads. But when anyone with a laptop and a subscription can ship production software in a weekend, the architecture and the idea start to matter a lot more. The technical details in the post are real. I just can't share the what yet. Take it or leave it.
This has been a fallacy for as long as businesses have been built, and it will still be a fallacy in the AI era.
Ideas are cheap and don't need to be protected. Your taste, execution, marketing, UX, support, and all the 1000 things that aren't the code still matter. The code will appear more quickly now: You still need to get people to use it or care about it.
I've found almost without fail that you have more to gain in sharing an idea and getting feedback (both positive and negative) before/while you build the thing than you do in protecting the idea with the fear that as soon as someone hears it they'll steal it and do it better than you.
(The exception I think is in highly competitive spaces where ideas have only a short lifetime -- eg High Frequency Trading / Wall Street in general. An idea for a trade can be worth $$ if done before someone else figures it out, and then it makes sense to protect the idea so you can make use of it first. But that's an extremely narrow domain.)
I don't think it's about ideas or even the code. It's about execution, marketing, talking to your customers and doing sales. This is something AI can't do...yet
I'm glad I am no longer in tech because I just don't want to do this.
This is not a dig at AI. If I take this article at face value, AI makes people more productive, assuming they have the taste and knowledge to steer their agents properly. And that's possibly a good thing even though it might have temporary negative side effects for the economy.
>But the AI is writing the traversal logic, the hashing layers, the watcher loops,
But unfortunately that's the stuff I like doing. And also I like communing with the computer: I don't want to delegate that to an agent (of course, like many engineers I put more and more layers between me and the computer, going from assembly to C to Java to Scala, but this seems like a bigger leap).
I'm a developer who was made redundant, and I'm now casting around for an entirely new job because, likewise, I have no interest in working with AI. It sounds boring, and the concept squicks me out, to be honest.
Out of interest what kind of fields are you looking at?
I expect there are going to be a bunch of people in similar situations to you over the next few years, I'm interested to know where they end up.
Might I ask how you make a living now?
I wish I had moved to a HCOL area earlier so I could have saved enough, fast enough, to be you. I thought it would take more time before the end...
Well, at least you will have lots of company (me included).
I work in tech, and I think the worst part is seeing all the pieces of catastrophe that have had to come together to make AI dominate.
There are several factors which are super depressing:
1. Economic productivity, and what it means for a company to be successful have become detached from producing good high quality products. The stock market is the endgame now
2. AI is attempting to strongly reject the notion that developers understanding their code is good. This is objectively wrong, but it's an intangible skill that makes developers hard to replace, which is why management is so desperate for it
3. Developers had too much individual power, and AI feels like a modern attempt at busting the power of the workforce rather than a genuine attempt at a productivity increase
4. It has always been possible to trade long term productivity for short term gains. Being a senior developer means understanding this tradeoff, and resisting management pressure to push something out NOW that will screw you over later
5. The only way AI saves time in the long term is if you don't review its output to understand it as well as if you'd written it yourself. Understanding the code, and the large-scale architecture, is critical. It's a negative time savings if you want to write high-long-term-productivity code, because we've introduced an extra step
6. Many developers simply do not care about writing good code unfortunately, you just crank out any ol' crap. As long as you don't get fired, you're doing your job well enough. Who cares about making a product anymore, it doesn't matter. AI lets you do a bad job with much less effort than before
7. None of this is working. AI is not causing projects to get pushed out faster. There are no good high quality AI projects. The quality of code is going down, not up. Open source software is getting screwed
It's an extension of the culture where performance doesn't matter. Windows is all made of React components which are each individually a web browser, because the quality of the end product no longer matters. Software just becomes shittier, because none of these companies actually care about their products. AAA gaming is a good example of this, as is Windows, Discord, anything Google makes, IBM, Intel, AMD's software, etc.
A lot of this is a US problem, because of the economic conditions over there and the prevalence of insane venture capitalism and union busting. I have a feeling that as the EU gets more independent and starts to become a software competitor, the US tech market is going to absolutely implode
> I'm glad I am no longer in tech because I just don't want to do this.
This is like how my grandpa said he was glad to get out of engineering before they started using computers.
The technology i used was the fun technology. The technology you use is the un-fun technology.
Right now I'm working two AI-jobs. I build agents for enterprises and I teach agent development at a university. So I'm probably too deep to see straight.
But I think the future of programming is English.
Agent frameworks are converging on a small set of core concepts: prompts, tools, RAG, agent-as-tool, agent handoff, and state/runcontext (an LLM-invisible KV store for sharing state across tools, sub-agents, and prompt templates).
These primitives, by themselves, can cover most low-UX application business use cases. And once your tooling can be one-shotted by a coding agent, you stop writing code entirely. The job becomes naming, describing, and instructing and then wiring those pieces together with something more akin to flow-chart programming.
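To make that concrete, here is a toy sketch of what those primitives reduce to (hypothetical names, deliberately framework-agnostic; not any real library's API):

```python
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Tool:
    name: str                      # naming
    description: str               # describing (what the LLM sees)
    fn: Callable[[str], str]       # a RAG retriever is just another Tool

@dataclass
class Agent:
    name: str
    instructions: str              # instructing (the system prompt)
    tools: list[Tool] = field(default_factory=list)
    state: dict = field(default_factory=dict)  # LLM-invisible KV run-context

    def as_tool(self) -> Tool:
        # agent-as-tool / handoff: a whole agent exposed as one callable step
        return Tool(self.name, self.instructions, self.run)

    def run(self, task: str) -> str:
        # A real framework would loop an LLM over tool calls here;
        # this stub only marks where that loop lives.
        return f"[{self.name}] would handle: {task}"

# Wiring is flow-chart programming: compose agents like functions.
researcher = Agent("researcher", "Find and summarize the relevant docs.")
writer = Agent("writer", "Draft the answer.", tools=[researcher.as_tool()])
print(writer.run("explain our refund policy"))
```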
So I think for most application development, the kind where you're solving a specific business problem, code stops being the relevant abstraction. Even Claude Code will feel too low-level for the median developer.
The next IDE looks like Google Docs.
You think prompting is here to stay? SQL has survived a long period of time. Servlets haven't. We moved from assembly to higher-level languages. Flash couldn't make it. So I'm not sure for how long we will be prompting. Sure, it looks great right now (just like Flash, servlets, and assembly looked back then), but I think another technology will emerge that is perhaps based on prompts behind the curtains but doesn't look like the current prompting.
I would say prompting is not here to stay. It’s just temporary “tech”
> The job becomes naming, describing, and instructing and then wiring those pieces together with something more akin to flow-chart programming.
That's precisely what people are bad at. If people don't grasp (even intuitively) the concept of a finite state machine and the difference between states and logic, LLMs are more like a wishing well (vibes) than a code generator (tooling for engineering).
Then there's the matter of technical knowledge. Software is layers of abstraction, and there's always more abstraction beneath. Not knowing those layers will limit your problem-solving capabilities.
Can you share a link to your agent class or another one you think is good?
Strangely we never hear gushing pieces on how great gcc is. If you have to advertise that much or recruit people with AI mania, perhaps your product isn't that great.
Maybe when they've also been doing their thing for almost 40 years, people will be past this phase for LLMs, too ;-)
You must be new to Hacker News. There have been plenty of pieces praising the GCC toolchain.
> But guided? The models can write better code than most developers. That’s the part people don’t want to sit with. When guided.
Where do you draw the line between just enough guidance vs. too much hand-holding of an agent? At some point, wouldn't it be better to just do it yourself and be done with the project (while also building your muscle memory, experience, and mental model for future projects, just like tons of regular devs have done in the past)?
The line is scope.
I'm not asking an agent to build me a full-stack app. That's where you end up babysitting it like a kindergartener and honestly you'd be faster doing it yourself. The way I use agents is focused, context-driven, one small task at a time.
For example: I need a function that takes a dependency graph, topologically sorts it, and returns the affected nodes when a given node changes. That's well-scoped. The agent writes it, I review it, done.
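(Illustrative only, not the actual reviewed code: a minimal version of that task using the standard library.)

```python
from graphlib import TopologicalSorter

def affected_nodes(deps: dict[str, set[str]], changed: str) -> list[str]:
    """deps maps each node to the nodes it depends on. Returns `changed`
    plus everything that transitively depends on it, in topological order."""
    # Invert the edges: who depends on each node?
    dependents: dict[str, set[str]] = {}
    for node, requirements in deps.items():
        for req in requirements:
            dependents.setdefault(req, set()).add(node)

    # Walk downstream from the changed node to collect the affected set.
    affected, frontier = {changed}, [changed]
    while frontier:
        for dep in dependents.get(frontier.pop(), ()):
            if dep not in affected:
                affected.add(dep)
                frontier.append(dep)

    # Order the affected subset; TopologicalSorter raises on cycles.
    subgraph = {n: deps.get(n, set()) & affected for n in affected}
    return list(TopologicalSorter(subgraph).static_order())

# build <- test <- deploy: changing "build" affects all three, in order.
print(affected_nodes({"test": {"build"}, "deploy": {"test"}}, "build"))
# ['build', 'test', 'deploy']
```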
But say I'm debugging a connection pool leak in Postgres where connections aren't being released back under load because a transaction is left open inside a retry loop. I'm not handing that to an agent. I already know our system. I know which service is misbehaving, I know the ORM layer, I know where the connection lifecycle is managed. The context needed to guide the agent properly would take longer to write than just opening the code and tracing it myself.
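(For anyone who hasn't hit that particular bug, the shape of it is roughly the sketch below; toy pool and made-up names, not the real service.)

```python
class Pool:
    """Toy stand-in for a DB connection pool."""
    def __init__(self, size: int):
        self.free = size
    def acquire(self):
        assert self.free > 0, "pool exhausted: connections were leaked"
        self.free -= 1
        return object()  # stands in for a connection holding an open transaction
    def release(self, conn):
        self.free += 1

pool = Pool(size=2)

def flaky_query(conn):
    raise TimeoutError  # simulates the ORM call failing under load

def handler_with_leak():
    for attempt in range(3):        # the retry loop
        conn = pool.acquire()       # BEGINs a transaction in the real system
        try:
            return flaky_query(conn)
        except TimeoutError:
            continue                # BUG: retries without pool.release(conn),
                                    # so the open transaction never goes back

try:
    handler_with_leak()
except AssertionError as e:
    print(e)  # third acquire fails: the pool has run dry

# The fix is a try/finally (or context manager) so every attempt
# releases its connection before retrying.
```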
That's the line. If the context you'd need to provide is larger than the task itself, just do it. If the task is well-defined and the output is easy to verify, let the agent rip.
The muscle memory point is real though. I still hand-write code when I'm learning something new or exploring a space I don't understand yet. AI is terrible for building intuition in unfamiliar territory because you can't evaluate output you don't understand. But for mundane scaffolding, boilerplate, things that repeat? I don't. Life's too short to hand-write your 50th REST handler.
I don't agree with the headline of "we're all AI engineers now", but I do agree that AI is more of a multiplier than anything. If you know what you're doing, you go faster, if you don't, you're just making a mess at a record pace.
I'm not sure how this sustains, though; like, I can't help but think this technology is going to dull a lot of people's skills, and other people just aren't going to develop skills in the first place. I have a feeling a couple years from now this is going to be a disaster (I don't think AGI is going to happen, and I think the tools are going to get a lot more expensive when they start charging the true costs).
The issue is that you become lazy after a while and stop "leading the design". And I think that's OK, because most of the code is just throwaway code. You would rewrite your project/app several times by the time it's worth it to pay attention to "proper" architecture. I wish I had these AIs 10 years ago so that I could have focused on everything I wanted to build instead of becoming a framework developer/engineer.
I agree. I've gotten lazier over time too. But the cost of creating code is so cheap... it's now less important to be perfect the first time the code hits prod (application dependent). It can be rewritten from scratch in no time. The bar for 'maintainability' is a lot lower now, because the AI has more capacity and persistence to maintain terrible code.
I'm sure plenty of people disagree with me. But I'm a good hand programmer, and I just don't feel the need to do that any more. I got into this to build things for other people, and AI is letting me do that more efficiently. Yes, I've had to give up a puritan approach to code quality.
> I wish I had these AIs 10 years ago so that I could have focused on everything I wanted to build instead of becoming a framework developer/engineer.
I think frameworks (especially those that have testing built-in) are even more important as guardrails now.
I've been programming for literally my entire life. I love it, it's part of me, and there hasn't been more than a week in 30 years that I haven't written some code.
This is the first time that I feel a level of anxiety when I am not actively doing it. What a crazy shift that I am still so excited and enamored by the process after all of this time.
But there's also the double edged sword. I am also having a really hard time moderating my working hours, which I naturally struggle with anyway, even more. Partly because I am having so much fun and being so productive. But also because it's just so tempting to add 1 more feature, fix one more bug.
>I think we all might be AI Engineers now, and I’m not sure how I feel about that.
Except the rest of the article strongly implies he feels pretty good about it, assuming you can properly supervise your agents.
The only way I see out of this crisis (yes I'm not on the token-using side of this) is strict liability for companies making software products (just like in the physical world). Then it doesn't matter if the token-generator spits out code or a software engineer spits out code - the company's incentives are aligned such that if something breaks it's on them to fix it and sort out any externalities caused. This will probably mean no vibe-coded side hustles but I personally am OK with that.
I think this is coming, alongside professional licensure for "software engineers". Every public-facing project will need someone to put a literal stamp of approval on the code, and regardless whether Claude or Codex wrote the bulk of it, it'll be that person's head on a pike when something goes wrong.
This isn't what many of us probably would have wanted, but I think the public blowback when "AI-coded" systems start failing is going to drive us there. (Note to passing hype-men: I did not say they will fail at higher rates than human-coded systems! I happen to believe this, but it is not germane to the argument - only the public perception matters here.)
So far the issue for me is that you can generate more crap by far than you can keep an eye on.
Once you have your 50k-line program that does X, are you really going to go in there and deeply review everything? I think you're going to end up taking more and more on trust until the point where you're hostage to the AI.
I think this is what happens to managers of course - becoming hostage to developers - but which is worse? I'm not sure.
> I can still reverse a binary tree without an LLM. I can still reason about time complexity, debug a race condition by reading the code, trace a memory leak by thinking.
All your incantations can't protect you
A 2026 AI Engineer is a 1996 Software Architect. I don't need to be the one manually implementing the individual widgets of a system, I can delegate their implementation to developers (agents).
I'm being a little facetious, but I don't think it's far off the mark from what TFA is saying, and it matches my experience over the past few months. The worst architects we ever worked with were the ones who couldn't actually implement anything from scratch. Like TFA says, if you've got the fundamentals down and you want to see how far you can go with these new tools, play the role of architect for a change and let the agents fly.
I’ve always designed systems along the classic path: requirements → use cases → schematization. With AI, I continue in the same spirit (structure precedes prompting), but now the foundational layer of my systems is axioms and constraints, and the architecture emerges through structured prompts. AI, in this shift, is an aide in building systems that are logically grounded. This is where the “all of us as AI engineers” claim becomes subtle. Yes, anyone can generate code, but real engineering remains about judgment and structure. AI amplifies throughput, but the bottleneck is still problem framing, abstraction choice, and trade-off reasoning.
Saw the edit: I think that clarification was important. The core point resonates with me personally. The shift isn't about writing less code, it's about where the real judgment lives. Knowing what to build, how to decompose a problem, which patterns to reach for - and critically, when the model is confidently wrong. Without that foundation you're not moving faster, you're just making bad decisions faster. The scope point resonates too. Small, well-defined tasks with verifiable output is where agents actually shine.
Without writing some code, how will people really know what's right? I've supervised people before: one thinks one knows best and pontificates at them, and then when one actually starts working in the codebase oneself, many issues become clear. If you never get your hands dirty, your decisions will tend towards badness.
The perception seems to be that AI is only causing security vulnerabilities (see: openclaw injection in npm (Clinejection)). But the article's optimistic tone much reflects my own, and if it were all bad, then nobody would be using AI. But it's mostly good, and with the benchmarks, it's a statistical fact that it helps more than it hurts. It's just math at a certain point.
> Building systems that supervise AI agents, training models, wiring up pipelines where the AI does the heavy lifting and I do the thinking. Honestly? I’m having more fun than ever.
I'm sure some people are having fun that way.
But I'm also sure some people don't like to play with systems that produce fuzzy outputs and break in unexpected moments, even though overall they are a net win. It's almost as if you're dealing with humans. Some people just prefer to sit in a room and think, and they now feel this is taken away from them.
Right. What about K.I.S.S. (Keep It Simple, Stupid)? If I need a bunch of agents and various levels of orchestration simply to close a bunch of Jira tasks, then we have a problem. Also, what happens in a few years when this starts failing and human operators are no longer able to troubleshoot the issue, forget fixing it?
I'm just an old school programmer who loves writing code, and the recent AI developments have just taken the most fun part away from me.
I get this. I don't think either of you is wrong. There's a real loss in not writing something from scratch and feeling it come together under your hands. I'm not dismissing that.
I have immense respect for the senior engineers who came before me. They built the systems and the thinking that everything I do now sits on top of. I learned from people. Not from AI. The engineers who reviewed my terrible pull requests, the ones who sat with me and explained why my approach was wrong. That's irreplaceable. The article is about where I think things are going, not about what everyone should enjoy.
And "taking the fun out" is one thing. Making 50% or more of coders redandunt is a whole other can of worms.
fr, like, I started learning C/C++ in 2020 at age 9, and in 2023 when the AI bubble took off, it felt like I did it all for nothing
Very much on the same page as the author, I think AI is a phenomenal accelerant.
If you're going in the right direction, acceleration is very useful. It rewards those who know what they're doing, certainly. What's maybe being left out is that, over a large enough distribution, it's going to accelerate people who are accidentally going in the right direction, too.
There's a baseline value in going fast.
>There's a baseline value in going fast.
Maybe to the people writing the invoices for the infra you're renting, sure. Or to the people who get paid to dig you out of the consequences you inevitably bring about. Remember, the faster the timescale, the worse we are wired to effectively handle it as human beings. We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
> We're playing with a fire that catches and spreads so fast, by the time anyone realizes the forest is catching and starting to react, the entire forest is already well on the way to joining in the blaze.
I suspect this has been said in one form or another since the discovery of fire itself.
Maybe I'm entirely out of the loop and a complete idiot, but I am really not sure at all what people mean when they talk about this stuff. I use AI agents every day, but people who say they spend 'most of my time writing agents and tools' must be living in an absolutely different world.
I don't understand how people are making anything that has any level of usefulness without a feedback loop with them at the center. My agents often can go off for a few minutes, maybe 10, and write some feature. Half of the time they will get it wrong, I realize I prompted wrong, and I will have to re-do it myself or re-do the prompt. A quarter of the time, they have no idea what they're doing, and I realize I can fix the issue that they're writing a thousand lines for with a single line change. The final quarter of the time I need to follow up and refine their solution either manually or through additional prompting.
That's also only a small portion of my time... The rest is curating data (which you've pretty much got to do manually), writing code by hand (gasp!), working on deployments, and discussing with actual people.
Maybe this is a limitation of the models, but I don't think so. To get to the vision in my head, there needs to be a feedback loop... Or are people just willing to abdicate that vision-making to the model? If you do that, how do you know you're solving the problem you actually want to?
No we can't, because the teams are being reduced in headcount to the few lucky ones allowed to wear the AI hat.
This essay somehow sounds worse than AI slop, like ChatGPT did a line of coke before writing this out.
I use AI everyday for coding. But if someone so obviously puts this little effort into their work that they put out into the world, I don’t think I trust them to do it properly when they’re writing code.
I wrote it myself. But the irony isn't lost on me. "Who did what" is kind of the whole point of the article. Appreciate the feedback.
FWIW I reported your post to the mods because it reads completely AI generated to me. My judgement was that it might have been slightly edited but is largely verbatim LLM output.
Some tells that you might wanna look at in your writing, if you truly did write it yourself without any LLM input, are these contrarian/pivoting statements. Your post is full of these and it is imo the most classic LLM writing tell atm. These are mostly variants of the "It's not X but Y" theme:
- "Not whether they've adopted every tool, but whether they're curious"
- "I still drive the intuition. The agents just execute at a speed I never could alone."
- "The model doesn't save you from bad decisions. It just helps you make them faster."
- "That foundation isn't decoration. It's the reason the AI is useful to me in the first place."
- "That's not prompting. That's engineering"
It is also telling that the reader basically can't take a breather; most of the sentences try to emphasize harder than the last one. There is no fluff though, no getting sidetracked. It reads unnaturally; humans do not usually think like this.
FWIW I thought it read fine and enjoyed the take. As I'm exploring more AI tooling I'm asking myself some of the same questions.
Yours is maybe the first good post on managing a team of AIs that I've read. There is no spoon.
I've been shifting from being the know-it-all coder who fixes all of the problems to a middle manager of AIs over the past few months. I'm realizing that most of what I've been doing for the last 25 years of my career has largely been a waste of time, due to how the web went from being an academic pursuit to a profit-driven one. We stopped caring about how the sausage was made, and just rewarded profit under a results-driven economic model. And those results have been self-evidently disastrous for anyone who cares about process or leverage IMHO. So I ended up being a custodian solving other people's mistakes which I would never make, rather than architecting elegant greenfield solutions.
For example, we went from HTML being a declarative markup language to something imperative. Now, rather than designing websites like we were writing them in Microsoft Word and exporting them to HTML, we write C-like code directly in the build product and pretend that's as easy as WYSIWYG. We have React where we once had content management systems (CMSs). We have service-oriented architectures rather than solving scalability issues at the runtime level. I could go on... forever. And I have, in countless comments on HN.
None of that matters now, because AI handles the implementation details. Now it's about executive function to orchestrate the work. An area I'm finding that I'm exceptionally weak in, due to a lifetime of skirting burnout as I endlessly put out fires without the option to rest.
So I think the challenge now is to unlearn everything we've learned. Somehow, we must remember why we started down this road in the first place. I'm hopeful that AI will facilitate that.
Anyway, I'm sure there was a point I was making somewhere in this, but I forgot what it was. So this is more of a "you're not alone in this" comment I guess.
Edit: I remembered my point. For kids these days immersed in this tech matrix we let consume our psyche, it's hard to realize that other paradigms exist. Much easier to label thinking outside the box as slop. In the age of tweets, I mean x's or whatever the heck they are now, long-form writing looks sus! Man I feel old.
Yeah, I came here to ask if you're Vibe Writing as well ;)
I wasn't quite sure though. Sometimes it's clearly GPT, sometimes clearly Claude, and this article was like a blend.
“Hey AI, clone yourself”
We’re getting there..
I’ve kind of done this. To an extent.
“Hey Claude, you have a bunch of skills defined, some mcps, and memory filled with useful stuff. I want to use you on a machine accessible over SSH at <host>, can you clone yourself over?”
> The problem is: you can’t justify this throughput to someone who doesn’t understand real software engineering. They see the output and think “well the AI did it.” No. The AI executed it. I designed it. I knew what to ask for, how to decompose the problem, what patterns to use, when the model was going off track, and how to correct it. That’s not prompting. That’s engineering.
That’s the “money quote,” for me. Often, I’m the one that causes the problem, because of errors in prompting. Sometimes, the AI catches it, sometimes, it goes into the ditch, and I need to call for a tow.
The big deal is that I can considerably “up my game” and get a lot done, alone. The velocity is kind of jaw-dropping.
I’m not [yet] at the level of the author, and tend to follow a more “synchronous” path, but I’m seeing similar results (and enjoying myself).
There are two types of engineers who use AI:
- Ones who see it generated something bad, and blame the AI.
- Ones who see it generated something bad, and revert it and try to prompt better, with more clarity and guidance.
- Ones who see it generated something bad, and realise it'd be faster to just hand fix the issues than babysit an LLM
Three types:
- Ones that use it as a “pair partner,” as opposed to an employee.
Thanks for the implicit insult. That was helpful.
I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.
That would've taken me 3 months a year ago, just to learn the syntax and evaluate competing options. Now I can get sccache working in a day, find it doesn't scale well, and replace it with recc + buildbarn. And ask the AI questions like whether we should be sharding the CAS storage.
The downside is the AI is always pushing me towards half-assed solutions that don't solve the problem, like just setting up distributed caching instead of compilation. It also keeps lying, which requires me to redirect & audit its work. But I'm also learning much more than I ever could without AI.
I hope we get a follow-up in six months or a year as to how this all went.
> that would've taken me 3 months a year ago, just to learn the syntax
This is hyperbole, right? In what world does it take 3 months to learn the syntax of anything? 3 days is more than enough time.
You perhaps just introduced one more moving part that you don't understand well, instead of thinking of a simpler solution.
> I vibe coded a Kubernetes cluster in 2 days for a distributed compilation setup. I've never touched half this stuff before. Now I have a proof of concept that'll change my whole organization.
Dunning-Kruger as a service. Thank God software engineers are not in charge of building bridges.
Looking forward to your post-mortem.
It sounds a bit no-true-scotsman to me.
I think he is absolutely right. But what if he is not right? Then he is also absolutely right. He is just always absolutely right, right? Even when he is not right? Yes, he is always absolutely right.
I agree wholeheartedly with all that is said in this article. When guided, AI amplifies the productivity of experts immensely.
There are two problems left, though.
One is, laypersons don't understand the difference between "guided" and "vibe coded". This shouldn't matter, but it does, because in most organizations managers are laypersons who don't know anything about coding whatsoever, aren't interested in the topic at all, and think developers are interchangeable.
The other problem is, how do you develop those instincts when you're starting up, now that AI is a better junior coder than most junior coders? This is something one needs to think about hard as a society. We old farts are going to be fine, but we're eventually going to die (retire first, if we're lucky; then die).
What comes after? How do we produce experts in the age of AI?
The instincts can absolutely be developed faster with AI — if you set it up right. I work with an AI partner daily and one thing I've noticed is that it's a brutal mirror: it exposes gaps in your thinking immediately because it does exactly what you tell it, not what you meant.
That feedback loop, hundreds of times a day, compresses years of learning into months. The catch is you need guardrails — tests that fail when the AI drifts, review cycles you can't skip, architecture constraints it must respect.
That's what builds the instincts: not the AI doing the work for you, but the AI showing you where your understanding breaks down, fast enough that you actually learn from it. Just-In-Time Learning.
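As a concrete (made-up) example of that kind of guardrail, a characterization test that pins behavior the AI must not drift from:

```python
def apply_discount(price: float, pct: float) -> float:
    """The function under the agent's care (hypothetical)."""
    if not 0 <= pct <= 100:
        raise ValueError("pct out of range")
    return round(price * (1 - pct / 100), 2)

def test_discount_invariants():
    assert apply_discount(100.0, 25) == 75.0    # pinned known-good output
    assert apply_discount(100.0, 0) == 100.0    # identity edge case
    try:
        apply_discount(100.0, 150)              # invariant: reject bad input
    except ValueError:
        pass
    else:
        raise AssertionError("out-of-range pct must raise")

test_discount_invariants()  # run on every agent iteration, not just in CI
print("guardrails hold")
```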
This is the question I keep coming back to. I don't have a clean answer yet.
The foundation I built came from years of writing bad code and understanding why it was bad. I look at code I wrote 10 years ago and it's genuinely terrible. But that's the point. It took time, feedback, reading books, reviewing other people's work, failing, and slowly building the instinct for what good looks like. That process can't be skipped.
If AI shortens the path to output, educators have to double down on the fundamentals. Data structures, systems thinking, understanding why things break. Not because everyone needs to hand-write a linked list forever, but because without that foundation you can't tell when the AI is wrong. You can't course-correct what you don't understand.
Anyone can break into tech. That's a good thing. But if someone becomes a purely vibe-coding engineer with no depth, that's not on them. That's on the companies and institutions that didn't evaluate for the right things. We studied these fundamentals for a reason. That reason didn't go away just because the tools got better.
I think the problem is overstated.
People always learn the things they need to learn.
Were people clutching their pearls about how programmers were going to lack the fundamentals of assembly language after compilers came along? Probably, but it turned out fine.
People who need to program in assembly language still do. People who need to touch low-level things probably understand some of it but not as deeply. Most of us never need to worry about it.
I don't think the comparison (that's often made) between AI and compilers is valid though.
A compiler is deterministic. It's a function; it transforms input into output and validates it in the process. If the input is incorrect it simply throws an error.
AI doesn't validate anything, and transforms a vague input into a vague output, in a non-deterministic way.
A compiler can be declared bug-free, at least in theory.
But it doesn't mean anything to say that the chain 'prompt-LLM-code' is or isn't "correct". It's undecidable.
>People always learn the things they need to learn.
No, they don't. Which is why a huge % of people are functionally illiterate at the moment, know nothing about finance and statistics and such, and are making horrendous decisions for their future and their bottom line, and so on.
There is also such a thing as technical knowledge loss between generations.
Finally a take that I can agree with.
I would think an AI engineer is one who is, you know, engineering AI.
We might all be AI users now, though.
I find it really sad how stubbornly people dismiss AI as a slop generator. I completely agree with the author: once you spend the time building a good enough harness, oh boy, you start getting those sweet gains. It takes a lot of time and effort, but it is absolutely worth it.
Personally, I dismiss AI, mainly agentic AI, because of its environmental impact. I hope that one day everyone will be held accountable for it.
What about the environmental impact of AI, especially agentic AI? I keep reading praise for AI on the orange site, but its environmental impact is rarely discussed. It seems that everyone has already adopted this technology, which is destroying our world a little more.
Environmental impact is overstated. If you've ever looked at the numbers vs. your daily carbon impact, you'd realize this.
I believe the orange site's consensus was that it's approximately one additional mini-fridge or dishwasher's worth of consumption on average. You've got users who barely use these tools 1k tokens per week. Assuming it's all batched ideally, that's like running an LED floodlight for a minute or so. The other end of the spectrum can be pretty extreme in consumption, but it's also rare. Most people just use the ad hoc stuff.
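Back-of-envelope check on the floodlight comparison (the per-query energy figure is an assumption based on commonly cited estimates, not a measured number):

```python
wh_per_1k_tokens = 0.3     # assumed: ~0.3 Wh per chat-sized query
floodlight_watts = 20      # a small LED floodlight
seconds = wh_per_1k_tokens / floodlight_watts * 3600
print(f"{seconds:.0f} s")  # ~54 s: about a minute, as claimed
```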
The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
And then add on top the environmental impact of all of the money that programmer gets from programming - travels around the world, buying large houses, ...
If you care about the environment, you should want AIs replacing humans at most jobs so that they can no longer afford traveling around the world and buying extravagant stuff.
Yes, the environmental impact of an AI agent performing a given task is lower. However, we will not simply replace every programmer with an agent: in the process we will use more agents, exceeding the previous environmental impact of the humans. This is the rebound effect [0].
Your reasoning could be effective if we bounded the computing resources usable by all AI in order to meet carbon reduction goals.
[0] https://en.wikipedia.org/wiki/Rebound_effect_(conservation)
>The environmental impact of AI replacing a human programmer is orders of magnitude lower than the environmental impact of that programmer. Look up average US water consumption and CO2 emissions per capita.
The programmer will continue to exist as a consumer of those things even if they get replaced by AI in their job.
So you mean that human programmers who were replaced by AI are dead by now?
"You'll be fine digging trenches, programmer", they said.
Seriously, though:
...so that they can no longer afford traveling around the world...
This is either a sarcasm I failed to parse, or pure technofascism.
this is genocidal, on a human-wide scale.
All environmental impacts are equal, but some of them are more equal than the others!
This comes from a dystopian book (Animal Farm). What is your point?
The phrase "shape up or ship out" is an apt one I've heard. Agentic AI is a core part of software engineering. Either you are learning and using these tools, or you're not a professional and don't belong in the field.
Seems strange; for decades we allowed developers to use what made them comfortable. You like Notepad? Go ahead and use it. Don't want an LSP? That's fine, disable it.
So long as their productivity was on par with the rest of the team there was no issue.
Suddenly, everyone needs to use this new tool (which we haven't proven to actually be effective), and if you don't, you don't belong in the industry.
> So long as their productivity was on par with the rest of the team there was no issue.
Emphasis added. And anyway, for most software dev in most shops it wasn't true; most development takes place in whatever IDE the group/organization standardized on for the task, to make sure everyone gets proper tooling and to make collaboration and information sharing easier. Think of all the Java enterprise software developed by legions of drones in the 2000s and 2010s. They all used Eclipse, because Eclipse is what they were given.
It's only with the emergence of whiny, persnickety Unix devs who refused to leave the comforting embrace of their editor of choice that shops in the internet/dotcom/startup tradition embraced a "use whatever tools you want" philosophy. They had uncharacteristically enormous leverage over the tech stack being deployed in such businesses and could force employers to make that concession. And anyway, what some of them could do with vi blew the boss's mind.
It is true that we don't have a whole lot of hard data from large organizations that show AI productivity improvements. But absence of evidence is not evidence of absence. Turns out, most large organizations just haven't adopted AI in the amount and ways that could make a big impact.
But we have enough anecdata from competent developers to suggest that the productivity gains are huge. So big, AI not only lets you do your normal tasks many times faster, it puts projects within reach that you would not have countenanced before because they were too complex or tedious to be worth the payoff.
So no. Refusing to use AI is just pure bloodymindedness at this point—like insisting on using a keypunch while everyone around you discovers the virtues of CRT terminals and timesharing. There were people like this even in the 1970s when IBM finally came around and made timesharing available in their mainframes. Those people either got up to speed or moved on to a different profession. They couldn't keep working the way they'd been working because the productivity expectations changed with the availability of new technology.
What an incredibly stupid, tasteless, reductionist opinion. Go log off for a while and reevaluate your life.