Comment by _dwt

13 hours ago

I am going to try to put this kindly: it is very glib, and people will find it offensive and obnoxious, to implicitly round off all resistance or skepticism to incuriosity. Perhaps to alienate AI critics even further is the goal, in which case - carry on.

But if you are genuinely confused by the attitudes of your peers, try asking not "what do I have that they lack" ("curiosity"?) but "what do they see that I don't" or "what do they care about that I don't"? Is it possible that they are not enthusiastic for the change in the nature of the work? Is it possible they are concerned about "automation complacency" setting in, precisely _because_ of the ratio of "hundreds of times" writing decent code to the one time writing "something stupid", and fear that every once in a while that "something stupid" will slip past them in a way that wipes the entire net gain of AI use? Is it possible that they _don't_ feel that the typical code is "better than most engineers can write"? Is it possible they feel that the "learning" is mostly ephemera - how much "prompt engineering" advice from a year ago still holds today?

You have a choice, and it's easy to label them (us?) as Luddites clinging to the old ways out of fear, stupidity, or "incuriosity". If you really want to understand, or even change some minds, though, please try to ask these people what they're really thinking, and listen.

My feeling is that the code it generates is locally ok, but globally kind of bad. What I mean is, in a diff it looks ok. But when you start comparing it to the surrounding code, there's a pretty big lack of coherency and it'll happily march down a very bad architectural path.

In fairness, this is true of many human developers too... but they're generally not doing it at 1,000 miles per hour, and they theoretically learn and get better at working with your codebase. LLMs will always get worse as your codebase grows, and I just watched a video about how AGENTS.md usually results in worse outcomes, so it's not like you can just start treating MD files as memory and hope it works out.

  • > But when you start comparing it to the surrounding code, there's a pretty big lack of coherency and it'll happily march down a very bad architectural path.

    I had an idea earlier this week about this, but haven’t had a chance to try it. Since the agent can now “see” the whole stack, or at least most of it, by having access to the repos, there’s less and less reason to suspect it won’t be able to take the whole stack into account when proposing a change.

    The idea is that it’s like grep: you can call grep by itself, but when a match is found you only see one line per match, not any surrounding context. But that’s what the -A and -B flags are for!

    So you could tell the agent that if its proposed solution lies at layer N of the system, it needs to consider at least layers N-1 (dependencies) and N+1 (consumers) to prevent the local optimum problem you mentioned.

    The model should avoid writing a pretty solution in the application layer that conceals and does not address a deeper issue below, and it should keep whatever contract it has with higher-level consumers in good standing.

    Anyway, I haven’t tried that yet, but hope to next week. Maybe someone else has done something similar and (in)validated it, not sure!
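
    To make the grep analogy concrete (a small illustration using a hypothetical five-line input; the `-A`/`-B` context flags are standard in GNU and BSD grep):

    ```shell
    # A bare grep shows only the matching line, with no surrounding context.
    printf 'setup\nquery\nmatch here\ncleanup\ndone\n' | grep 'match'
    # -> match here

    # -B1/-A1 print one line of context before and after each match --
    # the analogue of considering layers N-1 and N+1 around a change.
    printf 'setup\nquery\nmatch here\ncleanup\ndone\n' | grep -B1 -A1 'match'
    # -> query
    #    match here
    #    cleanup
    ```

    The same shape carries over to the agent prompt: "show me the match" becomes "show me the match plus the layer above and below it."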

I don't think that people who don't want to use these tools, or who cling to old ways, are incurious. But I think these developers should face the fact that the skills and the ways they are reticent to give up are more or less obviated at this point. Not in the future, but now. It's just that the adoption of these tools isn't evenly distributed yet.

I think there's a place for thoughtful dialogue around what this means for software engineering, but I don't think that's going to change anything at this point. If developers just don't want to participate in this new world, for whatever reason, I'm not judging them, but I also don't think the genie is going back in the bottle. There will be no movement to organize labor to protect us, and there will be no deus ex machina that is going to reverse course on this stuff.

  • > I think these developers should face the fact that those skills and those ways they are reticent to give up are more or less obviated at this point.

    Yes. We are this generation's highly skilled artisans, facing our own industrial revolution.

    Just as the skilled textile workers and weavers of early 19th-century Britain were correct when they argued the new automated product was vastly inferior, it matters not at all. And just as they were also correct that the government of the day was doing nothing to protect the lives and livelihoods of those who had spent decades mastering a difficult set of professional skills (the middle class of the day), the government of this day will also do nothing.

    And it doesn’t end with “IT”; anything that can be turned into a factory process with our new “thinking engines” will be. Perhaps we can do better in society this time around. I am not hopeful.

  • I'm using Claude every day, and it definitely makes me faster, but... I'm also able to give it a lot of very specific instructions and correct a lot of mistakes quickly because I look at the code and understand what it's doing; and I'm also asking it to write code in domains I understand. So I don't think these skills are obsolete at all. If anything, keeping them sharp is the only differentiator we have. "Agentic Engineering" is as much a joke as "Vibe Coding" is in my mind. The tools are powerful, but they don't make up for knowing how to code, and if you're just blindly trusting it, it's going to end badly.

    • >I'm using Claude every day, and it definitely makes me faster but..

      I see a lot of posts about this, and I see a lot of studies, also on HN, that show that this isn't the case.

      Now of course the "this isn't the case" finding is statistical, so there can be individual developers who are faster. It can also be that an individual developer is sometimes faster and sometimes not, but the times they are faster are so clearly faster that they hide the times they're not. Statistics of performance over a number of developers can flatten things out. But I don't know that that's the case.

      So my question for you, and everyone that claims it makes them so perceptively and clearly faster - how do you know? Given all the studies showing that it doesn't make you faster, how are you so sure it does?


  • Well, no, not with that attitude there won’t! I am not trying to insinuate that there is a conspiracy, or that posts like yours are part of it, but there has been a huge wave of posts and comments since February which narrow the Overton window to the distance between “it’s here and it’s great” and “I’m sad but it’s inevitable”.

    Humanity has possessed nuclear weapons for 80 years and has used them exactly twice in anger, at the very beginning of that span. We can in fact just NOT do things! Not every world-beating technology takes off, for one reason or another. Supersonic airliners. Eugenics. Betamax.

    The best time to air concerns was yesterday. The next best time is today. I think we technologists wildly overestimate public understanding and underestimate public distrust of our work and of “AI” specifically. We’ve got CEOs stating that LLMs are a bigger deal than nuclear weapons or fire(!) and yet getting upset that the government wants control of their use. We’ve got giddy thinkpieces from people (real example from LinkedIn!) who believe we’ll hit 100% white collar unemployment in 5 years and wrap up by saying they’re “5% nervous and 95% excited”. If that’s what they really think, and how they really feel, it’s psychopathic! Those numbers get you a social scene that’ll make the French Revolution look like a tea party. (“And honestly? I’m here for it.”)

    So no, while I _think_ you’re correct, I don’t accept the inevitability of it all. There are possibilities I don’t want to see closed off (maybe data finally really is the new oil, and that’s the basis for a planetary sovereign wealth fund. Maybe every man, woman, and child who ever wrote a book or a program or an internet comment deserves a royalty check in the mail each month!) just yet.

    • > We can in fact just NOT do things!

      I agree with you on that. Not just on AI but a lot of things that suck about this world, and in particular the United States. But capital is too powerful. And these tools are legitimately transformative for business. And business pays our bills and, more importantly, provides the healthcare insurance for our families. The wheel is a real fucking drag isn't it?

      I don't see anything short of a larger revolution against capital stopping or even stemming this. For that to really happen we would need a lot more people and interests than just those of software practitioners. Which may come yet when trucking jobs collapse and customer service jobs disappear. I don't know. I do know that I'm taking part in something that will potentially (likely?) seed the end of my career as I know it but it's just one of many contradictions that I live with. In the meantime the tools are impressive and I'm just figuring out how to live with them and do good work with them and as you can probably tell, I'm pretty convinced that's the best we can make of the situation right now.

    • > We can in fact just NOT do things!

      100% this. I don't know why we think that pouring trillions of dollars into something we barely understand, to create an economic revolution that is almost certainly awful, is at all "inevitable". We just need leaders that aren't complete idiots. I'm generally cynical, but I do see that normies (i.e. not in tech) are waking up a bit. I don't think the technology is inherently a bad thing, but the people who think we should just do this as fast as possible to win "the race" should be shot into space as far as I'm concerned. To start with, we need a working SEC that can actually punish the grifting CEOs that are using fear to manipulate markets.

  • I'm still going to need at least one of my vendors to speed up their release pace before I'll believe that. I'm seeing a ton of churn and no actual new product.

  • A new technology comes out — admittedly one that’s extraordinarily capable at some things — and suddenly conventional software engineering is “more or less obviated at this point”? I’m sorry, but that’s really fucking dumb. Do you think LLMs are actually intelligent? Do you think their capabilities exceed the quality of their training corpus? Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough? After all, what’s code but a bunch of boring glue and other crap that’s used to prop up a product idea until a few bucks can be extracted from it?

    Of course, there’s nothing wiser than tying the entirety of your career to a $20/month subscription (that will jump 10x in price as soon as the market is captured).

    Is writing solved because LLMs can make something decently readable? Why say anything at all when LLMs can glob your ideas into a glitzy article in a couple of seconds?

    I swear, some people in this field see no value in their programming work — like they’ve been dying to be product managers their entire lives. It is honestly baffling to me. All I see is a future full of horrifying security holes, heisenbugs, and performance regressions that absolutely no one understands. The Idiocracy of software. Fuck!

    • > Is there no longer any need to think about new software paradigms, build new frameworks, study computer science, because the regurgitated statistical version of programming is entirely good enough?

      All I'm saying is you're gonna have to figure out how to do this with an agent. It's not that I don't see value in the craft; it's just that that value is less important. As for the new paradigms, the new frameworks, new studies in computer science -- they still exist; it's just that they are going to focus on how to mitigate heisenbugs, performance regressions, and security holes in agent-written code. Who knows... in five years most of the code written may not even be readable. I'm not saying it's going to be like that, but it's entirely possible.

      In the meantime, there's nothing stopping you from using the agent to write the code that is every bit as high quality as if you sat down and typed it in yourself. And right now there is a category of engineers that exclusively use agents to create quality software and they are more efficient at it than anybody that just does it themselves. And that category is growing and growing every day.

      I may be out of a job in five years because of all of this. But I am seeing where this is going, and it's clear, and so I'm going to have to change with it.


    • Lol. I'm a CEO and I've revamped my hiring process so that it has nothing to do with writing code.

      I test to see the way people think now. People like you would pass my interview.

It's important to point out that you're the one working hard to define AI critics as a camp/group/class when a stronger argument can be made that we're all in the same camp/group/class. I use agentic LLMs for coding every day and I think that it's incredibly important to maintain a critical lens and be open to changing our minds.

However, history suggests that creating artificial divisions is the first step towards all of the bad things we claim not to like in this world.

Tech adoption generally moves like Time's Arrow. People who use LLMs aren't geeks who changed; we're just geeks. If you want to get off the train, that's your call. But don't make it an us vs them.

Underlying this and similar arguments is the presumption that the "old way" was perfect. You and your colleagues weren't making just one mistake per 100 successful commits. I have been in the industry for decades, and I can tell you that I do something stupid when writing code manually quite often. The same goes for the people I work with. So fear that the LLM will make mistakes can't really be the reason. Or if it is the reason, it isn't a reasonable objection.

I read the parent comment as calling the majority of AI users "incurious", not as referring to those of us who resist AI for whatever reasons. The curious AI users can obtain self-improvement; the incurious ones want money, or at least custom software, without caring how it's made.

I don't want the means of production to be located inside companies that can only exist on a steady bubble of VC dollars. It's perfectly reasonable to try AI or use it sparingly, but not to embrace it, for reasons that can be articulated. Not relevant to the parent commenter's point, though. Maybe you are "replying" to the article?

Time and time again, I observe that it is the AI skeptic who is not reacting with curiosity. This is almost true by definition: in order to understand a new technology you need to be curious about it, so AI will naturally draw people who are curious, because you have to be curious to learn something new.

When I engage with AI skeptics and I "ask these people what they're really thinking, and listen", they say something totally absurd, like that GPT 3.5-turbo and Opus 4.6 are interchangeable, or they question my ability as an engineer, or call me a "liar" for claiming that an agent can work for an hour unprompted (something I do virtually every day). This isn't even me picking the worst of it; this is pretty much a typical conversation I have on HN, and you can go through my comment history to verify I have not resorted to hyperbole.

  • AI will naturally draw people who are lazy and not interested in learning.

    It's like flipping through a math book and nodding to yourself when you look at the answers and thinking you're learning. But really you aren't because the real learning requires actually doing it and solving and struggling through the problems yourself.

    • This is just completely inaccurate. There is more to learn now than ever before, and I find myself spending more and more time teaching myself things that I never before would have been able to find time to understand.


  • I'm sorry you've had that experience, and I agree there are a good share of "skeptics" who have latched on to anecdata or outdated experience or theorycrafting. I know it must feel like the goalposts are moving, too, when someone who was against AI on technical grounds last year has now discovered ethical qualms previously unevidenced. I spend a lot of time wondering if I've driven myself to my particular views exclusively out of motivated reasoning. (For what it's worth, I also think "motivated reasoning" is underrated - I am not obligated to kick my own ass out of obligation to "The Truth"!)

    That said, I _did_ read your comments history (only because you asked!) and - well, I don't know, you seem very reasonable, but I notice you're upset with people talking about "hallucinations" in code generation from Opus 4.6. Now, I have actually spent some time trying to understand these models (as tool or threat) and that means using them in realistic circumstances. I don't like the "H word" very much, because I am an orthodox Dijkstraist and I hold that anthropomorphizing computers and algorithms is always a mistake. But I will say that like you, I have found that in appropriate context (types, tests) I don't get calls to non-existent functions, etc. However, I have seen: incorrect descriptions of numerical algorithms or their parameters, gaslighting and "failed fix loops" due to missing a "copy the compiled artifact to the testing directory" step, and other things which I consider at least "hallucination-adjacent". I am personally much more concerned about "hallucinations" and bad assumptions smuggled in the explanations provided, choice of algorithms and modeling strategies, etc. because I deal with some fairly subtle domain-specific calculations and (mathematical) models. The should-be domain experts a) aren't always and b) tend to be "enthusiasts" who will implicitly trust the talking genius computer.

    For what it's worth, my personal concerns don't entirely overlap the questions I raised way above. I think there are a whole host of reasons people might be reluctant or skeptical, especially given the level of vitriol and FUD being thrown around and the fairly explicit push to automate jobs away. I have a lot of aesthetic objections to the entire LLM-generated corpus, but de gustibus...

    • Your response is definitely on the top 5% of reasonableness from AI skeptics, so I appreciate that :-)

      But, if you don't mind me going on a rant: the hallucinations thing. It kind of drives me nuts, because every day someone trots out hallucinations as some epic dunk that proves that AI will never be used in the real world or whatever. I totally hear you and think you are being a lot more reasonable than most (and thank you for that) -- you are saying that AI can get detail-oriented and fiddly math stuff wrong. But as I, my co-workers, and anyone who seriously uses AI in the industry all know, hallucinations are utterly irrelevant to our day-to-day.

      My point is that hallucinations are irrelevant because if you use AI seriously for a while, you quickly learn what it hallucinates on and what it does not. You build your mental model, and then you spend all your time on the stuff it doesn't hallucinate on, where it adds a fantastic amount of value, and you are happy, and you ignore the things it is bad at, because why would you use a tool on things it is bad at? Hearing people talk about hallucinations in 2026 sounds to me like someone saying "a hammer will never succeed - I used it to smack a few screws and it NEVER worked!" And then someone added Hammer-doesn't-work-itis to Wikipedia, and it got a few citations on arXiv, and now it's all people can say when they talk about hammers online, omfg.

      So when you say that I should spend more time asking "what do they see that I don't" - I feel quite confident I already know exactly what you see? You see that AI doesn't work in some domains. I quite agree with you that AI doesn't work in some domains. Why is this a surprise? Until 2023 it worked in no domains at all! There is no tool out there that works in every single domain.

      But when you see something new, the much more natural question than "what doesn't this work on" is "what does this work on". Because it does work in a lot of domains, and fabulously well at that. Continuously bringing up how it doesn't work in some domain, when everyone is talking about the domains it does work in, is just a non sequitur, like hopping into a conversation about Rust to talk about how it can't do your taxes, or into a conversation about CSS to say that it isn't Turing complete.

you make it seem like ai hesitation is a misunderstood fringe position, but it's not. i don't think anyone is confused about why some people are uninterested in ai tooling, but we do think you're wrong, and the defensive posturing and lines in the sand come off as incredibly uncurious.

I simply have no need for these things. I am faster, smarter, and I understand more. I synthesize disparate concepts your SoT models could never dream of. Why should I waste the money? I have all that I need up in my brain.

When everyone forgets how to read, I'll be thriving. When everyone is neurotic from prompting-brain, I will be in my emacs, zen and unburdened.

I love that yall have them though, they are kinda fun to mess with. And as long as I can review and reject it, all yalls little generations are acceptable for now.

  > But if you are genuinely confused by the attitudes of your peers, try asking not "what do I have that they lack" ("curiosity"?) but "what do they see that I don't" or "what do they care about that I don't"?

I'd argue these are good questions to ask in general, about many topics. That it's an essential skill of an engineer to ask these types of questions.

There are two critical mistakes people often make: 1) thinking there's only one solution to any given problem, and 2) thinking that, if an absolute optimum exists, they've already converged into the optimal region. If you look carefully at many of the problems people routinely argue about, you'll find that the participants are often working under different sets of assumptions. It doesn't matter if it's AI vs non-AI coding (or what mix), Vim vs Emacs vs VSCode, Windows vs Mac vs Linux, or even various political issues (no examples, because we all know what will happen if I give them, which only illustrates my point). There are no objective answers to these questions, and global optima only have the potential to exist when the questions are highly constrained. The assumptions are understood by those you work closely with, but that breaks down quickly.

If your objective is to seek truth you have to understand the other side. You have to understand their assumptions and measures. And just like everyone else, these are often not explicitly stated. They're "so obvious" that people might not even know how to explicitly state them!

But if the goal is not to find truth but instead find community, then don't follow this advice. Don't question anything. Just follow and stay in a safe bubble.

We can all talk, but it gets confusing. Some people argue to lay out their case and let others attack it, seeking truth, updating their views as weaknesses are found. Others argue to signal socially and strengthen their own beliefs; changing is not an option. And some people argue just because they're addicted to arguing, for the thrill of "winning". Unfortunately these can often look the same, at least at the outset.

Personally, I think this all highlights a challenge with LLMs, one that only exacerbates the problem of giving everyone access to all human knowledge: it's difficult to distinguish fact from fiction. I think it's only harder when you have something that talks smoothly and loves to use jargon. People do their own research all the time and come to wildly wrong conclusions. Not because they didn't try, not because they didn't do hard work, and not because they're specifically dumb, but because it's actually difficult to find truth. It's why PhD-level domain experts disagree on things in their shared domain. That's usually more nuanced, but that's also at a very high level of expertise.