Comment by davnicwil
1 month ago
In the end, I think the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
The details are what stops it from working in every form it's been tried.
You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
There is no level of abstraction that saves you from this, because the last level is simply things happening in the world in the way you want them to, and it's really really complicated to engineer that to happen.
I think this is evident by looking at the extreme case. There are plenty of companies with software engineers who truly can turn instructions articulated in plain language into software. But you see lots of these not being successful for the simple reason that those providing the instructions are not sufficiently engaged with the detail, or have the detail wrong. Conversely, for the most successful companies the opposite is true.
This rings true and reminds me of the classic blog post “Reality Has A Surprising Amount Of Detail”[0] that occasionally gets reposted here.
Going back and forth on the detail in requirements and mapping it to the details of technical implementation (and then dealing with the endless emergent details of actually running the thing in production on real hardware on the real internet with real messy users actually using it) is 90% of what’s hard about professional software engineering.
It’s also what separates professional engineering from things like the toy leetcode problems on a whiteboard that many of us love to hate. Those are hard in a different way, but LLMs can do them on their own better than humans now. Not so for the other stuff.
[0] http://johnsalvatier.org/blog/2017/reality-has-a-surprising-...
Every time we make progress, complexity increases and it becomes more difficult to make further progress. I'm not sure why this is surprising to many. We always do things to "good enough", not to perfection. Not that perfection even exists... "Good enough" means we tabled some things and triaged, addressing the most important ones. But now, to improve, those little things need to be addressed.
This repeats over and over. There are no big problems, there are only a bunch of little problems that accumulate. As engineers, scientists, researchers, etc our literal job is to break down problems into many smaller problems and then solve them one at a time. And again, we only solve them to the good enough level, as perfection doesn't exist. The problems we solve never were a single problem, but many many smaller ones.
I think the problem is we want to avoid depth. It's difficult! It's frustrating. It would be great if depth were never needed. But everything is simple until you actually have to deal with it.
> As engineers, scientists, researchers, etc our literal job is to break down problems into many smaller problems and then solve them one at a time.
Our literal job is also to look for and find patterns in these problems, so we can solve them as a more common problem, if possible, instead of solving them one at a time all the time.
I think we're all coping a bit here. This time, it really is different.
The fact is, one developer with Claude Code can now do the work of at least two developers. If that developer doesn't have ADHD, maybe that number is even higher.
I don't think the amount of work to do increases. I think the number of developers or the salary of developers decreases.
In any case, we'll see this in salaries over the next year or two.
The very best move here might be to start working for yourself and delete the dependency on your employer. These models might enable more startups.
"(...) maybe growing vegetables or using a Haskell package for the first time, and being frustrated by how many annoying snags there were." Haha this is funny. Interesting reading.
While this is absolutely true and I've read this before, I don't think you can make this an open and shut case. Here's my perspective as an old guy.
The first thing that comes to mind when I see this as a counterargument is that I've quite successfully built enormous amounts of completely functional digital products without ever mastering any of the details that I figured I would have to master when I started creating my first programs in the late 80s or early 90s.
When I first started, it was a lot about procedural thinking, like BASIC goto X, looping, if-then statements, and that kind of thing. That seemed like an abstraction compared to just assembly code, which, if you were into video games, was what real video game people were doing. At the time, we weren't that many layers away from the ones and zeros.
It's been a long march since then. What I do now is still sort of shockingly "easy" to me sometimes when I think about that context. I remember being in a band and spending a few weeks trying to build a website that sold CDs via credit card, and trying to unravel how cgi-bin worked using a 300 page book I had bought and all that. Today a problem like that is so trivial as to be a joke.
Reality hasn't gotten any less detailed. I just don't have to deal with it any more.
Of course, the standards have gone up. And that's likely what's gonna happen here. The standards are going to go way up. You used to be able to make a living just launching a website to sell something on the internet that people weren't selling on the internet yet. Around 1999 or so I remember a friend of mine built a website to sell stereo stuff. He would just go down to the store in New York, buy it, and mail it to whoever bought it. Made a killing for a while. It was ridiculously easy if you knew how to do it. But most people didn't know how to do it.
Now you can make a living pretty "easily" selling a SaaS service that connects one business process to another, or integrates some workflow. What's going to happen to those companies now is left as an exercise for the reader.
I don't think there's any question that there will still be people building software, making judgment calls, and grappling with all the complexity and detail. But the standards are going to be unrecognizable.
Is the surprising amount of detail an indicator that we do not live in a simulation, or is it instead a sign that we have to be living inside a simulation, because Reality shouldn't need all this detail, indicating an algorithmic function run amok?
Reality is infinitely analog and therefore digital will only ever be an approximation.
Can you give an example of an "other stuff"?
I once wrote software that had to manage the traffic coming into a major shipping terminal- OCR, gate arms, signage, cameras for inspecting chassis and containers, SIP audio comms, RFID readers, all of which needed to be reasoned about in a state machine, none of which were reliable. It required a lot of on the ground testing and observation and tweaking along with human interventions when things went wrong. I’d guess LLMs would have been good at subsets of that project, but the entire thing would still require a team of humans to build again today.
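To make the unreliability point concrete, here is a minimal sketch of the kind of state machine involved (Python, with hypothetical state and event names, nothing from the actual system): even this toy lane controller has to spell out retries, timeouts, and a manual-review escape hatch.

    from enum import Enum, auto

    class LaneState(Enum):
        IDLE = auto()
        IDENTIFYING = auto()      # waiting on OCR / RFID, both of which may misread
        GATE_OPEN = auto()
        MANUAL_REVIEW = auto()    # a human takes over

    class GateLane:
        MAX_READ_ATTEMPTS = 3

        def __init__(self):
            self.state = LaneState.IDLE
            self.read_attempts = 0

        def on_event(self, event, payload=None):
            # Every transition assumes the sensors can misread or stay silent.
            if self.state == LaneState.IDLE and event == "vehicle_detected":
                self.state = LaneState.IDENTIFYING
                self.read_attempts = 0
            elif self.state == LaneState.IDENTIFYING and event == "container_id_read":
                if payload and self._plausible(payload):
                    self.state = LaneState.GATE_OPEN    # raise the arm, update signage
                else:
                    self._retry_or_escalate()
            elif self.state == LaneState.IDENTIFYING and event == "read_timeout":
                self._retry_or_escalate()
            elif self.state == LaneState.GATE_OPEN and event == "vehicle_cleared":
                self.state = LaneState.IDLE
            elif event == "operator_override":
                self.state = LaneState.IDLE             # human intervention always wins
            return self.state

        def _retry_or_escalate(self):
            self.read_attempts += 1
            if self.read_attempts >= self.MAX_READ_ATTEMPTS:
                self.state = LaneState.MANUAL_REVIEW    # e.g. page an operator over SIP audio
            # otherwise: re-trigger the OCR camera / RFID read and wait

        def _plausible(self, container_id):
            # A real check would validate ISO 6346 check digits, known bookings, etc.
            return isinstance(container_id, str) and len(container_id) == 11

And that sketch is the easy part; the hard part was everything around it: flaky hardware, on-site observation, and the human fallbacks.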
Counterpoint: perhaps it's not about escaping all the details, just the irrelevant ones, and the need to have them figured out up front. Making the process more iterative, an exploration of medium under supervision or assistance of domain expert, turns it more into a journey of creation and discovery, in which you learn what you need (and learn what you need to learn) just-in-time.
I see no reason why this wouldn't be achievable. Having lived most of my life in the land of details, country of software development, I'm acutely aware 90% of effort goes into giving precise answers to irrelevant questions. In almost all problems I've worked on, whether at tactical or strategic scale, there's either a single family of answers, or a broad class of different ones. However, no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters". Either way, I'm forced to pick and spell out a concrete answer myself, by hand. Fortunately, LLMs are slowly starting to help with that.
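The nearest thing we have today is probably defaults. Here is a minimal sketch (Python, with made-up config fields) of how far that gets you: "just do the usual" becomes "accept every keyword default", but someone still had to write a concrete answer for each field once, and "revisit it once the choice matters" is still a manual edit.

    from dataclasses import dataclass

    @dataclass
    class ServerConfig:
        # Each default is a concrete answer to a question the caller may not care about yet.
        host: str = "0.0.0.0"
        port: int = 8080
        request_timeout_s: float = 30.0
        max_body_bytes: int = 1_048_576

    # "Just do the usual": accept every default.
    config = ServerConfig()

    # "Revisit the topic once the choice matters": override exactly one field later.
    upload_config = ServerConfig(max_body_bytes=50 * 1_048_576)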
From my experience the issue really is, unfortunately, that it is impossible to tell if a particular detail is irrelevant until after you have analyzed and answered all of them.
In other words, it all looks easy in hindsight only.
I think the most coveted ability of a skilled senior developer is precisely this "uncanny" ability to predict beforehand whether some particular detail is important or irrelevant. This ability can only be obtained through years of experience and hubris.
> no programming language supports the notion of "just do the usual" or "I don't care, pick whatever, we can revisit the topic once the choice matters"
Programming languages already take lots of decisions implicitly and explicitly on one’s behalf. But there are way more details of course, which are then handled by frameworks, libraries, etc. Surely at some point, one has to take a decision? Your underlying point is about avoiding boilerplate, and LLMs definitely help with that already - to a larger extent than cookie cutter repos, but none of them can solve IRL details that are found through rigorous understanding of the problem and exploration via user interviews, business challenges, etc.
But that's the hard part. You have to explore the details to determine if they need to be included or not.
You can't just know right off the bat. Doing so contradicts the premise. You cannot determine whether a detail is unimportant unless you get detailed. If you only care about a few grains of sand in a bucket, you still have to search through the whole bucket of sand for those few grains.
Right. But that's where a tight feedback loop comes into play. New AI developments enable that in at least two ways: offloading busywork and necessary but straightforward work (LLMs can already write and iterate orders of magnitude faster than people), and having a multi-domain expert on call to lean on.
The thing about important details is that what ultimately matters is getting them right eventually, not necessarily the first time around. The real cost limiting creative and engineering efforts isn't the one of making a bad choice, but that of undoing it. In software development, AI makes even large-scale rewrites orders of magnitude cheaper than they ever were before, which makes a lot more decisions easily undoable in practice, when before that used to be prohibitively costly. I see that as one major way towards enabling this kind of iterative, detail-light development.
Fully agree with this. Not all labor is equally worth doing.
It's a cliché that the first 90% of a software project takes 90% of the time and the last 10% also takes 90% of the time, but it's cliché because it's true. So we've managed to invent a giant plausibility engine that automates the 90% of the process people enjoy leaving just the 90% that people universally hate.
And since the developers who have to do the last 90% were not involved in the first 90%, they will have no clue how to do it.
So now the last 90% is the last 99%.
>So we've managed to invent a giant plausibility engine that automates the 90% of the process people enjoy leaving just the 90% that people universally hate.
OK, for me it is the last 10% that is of any interest whatsoever. And I think that has been the case with any developer I've ever worked with whom I consider to be a good developer.
OK the first 90% can have spots of enjoyment, like a nice gentle Sunday drive stopping off at Dairy Queen, but it's not normally what one would call "interesting".
Sorry, I don't buy it. I'm an ops guy, and devs who say they like the integration stage mean they like making ops play a guessing game and clean up the mess they left us.
I am an AI hater (at least in some of the contexts it's currently being used for, precisely like this one), and you have worded some things I like to say in a way I hadn't thought of. I agree with and appreciate everything you said, man!
Now, I do agree with you, and this is why I feel AI can be good for prototyping or for internal use cases. Want to try out some idea? Sure, use it. Have a website that sucks and want to quickly spin up an alternative for personal use? Go for it, maybe even publish it to the web as open source.
Take feedback from people if they give any and run with it. So in essence, prototyping's pretty cool.
But whenever I wish to monetize, or the idea is to monetize, I feel like we can take some design ideas from the experimentation and then just write the code ourselves. My view is simple: I don't want to pay for some service that is AI slop. At that point, just share the prompt with us.
So at that point, just rewrite the code and actually learn what you are talking about. To give an example: I recently prototyped a simple Firecracker SSH thing using the gliderlabs/ssh Go package. I don't know how the AI code works; I just built it for my own use case. But if I ever (someday) try to monetize it in any sense, rest assured I will learn how gliderlabs/ssh works to its core and build it all by hand.
TLDR: AI's good for prototyping, but once you've got the idea (or more ideas on top of it), try to rewrite it in your own understanding, because, as others have said, you won't understand the AI code and you'd spend 99% of your time on the 1% AI can't do; at that point, why not just rewrite?
Also, if you rewrite, I feel like most people will be chill with buying it, even anti-AI people. Like, sure, use AI for prototypes, but give me code which I can verify, which you wrote or understand to its core, with that fact pinned down 100%.
If you are really into software projects for sustainability, you are gonna anger a crowd for no reason & have nothing beneficial come out of it.
So I think basically everybody knows this, but AI still gets to production because sustainability isn't the concern.
This is the cause. Sustainability just straight up isn't the concern.
If you have VCs who want you to add hundreds of features, or want you to use AI or have AI integration or something (something I don't think every company or its creators should be interested in unless necessary), and those VCs are in it only for 3-5 years and might want to dump you or enshitten you short term for their own gains, I can see why sustainability stops being a concern and we get to where we are.
Another group of people most interested are the startup-entrepreneur hustle-culture types, who have a VC-like culture as well, where sustainability just doesn't matter.
I do hope that I am not blanket-naming these groups, because sure, some might be exceptions, but I am just sharing how the incentives aren't aligned, how these groups would likely end up shipping 90% AI slop, and that's what we end up seeing in evidence at most companies.
I do feel like we need to boost more companies that are in it for the long run with sustainable practices, and people/indie businesses that are in it because they are passionate about some project (usually because they faced the problem themselves, or out of curiosity), because we as consumers hold an incentive stick as well. I hope some movement can spring up that captures this nuance, because I am not completely anti-AI, but not exactly pro either.
Yes! I love this framing and it’s spot on. The successful projects that I’ve been involved in someone either cares deeply and resolves the details in real time or we figured out the details before we started. I’ve seen it outside software as well, someone says “I want a new kitchen” but unless you know exactly where you want your outlets, counter depths, size of fridge, type of cabinets, location of lighting, etc. ad infinitum your project is going to balloon in time and cost and likely frustration.
Is your kitchen contractor an unthinking robot with no opinions or thoughts of their own that has never used a kitchen? Obviously, if you want a specific cabinet to go in a specific place in the room, you're going to have to give the kitchen contractor specifics. But assuming your kitchen contractor isn't an utter moron, they can come up with something reasonable if they know it's supposed to be a kitchen. A sink, a stove, dishwasher, refrigerator. Plumbing and power for the above. Countertops, drawers, cabinets. If you're a control freak (which is your prerogative, it's your kitchen after all), that's not going to work for you. Same too for generated code. If you absolutely must touch every line of code, code generation isn't going to suit you. If you just want a login screen with parameters you define, there are so many login pages the AI can crib from that nondeterminism isn't even a problem.
At least in case of the kitchen contractor, you can trust all the electrical equipment, plumbing etc. is going to be connected in such a way that disasters won't happen. And if it is not, at least you can sue the contractor.
The problem with LLMs is that it is not only the "irrelevant details" that are hallucinated. It is also "very relevant details" which either make the whole system inconsistent or full of security vulnerabilities.
You kitchen contractor will never cook in your kitchen. If you leave the decisions to them, you'll get something that's quick and easy to build, but it for sure won't have all the details that make a great kitchen. It will be average.
Which seems like an apt analogy for software. I see people all the time who build systems and they don't care about the details. The results are always mediocre.
Maybe they have a kitchen without a dishwasher. So unless asked, they won't include one. Or even make it possible to include one. Seems like a real possibility. Maybe eventually, after building many kitchens, they learn they should ask about that one.
A kitchen is a great metaphor. Details or doom.
“Writing is nature's way of letting you know how sloppy your thinking is.”
— Richard Guindon
This is certainly true of writing software.
That said, I am assuredly enjoying trying out artificial writing and research assistants.
> You cannot escape the details. You must engage with them and solve them directly, meticulously. It's messy, it's extremely complicated and it's just plain hard.
Of course you can. The way the manager ignores the details when they ask the developer to do something, the same way they can when they ask the machine to do it.
> the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
Yes, it has nothing to do with dev specifically; dev "just" happens to be a way to do so that is text based, which is the medium of LLMs. What also "just" happens to be convenient is that dev is expensive, so if a new technology might help make something possible and/or make it inexpensive, it's potentially a market.
Now, pesky details like actual implementation, who's got time for that, it's just a few more trillions away.
> In the end, I think the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
> The details are what stops it from working in every form it's been tried.
Since the author was speaking to business folk, I would argue that their dream is cheaper labor, or really just managing a line item in the summary budget. As evidenced by outsourcing efforts. I don't think they really care about how it happens - whether it is manifesting things into reality without having to get into the details, or just a cheaper human. It seems to me that the corporate fever around AI is simply the prospect of a "cheaper than human" opportunity.
Although, to your point, we must await AGI, or get very close to it, to be able to manifest things into reality without having to get into the details :-)
For me, what supports this are things outside of software. If a company or regime wants to build something, they can't just say what they want and get exactly what they envision. If human minds can't figure out what another human wants, how could a computer do it?
> Conversely, for the most successful companies the opposite is true.
While I agree with this, I think that it’s important to acknowledge that even if you did everything well and thought of everything in detail, you can still fail for reasons that are outside of your control. For example, a big company buying from your competitor who didn’t do a better job than you simply because they were mates with the people making the decision… that influences everyone else and they start, with good reason, to choose your competitor just because it’s now the “standard” solution, which itself has value and changes the picture for potential buyers.
In other words, being the best is no guarantee of success.
I don't think the recurring failure is that we haven't found the right abstraction yet. It's that abstraction is often mistaken for understanding.
> The recurring dream of replacing developers
> In the end, I think the dream underneath this dream is about being able to manifest things into reality without having to get into the details.
It's basically this:
"I'm hungry. I want to eat."
"Ok. What do you want?"
"I don't know. Read my mind and give me the food I will love."
This matches what keeps repeating. Tools change where the work happens, but they don’t remove the need for controlled decisions about inputs, edge cases, and outcomes. When that workflow isn’t explicit, every new abstraction feels like noise.
Well said. This dream is probably for someone who has experienced the hardship, felt frustrated, and gave up. Then they see others who did it effortlessly, who even found it fun. The manifestation of the dream feels like revenge to them.
This framing neatly explains the hubris of the influencer-wannabes on social media who have time to post endlessly about how AI is changing software dev forever while also having never shipped anything themselves.
They want to be seen as competent without the pound of flesh that mastery entails. But AI doesn’t level one’s internal playing field.
My mantra as an engineer is "Devil is in the details".
For two almost identical problems with only a little difference between them, the solutions can be radically different in complexity, price, and time to deliver.
This is the most correct comment I've ever come across.
Yeah this is a thought provoking framing. Maybe the way in which those of us who really enjoy programming are weird is that we relish meticulously figuring out those details.
This, 100%. Semicolons and curly brackets aren’t the hard part of developing software. The hard part is figuring out what you want in the first place.
Yes. On an analogy level you can also examine whether you are in a dream or in reality by looking exactly to such details :)
You write so well that I'm convinced by your words, even if you are wrong. Do you write professionally?
I really appreciate the compliment, thank you! I don't.
Brevity is the soul of wit, you did well sir.
It's essentially the same as an engineering lead writing a good Jira ticket.
It looks like there's a difference this time: copying the details of other people's work has become exceedingly easy and reliable, at least for commonly tried use cases. Say I want to vibe code a dashboard, and AI codes it out. It works. In fact, it works so much better than anything I could ever build, because the AI was trained on the best dashboard code out there. Yes, I can't think of all the details of a world-class dashboard, but hey, someone else did, and the AI correctly responds to my prompt with those details. Such "copying" used to be really hard among humans. Without AI, I would have to learn so much first even if I could use open-source code as a starting point: the APIs of the libraries, the basic concepts of web programming, etc. Yet the AI doesn't care. It's just a gigantic Bayesian machine that emits such code with nearly probability 1 for common use cases.
So it is not that details don't matter, but that now people can easily transfer certain know-how from other great minds. Unfortunately (or fortunately?), most people's jobs are learning and replicating know-how from others.
But the dashboard is not important at all, because everyone can have the same dashboard the same way you have it. It's like generating a static website using Hugo and applying a theme provided for it. The end product is something built on an assembly line. No taste, no soul, no effort. (Of course, there is effort behind designing and building the assembly line, but not in the product it produces.)
Now, if you want to use the dashboard to do something else really brilliant, it is good enough as a means. Just make sure the dashboard is not the end.
The dashboard is just an example. The gist is how much of the know-how we use in our work can be replaced by AI transforming other people's existing work. I think it hinges on how many new problems or new business demands will show up. If we just work on small variations of existing business, then our know-how will quickly converge (e.g. building a dashboard or a vanilla linear regression model), and AI will spew out such code for many of us.
I don't think anyone's job is copying "know-how". Knowing how goes a lot deeper than writing the code.
Especially in web, boilerplate/starters/generators that do exactly what you want with little to no code or familiarity have been the norm for at least a decade. This is the lifeblood of repos like npm.
What we have is better search for all this code and documentation that was already freely available and ready to go.
manifest-driven development.
Just put the intention out there in the universe and the universe will answer!
To put an economic spin on this (that no one asked for), this is also the capitalist nirvana. I don't have an immediate citation, but in my experience software engineering salaries are usually one of the biggest items on a P&L, which keeps the capitalist from approaching the singularity: limitless profit margin. Obviously this is unachievable, but one of the major obstacles to it is now in the process of being destabilised and disrupted.
in the most profitable / high margin software industry, what other major costs are there?
Sam Altman’s real job is pushing AI hopium on execs who will believe anything in pursuit of that nirvana.
Which is hilarious, because AI is making it easier and easier to bring a good idea to market with much less external financing than usual.
You can argue about security, reliability, and edge cases, but it's not as if human devs have a perfect record there.
Or even a particularly good one.
What are those execs bringing to the table, beyond entitlement and self-belief?
The argument is empty because it relies on a trope rather than evidence. “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe. History is full of technologies that tried to replace human labor and failed, and just as full of technologies that failed repeatedly and then abruptly succeeded. The existence of earlier failures proves nothing in either direction.
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
Forgive me if I'm wrong, but my AI spidey sense is tingling...
> “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle, before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
>Wait, so we can infer the future from “trendlines”, but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias…
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra complex, domain specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
> What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
You first, lol.
> This is how self delusion works
Yeah, about that...
“There is no evidence” is not skepticism. It’s abdication. It’s what people say when they want the implications to go away without engaging with anything concrete. If there is “a lot of evidence of the opposite,” the minimum requirement is to name one metric, one study, or one observable trend. You didn’t. You just asserted it and moved on, which is not how serious disagreement works.
“You first, lol” isn’t a rebuttal either. It’s an evasion. The claim was not “the labor market has already flipped.” The claim was that AI-assisted coding has changed individual leverage, and that extrapolating that change leads somewhere uncomfortable. Demanding proof that the future has already happened is a category error, not a clever retort.
And yes, the self-delusion paragraph clearly hit, because instead of addressing it, you waved vaguely and disengaged. That’s a tell. When identity is involved, people stop arguing substance and start contesting whether evidence is allowed to count yet.
Now let’s talk about evidence, using sources who are not selling LLMs, not building them, and not financially dependent on hype.
Martin Fowler has explicitly written about AI-assisted development changing how code is produced, reviewed, and maintained, noting that large portions of what used to be hands-on programmer labor are being absorbed by tools. His framing is cautious, but clear: AI is collapsing layers of work, not merely speeding up typing. That is labor substitution at the task level.
Kent Beck, one of the most conservative voices in software engineering, has publicly stated that AI pair-programming fundamentally changes how much code a single developer can responsibly produce, and that this alters team dynamics and staffing assumptions. Beck is not bullish by temperament. When he says the workflow has changed, he means it.
Bjarne Stroustrup has explicitly acknowledged that AI-assisted code generation changes the economics of programming by automating work that previously required skilled human attention, while also warning about misuse. The warning matters, but the admission matters more: the work is being automated.
Microsoft Research, which is structurally separated from product marketing, has published peer-reviewed studies showing that developers using AI coding assistants complete tasks significantly faster and with lower cognitive load. These papers are not written by executives. They are written by researchers whose credibility depends on methodological restraint, not hype.
GitHub Copilot’s controlled studies, authored with external researchers, show measurable increases in task completion speed, reduced time-to-first-solution, and increased throughput. You can argue about long-term quality. You cannot argue “no evidence” without pretending these studies don’t exist.
Then there is plain, boring observation.
AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code. These were not side chores. They were how junior and mid-level engineers justified headcount. That work is disappearing as a category, which is why junior hiring is down and why backfills quietly don’t happen.
You don’t need mass layoffs to identify a structural shift. Structural change shows up first in roles that stop being hired, positions that don’t get replaced, and how much one person can ship. Waiting for headline employment numbers before acknowledging the trend is mistaking lagging indicators for evidence.
If you want to argue that AI-assisted coding will not compress labor this time, that’s a valid position. But then you need to explain why higher individual leverage won’t reduce team size. Why faster idea-to-code cycles won’t eliminate roles. Why organizations will keep paying for surplus engineering labor when fewer people can deliver the same output.
But “there is no evidence” isn’t a counterargument. It’s denial wearing the aesthetic of rigor.
I think a really good takeaway is that we're bad at predicting the future. That is the most solid prediction of history. Before we thought speech recognition was impossible, we thought it would be easy. We thought a lot of problems would be easy, and it turned out a lot of them were not. We thought a lot of problems would be hard, and we use those technologies now.
Another lesson history has taught us, though, is that people don't defend narratives, they defend status. Not always successfully. They might not update beliefs, but they act effectively, decisively, and sometimes brutally to protect status. You're making an evolutionary biology argument (which is always shady!) but people see loss of status as an existential threat, and they react with anger, not just denial.
“The existence of earlier failures proves nothing in either direction.”
This seems extreme and obviously incorrect.