Comment by threethirtytwo
1 month ago
The argument is empty because it relies on a trope rather than evidence. “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe. History is full of technologies that tried to replace human labor and failed, and just as full of technologies that failed repeatedly and then abruptly succeeded. The existence of earlier failures proves nothing in either direction.
Speech recognition was a joke for half a century until it wasn’t. Machine translation was mocked for decades until it quietly became infrastructure. Autopilot existed forever before it crossed the threshold where it actually mattered. Voice assistants were novelty toys until they weren’t. At the same time, some technologies still haven’t crossed the line. Full self-driving. General robotics. Fusion. History does not point one way. It fans out.
That is why invoking history as a veto is lazy. It is a crutch people reach for when it’s convenient. “This happened before, therefore that’s what’s happening now,” while conveniently ignoring that the opposite also happened many times. Either outcome is possible. History alone does not privilege the comforting one.
If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid-level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals. The slope matters more than anecdotes. The relevant question is not whether this resembles CASE tools. It’s what the world looks like if this curve runs for five more years. The conclusion is not subtle.
The reason this argument keeps reappearing has little to do with tools and everything to do with identity. People do not merely program. They are programmers. “Software engineer” is a marker of intelligence, competence, and earned status. It is modern social rank. When that rank is threatened, the debate stops being about productivity and becomes about self preservation.
Once identity is on the line, logic degrades fast. Humans are not wired to update beliefs when status is threatened. They are wired to defend narratives. Evidence is filtered. Uncertainty is inflated selectively. Weak counterexamples are treated as decisive. Strong signals are waved away as hype. Arguments that sound empirical are adopted because they function as armor. “This happened before” is appealing precisely because it avoids engaging with present reality.
This is how self-delusion works. People do not say “this scares me.” They say “it’s impossible.” They do not say “this threatens my role.” They say “the hard part is still understanding requirements.” They do not say “I don’t want this to be true.” They say “history proves it won’t happen.” Rationality becomes a costume worn by fear. Evolution optimized us for social survival, not for calmly accepting trendlines that imply loss of status.
That psychology leaks straight into the title. Calling this a “recurring dream” is projection. For developers, this is not a dream. It is a nightmare. And nightmares are easier to cope with if you pretend they belong to someone else. Reframe the threat as another person’s delusion, then congratulate yourself for being clear-eyed. But the delusion runs the other way. The people insisting nothing fundamental is changing are the ones trying to sleep through the alarm.
The uncomfortable truth is that many people do not stand to benefit from this transition. Pretending otherwise does not make it false. Dismissing it as a dream does not make it disappear. If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even when the destination is not one you want to visit.
Forgive me if I'm wrong, but my AI spidey sense is tingling...
> “We’ve seen this before and it didn’t happen” is not analysis. It’s selective pattern matching used when the conclusion feels safe.
> If you want to argue seriously, you have to start with ground truth. What is happening now. What the trendlines look like. What follows if those trendlines continue.
Wait, so we can infer the future from "trendlines", but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias...
I would argue that data points that are barely a few years old, and obscured by an unprecedented hype cycle and gold rush, are not reliable predictors of anything. The safe approach would be to wait for the market to settle before placing any bets on the future.
> Time from idea to implementation is collapsing. Junior and mid-level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
> The reason this argument keeps reappearing has little to do with tools and everything to do with identity.
Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
I don't get how anyone can speak about trends and what's currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
>Wait, so we can infer the future from “trendlines”, but not from past events? Either past events are part of a macro trend, and are valuable data points, or the micro data points you choose to focus on are unreliable as well. Talk about selection bias…
If past events can be dismissed as “noise,” then so can selectively chosen counterexamples. Either historical outcomes are legitimate inputs into a broader signal, or no isolated datapoint deserves special treatment. You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
>What is hypothetical is what will happen to all this software and the companies that produced it a few years down the line. How reliable is it? How maintainable is it? How many security issues does it have? What has the company lost because those issues were exploited? Will the same people who produced it using these new tools be able to troubleshoot and fix it? Will the tools get better to allow them to do that?
These are legitimate questions, and they are all speculative. My expectation is that code quality will decline while simultaneously becoming less relevant. As LLMs ingest and reason over ever larger bodies of software, human-oriented notions of cleanliness and maintainability matter less. LLMs are far less constrained by disorder than humans are.
>Really? Everything? There is no chance that some people are simply pointing out the flaws of this technology, and that the marketing around it is making it out to be far more valuable than it actually is, so that a bunch of tech grifters can add more zeroes to their net worth?
The flaws are obvious. So obvious that repeatedly pointing them out is like warning that airplanes can crash while ignoring that aviation safety has improved to the point where you are far more likely to die in a car than in a metal tube moving at 500 mph.
Everyone knows LLMs hallucinate. That is not contested. What matters is the direction of travel. The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
That is the real disagreement. Critics focus on present day limitations. Proponents focus on the trajectory. One side freezes the system in time; the other extrapolates forward.
>I don’t get how anyone can speak about trends and what’s currently happening with any degree of confidence. Let alone dismiss the skeptics by making wild claims about their character. Do better.
Because many skeptics are ignoring what is directly observable. You can watch AI generate ultra-complex, domain-specific systems that have never existed before, in real time, and still hear someone dismiss it entirely because it failed a prompt last Tuesday.
Repeating the limitations is not analysis. Everyone who is not a skeptic already understands them and has factored them in. What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation.
At that point, the disagreement stops being about evidence and starts looking like bias.
Respectfully, you seem to love the sound of your writing so much you forget what you are arguing about. The topic (at least for the rest of the people in this thread) seems to be whether AI assistance can truly eliminate programmers.
There is one painfully obvious, undeniable historical trend: making programmer work easier increases the number of programmers. I would argue a modern developer is 1000x more effective than one working in the times of punch cards - yet we have roughly 1000x more software developers than back then.
I'm not an AI skeptic by any means, and use it every day at my job where I am gainfully employed to develop production software used by paying customers. The overwhelming consensus among those similar to me (I've put down all of these qualifiers very intentionally) is that the currently existing modalities of AI tools are a massive productivity boost mostly for the "typing" part of software (yes, I use the latest SOTA tools, Claude Opus 4.5 thinking, blah, blah, so do most of my colleagues). But the "typing" part hasn't been the hard part for a while already.
You could argue that there is a "step change" coming in the capabilities of AI models, which will entirely replace developers (so software can be "willed into existence", as elegantly put by OP), but we are no closer to that point now than we were in December 2022. All the success of AI tools in actual, real-world software has been in tools specifically designed to assist existing, working, competent developers (e.g. Cursor, Claude Code), and the tools which have positioned themselves to replace them have failed (Devin).
> The trendline is clear. Just as early aviation was dangerous but steadily improved, this technology is getting better month by month.
I'm yet to be convinced of this. I keep hearing it, but every time I look at the results they're basically garbage.
I think LLMs are useful tools, but I haven't seen anything convincing that they will be able to replace even junior developers any time soon.
> You cannot appeal to trendlines while arbitrarily discarding the very history that defines them without committing selection bias.
> When large numbers of analogous past events point in contradictory directions, individual anecdotes lose predictive power. Trendlines are not an oracle, but once the noise overwhelms the signal, they are the best approximation we have.
I'm confused. So you're agreeing with me, up until the very last part of the last sentence...? If the "noise overwhelms the signal", why are "trendlines the best approximation we have"? We have reliable data of past outcomes in similar scenarios, yet the most recent noisy data is the most valuable? Huh?
(Honestly, your comments read suspiciously like they were LLM-generated, as others have mentioned. It's like you're jumping on specific keywords and producing the most probable tokens without any thought about what you're saying. I'll give you the benefit of the doubt for one more reply, though.)
To be fair, I think this new technology is fundamentally different from all previous attempts at abstracting software development. And I agree with you that past failures are not necessarily indicative that this one will fail as well. But it would be foolish to conclude anything about the value of this technology from the current state of the industry, when it should be obvious to anyone that we're in a bull market fueled by hype and speculation.
What you're doing is similar to speculative takes during the early days of the internet and WWW. How it would transform politics, end authoritarianism and disinformation, and bring the world together. When the dust settled after the dot-com crash, the actual value of the technology became evident, and it turns out that none of the promises of social media came true. Quite the opposite, in fact. That early optimism vanished along the way.
The same thing happened with the skepticism of that era: that the internet was a fad, that e-commerce would never work, and so on. Both groups were wrong.
> What skeptics keep doing is reciting known flaws while refusing to reason about what is no longer a limitation. At that point, the disagreement stops being about evidence and starts looking like bias.
Skepticism and belief are not binary states, but points on a spectrum. At the extreme ends there are people who dismiss the technology altogether, and there are people who claim that the technology will cure diseases, end poverty, and bring world prosperity[1].
I think neither of these viewpoints is worth paying attention to. As usual, the truth is somewhere in the middle. I'm leaning towards the skeptic side simply because the believers are far louder, more obnoxious, and have more to gain from pushing their agenda. The only sane position at this point is to evaluate the technology based on personal use, discuss your experience with other rational individuals, and wait for the hype to die down.
[1]: https://ai-2027.com/
> What is happening now. What the trendlines look like. What follows if those trendlines continue. Output per developer is rising. Time from idea to implementation is collapsing. Junior and mid-level work is disappearing first. Teams are shipping with fewer people. These are not hypotheticals.
My dude, I just want to point out that there is no evidence of any of this, and a lot of evidence of the opposite.
> If you want to engage honestly, you stop citing the past and start following the numbers. You accept where the trendlines lead, even
You first, lol.
> This is how self-delusion works
Yeah, about that...
“There is no evidence” is not skepticism. It’s abdication. It’s what people say when they want the implications to go away without engaging with anything concrete. If there is “a lot of evidence of the opposite,” the minimum requirement is to name one metric, one study, or one observable trend. You didn’t. You just asserted it and moved on, which is not how serious disagreement works.
“You first, lol” isn’t a rebuttal either. It’s an evasion. The claim was not “the labor market has already flipped.” The claim was that AI-assisted coding has changed individual leverage, and that extrapolating that change leads somewhere uncomfortable. Demanding proof that the future has already happened is a category error, not a clever retort.
And yes, the self-delusion paragraph clearly hit, because instead of addressing it, you waved vaguely and disengaged. That’s a tell. When identity is involved, people stop arguing substance and start contesting whether evidence is allowed to count yet.
Now let’s talk about evidence, using sources who are not selling LLMs, not building them, and not financially dependent on hype.
Martin Fowler has explicitly written about AI-assisted development changing how code is produced, reviewed, and maintained, noting that large portions of what used to be hands-on programmer labor are being absorbed by tools. His framing is cautious, but clear: AI is collapsing layers of work, not merely speeding up typing. That is labor substitution at the task level.
Kent Beck, one of the most conservative voices in software engineering, has publicly stated that AI pair-programming fundamentally changes how much code a single developer can responsibly produce, and that this alters team dynamics and staffing assumptions. Beck is not bullish by temperament. When he says the workflow has changed, he means it.
Bjarne Stroustrup has explicitly acknowledged that AI-assisted code generation changes the economics of programming by automating work that previously required skilled human attention, while also warning about misuse. The warning matters, but the admission matters more: the work is being automated.
Microsoft Research, which is structurally separated from product marketing, has published peer-reviewed studies showing that developers using AI coding assistants complete tasks significantly faster and with lower cognitive load. These papers are not written by executives. They are written by researchers whose credibility depends on methodological restraint, not hype.
GitHub Copilot’s controlled studies, authored with external researchers, show measurable increases in task completion speed, reduced time-to-first-solution, and increased throughput. You can argue about long-term quality. You cannot argue “no evidence” without pretending these studies don’t exist.
Then there is plain, boring observation.
AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code. These were not side chores. They were how junior and mid-level engineers justified headcount. That work is disappearing as a category, which is why junior hiring is down and why backfills quietly don’t happen.
You don’t need mass layoffs to identify a structural shift. Structural change shows up first in roles that stop being hired, positions that don’t get replaced, and how much one person can ship. Waiting for headline employment numbers before acknowledging the trend is mistaking lagging indicators for evidence.
If you want to argue that AI-assisted coding will not compress labor this time, that’s a valid position. But then you need to explain why higher individual leverage won’t reduce team size. Why faster idea-to-code cycles won’t eliminate roles. Why organizations will keep paying for surplus engineering labor when fewer people can deliver the same output.
But “there is no evidence” isn’t a counterargument. It’s denial wearing the aesthetic of rigor.
https://metr.org/blog/2025-07-10-early-2025-ai-experienced-o...
> If there is “a lot of evidence of the opposite,” the minimum requirement is to name one metric, one study, or one observable trend. You didn’t. You just asserted it and moved on, which is not how serious disagreement works.
I treated it with the amount of seriousness it deserves, and provided exactly as much evidence as you did lol. It's on you to prove your statement, not on me to disprove you.
Also, you still haven't provided the kind of evidence you say is necessary. None of the "evidence" you listed is actually evidence of mass change in engineering.
> AI-assisted coding is directly eliminating discrete units of programmer labor: boilerplate, CRUD endpoints, test scaffolding, migrations, refactors, first drafts, glue code.
You are not a professional engineer lol, because most of those things are already automated and have been for decades. What on earth do you think we do every day?
Is it just me or does anyone else get strong LLM vibes from the way this is written?
I think a really good takeaway is that we're bad at predicting the future. That is the most solid prediction history supports. Before we concluded speech recognition was impossible, we thought it would be easy. We thought a lot of problems would be easy, and it turned out a lot of them were not. We thought a lot of problems would be hard, and we use those technologies now.
Another lesson history has taught us, though, is that people don't defend narratives, they defend status. Not always successfully. They might not update beliefs, but they act effectively, decisively, and sometimes brutally to protect status. You're making an evolutionary biology argument (which is always shady!), but people see loss of status as an existential threat, and they react with anger, not just denial.
“The existence of earlier failures proves nothing in either direction.”
This seems extreme and obviously incorrect.