Comment by oxag3n
6 days ago
> We're thinking about AI wrong.
And this write-up is no exception.
Why even bother thinking about AI when the Anthropic and OpenAI CEOs openly tell us what they want (quote from a recent Dwarkesh interview): "Then further down the spectrum, there’s 90% less demand for SWEs, which I think will happen but this is a spectrum."
So save the thinking and listen to the intent: replace 90% of SWEs in the near future (6-12 months, according to Amodei).
I don't think anyone serious believes this. Replacing developers with a less costly alternative is obviously a very bullish market dream; it has existed for as long as I've worked in the field. First it was supposed to be UML-generated code by "architects", then it was supposed to be developers from developing countries, then no-code frameworks, etc.
AI will be a tool, no more, no less. Most likely a good one, but there will still need to be people driving it, guiding it, and fixing things for it.
All these discourses from CEOs are just that: stock market pumping. Tech is the most profitable sector and software engineers are costly, so having investors dream about scale plus lower costs is good for the stock price.
Ah, don't get me wrong: I don't believe it's possible for LLMs to replace 90% (or any significant number) of SWEs with existing technology.
All I'm saying is: why debate what AI is (exoskeleton, co-worker, new life form) when its owners' stated intent is to create a SWE replacement?
If your neighbor is building a nuclear reactor in his shed from a pile of smoke detectors, you don't say "think of this as a science experiment" because it's impossible; you call the police/NRC because of the intent and the actions.
> If your neighbor is building a nuclear reactor in his shed from a pile of smoke detectors, you don't say "think of this as a science experiment" because it's impossible; you call the police/NRC because of the intent and the actions.
Only if you're a snitch loser
If you gave the LLM your carefully written UML, maybe its output would be better, lol. That’s what we’re missing: a mashup of the hype-cycle tools.
Not without some major breakthrough. What's hilarious is that all the developers building these tools are going to be the first ones without jobs. Their kids will be ecstatic: "Tell me again, dad: so you had this awesome, well-paying, easy job and you wrecked it?" "Shut up, kid, and tuck in that flap, there's too much wind in our cardboard box."
Couldn't agree more; isn't that the bizarre thing? "We have this great, intellectually challenging job where we as workers have leverage. How can we completely ruin that while also screwing up every other white-collar profession?"
Why is it bizarre? It's inevitable. After all, AI has not ruined creative professions; it merely disrupted and transformed them. And yes, I fully understand my whole comment here is snarky, but please bear with me.
Let's rewind 4 years to this HN article titled "The AI Art Apocalypse": https://news.ycombinator.com/item?id=34856326
> [...] Artists will still exist, but most likely as hybrid 3d-modellers, AI modelers (Not full programmers, but able to fine-tune models with online guides and setups, can read basic python), and storytellers (like manga artists). It'll be a higher-pay, higher-prestige, higher-skill-requirement job than before. And all those artists who devoted their lives to draw better, find this to be an incredibly brutal adjustment.
Again, replace "Artists" with "coders" and fill in the corresponding replacements.
So, please get in line and adapt. And stop clinging to your "great intellectually challenging job" because you are holding back progress. It can't be that challenging if it can be handled by a machine anyway.
I have a feeling they internally say "not me, I won't be replaced" and just keep moving...
Or they get FY money and fatFIRE.
I'm assuming they all have enough equity that if they actually manage to build an AI capable of replacing themselves, they'll be financially set for the rest of their lives.
"Well son, we made a lot of shareholder value."
Is this the first time workers have directly worked on their own replacement? If so, software development may go down in history as the dumbest profession ever.
If the goal is to reduce the need for SWEs, you don’t need AI for that. I suspect I’m not alone in observing how inefficient companies often are: devs end up spending a lot of time on projects of questionable value, something that seems to happen more often the larger the organization. I recall at one job my manager insisted I delegate building a React app for an internal tool to a team of contractors rather than letting me focus for two weeks and knock it out myself.
It’s always the people management stuff that’s the hard part, but AI isn’t going to solve that. I don’t know what my previous manager’s deal was, but AI wouldn’t fix it.
The funny thing is, I think these tools would work much better if they WEREN'T so insistent on the agentic thing. I find in-IDE AI tools a lot more precise, and I usually move just as fast as with a TUI, with a lot less rework. But Claude is CONSTANTLY pushing me to "one-shot" a big feature while asking me for as little context as possible. I'd much rather it work with me instead of wandering off and writing a thousand lines. It's obviously designed for Anthropic's best interests rather than mine.
Tell it to ask clarifying questions.
I do. But there are a lot of annoying things about it being a TUI. I can't select a block of text in my editor and ask it to do something with it. It doesn't know what I'm looking at. Giving it context feels imprecise because I'm writing out filenames by hand instead of referencing them with the tooling. There are a lot of other small things that I find are better in an IDE.
Where is this "90% less demand for SWEs" going to come from? Are we going to run out of software to write?
Historically, when SWEs became more efficient, we just started making more complicated software (and SWE demand actually increased).
That happens in times of bullish markets and growing economies. Then we want a lot of SWEs.
In times of uncertainty, when things go south, that changes to "we need as few SWEs as possible"; hence the current narrative, with everyone looking to cut costs.
Had GPT-3 emerged 10-20 years ago, the narrative would have been "you can now do 100x more thanks to AI".
I sort of agree that the random pontification and bad analogies aren't super useful, but I'm not sure why you'd believe the intent of the AI CEOs has more bearing on outcomes than, you know, actual utility over time. I mean, those guys are so far out over their skis in terms of investor expectations that theirs is the last opinion I would take seriously as a best-effort prediction.