Comment by cmiles8
14 hours ago
This is the elephant in the room that nobody wants to talk about. AI is dead in the water for the supposed mass labor replacement unless this is fixed.
Summarize some text while I supervise the AI = fine and a useful productivity improvement, but doesn’t replace my job.
Replace me with an AI that makes autonomous decisions out in the wild, and liability-ridden chaos ensues. No company in its right mind would do this.
The AI companies are now in an existential race to address that glaring issue before they run out of cash, with no clear way to solve the problem.
It's increasingly looking like the current AI wave will disrupt traditional search and join the spell-checker as a very useful tool for day-to-day work… but the promised mass labor replacement won't materialize. Most large companies are already starting to call BS on the storyline of AI replacing humans en masse.
Part of the problem is that the word "replacement" kills nuanced thought and starts to create a straw man. No one will be replaced for a long time, but what happens will depend on the shape of the supply and demand curves of labor markets.
If 8 or 9 developers can do the work of 10, do companies choose to build 10% more stuff? Do they make their existing stuff 10% better? Or are they content to continue building the same amount with 10% fewer people?
In years past, I think they would have chosen to build more, but today I think that question has a more complex answer.
There's a middle road where AI replaces half the junior or entry-level roles: the interns and the bottom rung of the org chart.
In marketing, an AI can effortlessly perform basic duties: writing email copy, research, etc. The same goes for programming, graphic design, translation, and so on.
The results will be looked over by a senior team member, but it's already clear that a role with 3 YOE or less could easily be substituted with an AI. It'll be more disruptive than spell check, clearly, even if it doesn't wipe out 50% of the labor market: even 10% would be hugely disruptive.
Not really though:
1. Companies like savings, but they're not dumb enough to just wipe out junior roles and shoot themselves in the foot for future generations of company leaders. Business leaders have been vocal on this point, calling it terrible thinking.
2. In the US and Europe, the work most ripe for automation and AI was long since "offshored" to places like India. If AI does have an impact, it will wipe out the Indian tech and BPO sector before it starts to have a major impact on roles in the US and Europe.
1) Companies are dumb enough to shoot themselves in the foot over a single quarter's financials - they certainly aren't thinking about where their middle management is going to come from in 5 or 10 years.
2) There's plenty of work ripe for automation that's currently being done by recent US grads. I don't doubt offshored roles will also be affected, but there's nothing special about the average entry-level candidate from a state school that'll make them immune to the same trends.
To think companies worry about protecting the talent supply chain is to ignore what's been in front of your eyes for the past 5-10 years. We were already in a crisis of seniority where every single role was "senior only", and AI is only going to make that worse.
1. Sure they will! It's a prisoner's dilemma. Each individual company is incentivized to minimize labor costs. Who wants to be the company that pays extra for humans in junior roles and then gets that talent poached away?
2. Yes, absolutely.
As far as 1 goes, how do you explain American deindustrialization and, e.g., its auto industry?
I think you're really overstating things here. Entry-level positions are the tier from which replacements for senior positions eventually come. They don't do a lot, sure, but they are cheap and easily churnable. This is precisely NOT the place companies focus on for cutbacks or downsizing. AI being acceptable at replacing unskilled labor doesn't mean it WILL replace it. It has to make business sense to implement it.
If they're cheap and churnable, they're also the easiest place to see substitution.
Pre-AI, Company A hired 3 copywriters a year for their marketing team. Post-AI, they hire 1 who manages some prompting and makes some spot-tweaks, saving $80K a year and improving the turnaround time on deliverables.
My original comment isn't saying the company is going to fire the 3 copywriters on staff, but any company looking at hiring entry-level roles for tasks that AI is already very good at would be silly to not adjust their plans accordingly.
It doesn’t have to replace us, just make us more productive.
Software is demand-constrained, not supply-constrained. Demand for novel software is down; we already have tons of useful software for anything you can think of. Most developers at Google, Microsoft, Meta, Amazon, etc. barely do anything. Productivity is approaching zero. Hence the corporations are already outsourcing.
The number of workers needed will go down.
The narrative about AI replacing humans is just a way to say 'we became 2x more productive' instead of saying 'we cut 50% of jobs', which sounds better for investors. The real reason for the job cuts is COVID overhiring plus interest rates going up. If you remember, Twitter did its job cuts without any AI-related narrative.
Well done, sir, you seem to think with a clear mind.
Why do you think you are able to evade the noise while others seem not to? I'm genuinely curious. I'm convinced it's down to the fact that the people who 'get it' have a particular way of thinking that others don't.
1. You are massively assuming less-than-linear improvement; even linear improvement over 5 years puts LLMs in a different category.
2. More efficient means needing fewer people, which means redundancy, which means a cycle of low demand.
OK. Let's take what you've stated as a truth.
So where is the labor-force-replacement option on Anthropic's website? Dario isn't shy about these enormous claims of replacing humans. He's made the claim yet shows zero proof. But if Anthropic could replace anyone reliably today, why would they let you or me take that revenue? I mean, they are the experts, right? The reality is these "improvement" metrics are built on sand. They mean nothing and are marketing. Show me any model replacing a receptionist today. Trivial, they say, yet they can't do it reliably. And... it costs more at these subsidized prices.
1. It has nothing to do with 'improvement'. You can improve it to be a little less susceptible to injection attacks, but that's not the same as solving it. If it wires all your money to a scammer only 0.1% of the time, are you going to be satisfied with that level of "improvement"?
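Just to make that concrete, here is a rough back-of-the-envelope sketch in Python (all numbers assumed, purely for illustration) of why a small per-action failure rate is not the same as a solved problem once an agent acts autonomously at volume:

    # Assumed numbers, purely illustrative: a small per-action failure rate
    # compounds once an agent operates autonomously at volume.
    per_action_failure = 0.001   # the "only 0.1% of the time" scenario
    actions = 1_000              # autonomous actions over some period

    # Probability of at least one catastrophic failure across the run
    p_any_failure = 1 - (1 - per_action_failure) ** actions
    print(f"P(at least one failure over {actions} actions) = {p_any_failure:.1%}")
    # -> roughly 63%, which is why "a bit less susceptible" doesn't cut it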
LLMs haven't been improving for years.
Despite all the productizing and the benchmark gaming, fundamentally all we've gotten is some low-hanging performance improvements (MoE and such).
And why would it materialize? Anyone who has used even modern models like Opus 4.6 in very long and extensive chats about concrete topics KNOWS that this LLM form of Artificial Intelligence is anything but intelligent.
You can see the cracks appear quite quickly, actually, and you can almost feel how trained patterns are regurgitated with some variance, without actually contextualizing and connecting things. More guardrailing, like web sources or attachments, just narrows down the possible patterns, but you never get the feeling that the bot understands. Your own prompting can also significantly affect opinions and outcomes, no matter the factual reality.
The great irony is that this episode is exposing those who are truly intelligent and those who are not.
Folks, feel free to screenshot this ;)
It sure did: I never thought I would abandon Google Search, but I have, and it's the AI elements that have fundamentally broken my trust in what I used to take very much for granted. All the marketing and skewing of results and Amazon-like lying for pay didn't do it, but the full-on dive into pure hallucination did.