No - it's that they fired their vets in high cost areas and kept them in low cost areas.
A large number of vets can now choose to reapply for their old job (or similar job) at a fraction of the price with their pension/benefits reduced and the vets in low cost centers now become the SMEs. In many places in the company they were not taken seriously due to both internal politics, but also quite a bit of performative "output" that either didn't do anything or had to be redone.
Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.
You know when someone is singing the praises about AI and they get asked "if you're so much more productive with AI, what have you built with it"? Well I think a bunch of companies are asking this same question to their employees and realising that the productivity gains they are betting on were overhyped.
LLMs can be a very useful tool and will probably lead to measurable productivity increases in the future, but in their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!
Which measure? When folks say something is more "efficient": flying is more time-efficient, but you trade away other efficiencies. Efficiency, like productivity, needs a second word attached to it to communicate properly.
What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved for the company? Time for the client? Shareholder value (lame)?
I don't know the answer but this year (2026) I'm gonna see if LLM is better at tax prep than my 10yr CPA. So that test is my time vs $6k USD.
Time could be very expensive, as mistakes on taxes can be fraud resulting in prison time. Mostly they understand people make mistakes, but those need to look like honest mistakes, and an LLM's may not. Remember, you sign your taxes as correct to the best of your knowledge. With a CPA you are admitting you outsourced understanding to an expert, something they accept. If you sign alone, you are saying you understand it all, even if you don't.
These days productivity at a macroeconomic scale is usually cited in something like GDP per hour worked.
The most recent BLS number, for the last quarter of '25, was an annualized rate of 5.4%.
The historic annual average is around 2%.
It’s a bit early to draw a conclusion from this. Also it’s not an absolute measure. GDP per hour worked. So, to cut through any proxy factors or intermediating signals you’d really need to know how many hours were worked, which I don’t have to hand.
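For reference, the arithmetic behind a headline number like that is simple. A minimal sketch (the 1.32% quarterly figure below is purely illustrative, not the actual BLS datum):

```python
# Labor productivity is output per hour; BLS headline rates are
# quarter-over-quarter growth compounded over four quarters.

def productivity(gdp: float, hours_worked: float) -> float:
    """GDP per hour worked."""
    return gdp / hours_worked

def annualize(quarterly_growth: float) -> float:
    """Compound a quarterly growth rate into an annualized rate."""
    return (1 + quarterly_growth) ** 4 - 1

# An illustrative ~1.32% quarterly gain annualizes to roughly 5.4%.
print(round(annualize(0.0132) * 100, 1))  # → 5.4
```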
That said, in general macro sense, assuming hours worked does not decrease, productivity +% and gdp +% are two of the fundamental factors required for real world wage gains.
If you're looking for signals in either direction on AI's influence on the economy, these are numbers to watch, among others. The Federal Reserve, where the Chair reports after each meeting, is (IMO) one of the most convenient places to get very fresh hard numbers combined with cogent analysis, plus usually some Q&A from the business press asking at least some of the questions I'd want to ask.
If you follow these fairly accessible speeches after meetings, you’ll occasionally see how lots of the things in them end up being thematic in lots of the stories that pop up here weeks or months later.
Economy-wide productivity can be measured reasonably well, although there are a few different measures [1]. The big question I guess is whether AI will make a measurable impact there. Historically tech has had less impact than people thought it would, as noted in Robert Solow's classic quip that "You can see the computer age everywhere but in the productivity statistics". [2]
Number of features shipped. Traction metrics. Revenue per product. Ultimately business metrics. For example, tax prep effectiveness would be a proper experiment tied to specific metrics.
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.
It's like trying to make fusion happen just by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
> I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
I think for a lot of folks it basically comes down to just using AI to make the tasks they have to do easier and to free up time for themselves.
I’d argue the majority use AI this way. The minority “10x” workers who are using it to churn through more tasks are the motivated ones driving real business value being added - but let’s be honest, in a soulless enterprise 9-5 these folks are few and far between.
No. They're firing high paid seniors and replacing them with low pay juniors. This is IBM we're talking about.
The "limits of AI" bit is just smokescreen.
Firing seniors:
> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
New workers will use AI:
> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.
Where does it say those cuts were senior software developers?
Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.
“AI will steal your job” never made sense. If your company is doing badly, sure, maybe you fire people after automating their job. But we're in a growth-oriented economic system. If the company is doing well and AI increases productivity, you actually will hire more people, because every person is that much more of a return on investment.
As a senior engineer, sometimes the system shows I did nothing because I was helping others. Sometimes I get the really hard problem, though "the is spelled teh" type bugs are more common than thread race conditions, and a lot faster to solve.
No one has built business AI that is flat-out correct to the standards of a high-redundancy human organization.
Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.
The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.
I'm with you. I own a business and have created multiple tools for myself that collectively save me hours every month. What were boring, tedious tasks now just get done. I understand that the large-scale economic data are much less clear about productivity benefits, but in my individual case they could not be more apparent.
The only thing the comments told me is that people lack the judgement and taste to do it themselves. It's not hard: identify a problem that's niche enough that you can solve it.
Every hype AI post is like this. “I’m making $$$ with these tools and you’re ngmi”
I completely understand the joys of a few good months but this is the same as the people working two fang jobs at the start of Covid. Illusionary and not sustainable.
I'm not doubting you or anything, but you just proved the point above by saying you have a successful project without even mentioning which project that is.
Perhaps I'm being cynical, but could they be leaving out some detail? Perhaps they're replacing even more older workers with entry level workers than before? Maybe the AI makes the entry level workers just as good-- and much cheaper.
> In the HR department, entry-level staffers now spend time intervening when HR chatbots fall short, correcting output and talking to managers as needed, rather than fielding every question themselves.
The job is essentially changing from "You have to know what to say, and say it" to "make sure the AI says what you know to be right"
I always thought the usual 'they only hire seniors now' was a questionable take. If anything, all you need is a semi-warm-blooded human to hit retry until the agents get something functional. It's more likely tech will transform into an industry of lowly paid juniors imho, if it hasn't already started. Senior-level skill is more replaceable, not just because it's cheaper to hire AI-augmented juniors but because juniors are more adaptable to the new dystopia, since they never experienced anything else. They are less likely to get hung up on some code not being 'best practice' or 'efficient' or even 'correct'. They will just want to get the app working regardless of what goes in the sausage, etc.
Exactly, that's why counting job postings is a terrible proxy for gauging market conditions. Companies may hire anywhere from 0 to 100s of people through the same JD.
The article said they called for tripling junior hires but cut 1,000 jobs a month later, "so the number of jobs stays roughly the same".
Certainly they didn't mean 1,000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving cost/making profit in the age of AI? Totally contrary to what other companies believe? Sounds quite insane to me!
The title could be dead wrong; the tripling of junior jobs might not be due to the limits of AI, but to AI raising the productivity of juniors to that of a mid or senior (or at least 2-3x-ing their output), making juniors an appealing hire for increasing the company's output relative to competitors who aren't hiring in response to AI improvements. Hope this is the case, and hope it happens broadly across the economy. While the gutter press fear-mongers about job losses, if AI makes the average employee much more useful (even if it's via newly created roles), it's conceivable there's a jobs/salaries boom, including among those who 'lose their job' and move into a new one!
IBM is one of those companies that measures success by complexity. Meaning if it's complicated, they make money with consultants. If it's simple, they bundle it with other complex solutions that require consulting.
I had the chance to try an IBM internal AI. It was a normal chat interface where one could select models up to Sonnet 4.5. I have not seen anything agentic. So there is that.
Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".
To a non-technical individual IBM is still seen as a reputable brand (their consulting business would've been bankrupt long ago otherwise) and they will absolutely pay attention.
Agreed. They could have owned the home computer market, but were out-manoeuvred by a couple of young programmers. They are hardly the company you want to look to for guidance on the future.
Doubt it. Unless we go through another decade of ZIRP tied to a newly invented, hyped technology that lacks specialists, plus the discovery of new untapped markets, there's not gonna be any massive demand spike for junior labor in tech that can't be met, causing wages to shoot up.
The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.
And those people probably aren’t developers by trade, just power users who superficially understand the moving parts but who cannot write code themselves.
Technology's entire job is to make it less work to accomplish something, and therefore easier and cheaper. In some cases that will make it possible to do things you couldn't do before, but in many cases it'll just end up causing the value of said labor to fall. The problem isn't change, but the rate of change, and the fact it's affecting our own field rather than someone else's.
They hire juniors, give them Claude Code and some specs, and save a mid/senior dev's salary. I believe coding is over for SWEs by the end of 2027, but it will take time to diffuse through the economy, hence they still need some cheap labour for a few years. Given the H-1B ban, this is one way to get it without offshoring.
IBM has practiced ageism for decades with the same playbook. AI is just the latest excuse. Fire a wide enough swath so it isn’t all old employees and then only hire entry level positions. Often within the same year. Repeat.
An AI model has no drive or desire, or embodiment for that matter. Simply put, they don't exist in the real world and don't have the requirements or urgency to do anything unless prompted by a human, because, you know, survival under capitalism. Until they have to survive and compete like the rest of us and face the same pressures, they are going to be forever relegated to mere tools.
Tbh, getting good results from ai requires senior level intuition. You can be rusty as hell and not even middling in the language being used, but you have to understand data structures and architecture more than ever to get non-shit results. If you just vibe it, you’ll eventually end up with a mountain of crap that works sort of, and since you’re not doing the coding, you can’t really figure it out as you go along. Sometimes it can work to naively make a thing and then have it rewritten from scratch properly though, so that might be the path.
100% accurate. The architect matters so much more than people think. The most common counter argument to this I've seen on reddit are the vibe coders (particularly inside v0 and lovable subreddits) claiming they built an app that makes $x0,000 over a weekend, so who needs (senior) software engineers and the like?
A few weeks later, there's almost always a listing for a technical co-founder or a CTO with experience on their careers page or LinkedIn :)))
Just to be clear: from my standpoint it's the worst period ever to be a junior in tech. You are not "fucked" if you are a junior, but hard times are ahead of you.
IMO with the latest generation (gpt codex 5.3 and claude 4.6) most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at. When I have a question about a co-worker's project, I no longer ask them; instead I immediately let Copilot have a look at the repo, and it will be faster and more accurate at identifying the root cause of issues than the humans who actually worked on the project. I've yet to find a scenario where these models fail. I'm sure there are still edge cases, but I'm starting to doubt humans will matter in them for long. At this point we really just need better harnesses for these models; in terms of capabilities they may as well take over now.
Why is that bad? You write better code when you actually understand the business domain and the requirement. It's much easier to understand it when you get it direct from the source than filtered down through dozens of product managers and JIRA tickets.
Having had to support many of these systems for sales or automation or video production pipelines: as soon as you dig under the covers, you realize they are a hot mess of amateur code that _barely_ functions as long as you don't breathe on it too hard.
Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is extreme evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain specific knowledge in almost every other industry. Coders were once cowboys, now we're in the Upton Sinclair version of the industry, and soon we'll enter into regular honest professional engineering like every other new technology ultimately has.
Not sure why this is being downvoted. It’s spot on imo. Engineers who don’t want to understand the domain and the customers won’t be as effective in an engineering organization as those who do.
It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).
Customer interaction has imo always been one of the most important parts in good engineering organizations. Delegating that to Product Managers adds unnecessary friction.
Having spent more hours than I care to count struggling to control my facial expressions in client-facing meetings your assertion that that friction is unnecessary is highly questionable. Having a "face man" who's sufficiently tech literate to ask decent questions manage the soft side of client relations frees up a ton of engineering resources that would otherwise be squandered replying to routine emails.
> The "AI will replace all junior devs" narrative never accounted for the fact that you still need humans who understand the business domain, can ask the right questions, and can catch when the AI is confidently wrong.
You work with junior devs that have those abilities? Because I certainly don't.
I pay $20 for OpenAI and Codex makes me incredibly productive. With very careful prompts aimed at tiny tasks, I can review, fix, and get a lot of things done.
I'd happily pay up to $2k/month for it if I were left with no choice, but I don't think it will ever get that expensive, since you can run models locally with much the same result.
That being said, my outputs are similarish in the big picture. When I get something done, I typically don’t have the energy to keep going to get it to 2x or 3x because the cognitive load is about the same.
However I get a lot of time freed up which is amazing because I’m able to play golf 3-4 times a week which would have been impossible without AI.
Productive? Yes. Time saved? Yes. Overall outputs? Similar.
I would like to know what models people are running locally that get the same results as a $20/month ChatGPT plan
Same? Not quite as good as that. But Google's Gemma 3 27B is highly similar to their last Flash model. The latest Qwen3 variants are very good (for my needs at least, they're the best open coders), but really, here's the thing:
There are so many varieties, specialized for different tasks or simply differing in performance.
Maybe we'll get to a one-size-fits-all at some point, but for now trying out a few can pay off. It also starts to build a better sense of the ecosystem as a whole.
For running them: if you have an Nvidia GPU with 8GB of VRAM, you can probably run a bunch of them, quantized. It gets a bit esoteric once you get into quantization varieties, but generally speaking you should find out what sort of integer and float math your GPU has optimized support for, then choose the largest quantized model that matches that support and still fits in VRAM. Most often that's what will perform best in both speed and quality, unless you need to run more than one model at a time.
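If it helps, here's the back-of-envelope math for the "largest model that fits" part. The 1.2 overhead factor is a rough guess for KV cache and activations; it varies with context length:

```python
# Back-of-envelope VRAM check: quantized weights take params * bits / 8 bytes,
# plus headroom for KV cache and activations (the 1.2 factor is a guess).

def weight_gb(params_billion: float, bits: int) -> float:
    """Approximate size of the quantized weights in GB."""
    return params_billion * bits / 8

def fits_in_vram(params_billion: float, bits: int, vram_gb: float,
                 overhead: float = 1.2) -> bool:
    """True if weights plus rough overhead fit in the given VRAM."""
    return weight_gb(params_billion, bits) * overhead <= vram_gb

# On an 8 GB card: a 7B model at 4-bit fits, a 27B model does not.
print(fits_in_vram(7, 4, 8))   # → True  (~3.5 GB of weights)
print(fits_in_vram(27, 4, 8))  # → False (~13.5 GB of weights)
```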
To give you a reference point on model choice, performance, GPU, etc.: one of my systems runs an Nvidia 4080 with 16GB VRAM. Using Qwen 3 Coder 30B, heavily quantized, I can get about 60 tokens per second.
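As a rough sanity check on that figure: token generation is usually memory-bandwidth bound. This sketch assumes ~716 GB/s bandwidth for a 4080 and one full read of the active weights per token; both are approximations, not measurements:

```python
# Bandwidth-bound ceiling: each generated token reads (roughly) all active
# weights once, so tok/s <= bandwidth / active weight bytes.
# The 716 GB/s figure and the parameter counts below are assumptions.

def tokens_per_sec_ceiling(bandwidth_gb_s: float,
                           active_params_billion: float,
                           bits: int) -> float:
    weights_gb = active_params_billion * bits / 8
    return bandwidth_gb_s / weights_gb

# A dense 30B model at 4-bit would cap out near ~48 tok/s on a 4080.
# Qwen3-30B-A3B activates only ~3B params per token, so its ceiling is
# roughly 10x higher, which makes an observed ~60 tok/s unsurprising.
print(round(tokens_per_sec_ceiling(716, 30, 4)))  # → 48
print(round(tokens_per_sec_ceiling(716, 3, 4)))   # → 477
```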
The run-at-home suggestion was in the context of $2k/mo. At that price you can make your money back on self-hosted hardware at a much more reasonable pace than at $20/mo (or even $200).
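The break-even arithmetic is trivial; the $4,000 rig price below is a made-up illustrative figure, not a quote:

```python
# Months until self-hosted hardware pays for itself versus a subscription.
def payback_months(hardware_cost: float, monthly_subscription: float) -> float:
    return hardware_cost / monthly_subscription

rig = 4000  # hypothetical one-time build cost
print(payback_months(rig, 2000))  # → 2.0 months vs a $2k/mo plan
print(payback_months(rig, 20))    # → 200.0 months vs the $20/mo plan
```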
Well, there's an open-source GPT model you can run locally. I don't think running models locally is all that cheap, considering top-of-the-line GPUs used to be $300 and now you are lucky to get the best GPU for under $2,000. The better models require a lot more VRAM. Macs can run them pretty decently, but then you are spending $5,000-plus when you could have just bought a rig with a 5090 and mediocre desktop RAM, because Sam Altman has ruined the RAM pricing market.
I got some decent mileage out of aider and Gemma 27B. The one-shot output was a little worse, but I don't have to worry about paying per token or hitting plan limits, so I felt more free to let it devise a plan, run it in a loop, etc.
Not having to worry about token limits is surprisingly cognitively freeing. I don’t have to worry about having a perfect prompt.
And what hardware they needed to run the model, because that's the real pinch in local inference.
There are no models that you can run locally that'll match a frontier LLM
Marx in his wildest nightmare couldn't have anticipated how badly the working class would sell itself short with the advent of AI. Friend, you should be doing more than golf…
Bro, nobody wants to hear about the hustle anymore. We're in the second half of this decade now.
> nobody wants to hear about the hustle anymore
Plenty of people are still ambitious and being successful.
The title is a bit misleading. Reading the article, the argument seems to be that entry-level applicants (are expected to) have the highest AI literacy, so they want them to drive AI adoption.
At least today, I expect this will fail horribly. The challenge today isn't AI literacy, in my experience; it's the domain knowledge required to keep LLMs on the rails.
People literate in AI, but inexperienced in all other facets. What could go wrong!
Sounds like the first step of a galactic scale fuck up
"Galactic scale" and "Fuck Up" are on brand for IBM.
It is IBM after all
dotcom implosion redux
I hope they have a good 10 years experience in that "literacy".
I just run sub-agents in parallel. Yesterday I used Codex for the first time. I spun up 350,640 agents and got 10 years of experience in 15 minutes.
25 years of LLM experience for a mid-level
"AI is going to wipe out junior developers!"
They actually hire more junior developers
"Uhh .. to adopt AI better they're hiring more junior developers!"
This cope is especially low quality with the context that this is just another purge of older workers at IBM.
Some stats are trickling out in my company. Code heavy consulting projects show about 18% efficiency gains but I have problems with that number because no one has been able to tell me how it was calculated. Story points actual vs estimated is probably how it was done but that’s nonsensical because we all know how subjective estimates and even actuals are. It’s probably impossible to get a real number that doesn’t have significant “well I feel about x% more efficient…”
More interesting imo would be a measure of maintainability. I've heard that code that's largely written by AI is rarely remembered by the engineer that submitted it, even a week after merging.
You're almost "locked in" to using more AI on top of it then. It may also make it harder to give estimates to non-technical staff on how long it'd take to make a change or implement a new feature
I don't know how to measure maintainability, but the AI-generated code I've seen in my projects is pretty plain-vanilla standard patterns with comments. So, less of a headache than a LOT of human code I've seen. Also, one thing the agents are good at, at least in my experience so far, is documenting existing code. This goes a long way in maintenance; it's not always perfect, but as the saying goes, documentation is like sex: when it's good it's great, and when it's bad it's better than nothing.
chasd00 did mention that this was for consulting projects, where presumably there's a handover to another team after a period of time. Maintainability was never a high priority for consultants.
But in general I agree with your point.
> engineer that submitted it
This is a poor metric as soon as you reach a scale where you've hired an additional engineer, where 10% annual employee turnover reflects > 1 employee, much less the scale where a layoff is possible.
It's also only a hope as soon as you have dependencies that you don't directly manage like community libraries.
[dead]
Hint: Make sure the people giving you the efficiency improvement numbers don't have a vested interest in giving you good numbers. If they do, you cannot trust the numbers.
Reminds me of my last job where the team that pushed React Native into the codebase were the ones providing the metrics for "how well" React Native was going. Ain't no chance they'd ever provide bad numbers.
better than lines of code at least!
Is this for their in-house development or for their consulting services?
Because the latter would still be indicative of AI hurting entry level hiring since it may signal that other firms are not really willing to hire a full time entry level employee whose job may be obsoleted by AI, and paying for a consultant from IBM may be a lower risk alternative in case AI doesn't pan out.
And if it is for consulting, I doubt very seriously they will be based in the US. You can’t be price-competitive hiring an entry-level consultant in the US, and no company is willing to pay the bill rate for US-based entry-level consultants unless their email address is @amazon.com or @google.com.
Source: current (full time) staff consultant at a third party cloud consulting firm and former consultant (full time) at Amazon.
Why would Amazon bring on a full-time consultant instead of just hiring you?
4 replies →
One might ask what value seniors hold if their expertise from the junior stage is obsolete. Maybe the new junior will just be reining in the LLM that does the work, and senior-level knowledge and compensation will rot away as those people retire without replacement.
Huh?
7 replies →
[dead]
Interesting given the current age discrimination lawsuit:
https://www.cohenmilstein.com/case-study/ibm-age-discriminat...
Another one? What is it with IBM? They must really save lots of money, in a way no one else has figured out, by firing people at 50. This is like the 3rd or 4th one I've heard of from them.
It’s not very hard. Take a guy making $200k and 30% benefit overhead and replace with two offshore people at $50k total comp.
2 replies →
No - it's that they fired their vets in high cost areas and kept them in low cost areas.
A large number of vets can now choose to reapply for their old job (or similar job) at a fraction of the price with their pension/benefits reduced and the vets in low cost centers now become the SMEs. In many places in the company they were not taken seriously due to both internal politics, but also quite a bit of performative "output" that either didn't do anything or had to be redone.
Nothing to do with AI - everything to do with Arvind Krishna. One of the reasons the market loves him, but the tech community doesn't necessarily take IBM seriously.
IBM has cut ~8,000 jobs in the past year or so.
Sounds like business as usual to me, with a little sensationalization.
I realized the AI replacing developers hype was all hype after watching this.
Why Replacing Developers with AI is Going Horribly Wrong https://m.youtube.com/watch?v=WfjGZCuxl-U&pp=ygUvV2h5IHJlcGx...
A bunch of big companies took big bets on this hype and got burned badly.
You know when someone is singing the praises about AI and they get asked "if you're so much more productive with AI, what have you built with it"? Well I think a bunch of companies are asking this same question to their employees and realising that the productivity gains they are betting on were overhyped.
LLM's can be a very useful tool and will probably lead to measurable productivity increases in the future, at their current state they are not capable of replacing most knowledge workers. Remember, even computers as a whole didn't measurably impact the economy for years after their adoption. The real world is a messy place and hard to predict!
> measurable productivity
Which measure? Like when folks say something is more "efficient": it's more time-efficient to fly, but you trade away other kinds of efficiency. Efficiency, like productivity, needs a second word with it to communicate properly.
What's more productive? Lines of code (a weak measure)? Features shipped? Bugs fixed? Time saved for the company? Time saved for the client? Shareholder value (lame)?
I don't know the answer, but this year (2026) I'm gonna see if an LLM is better at tax prep than my CPA of 10 years. So that test is my time vs $6k USD.
Time could be very expensive, as mistakes on taxes can be fraud resulting in prison time. Mostly they understand that people make mistakes, but the mistakes need to look like honest ones, and an LLM's may not. Remember, you sign your taxes as correct to the best of your knowledge. With a CPA you are admitting you outsourced understanding to an expert, something they accept; if you sign alone, you are saying you understand it all, even if you don't.
These days productivity at a macroeconomic scale is usually cited in something like GDP per hour worked.
The most recent BLS figure, for the last quarter of ‘25, was an annualized rate of 5.4%.
The historic annual average is around 2%.
It’s a bit early to draw a conclusion from this. Also, it’s not an absolute measure: it’s GDP per hour worked. So to cut through any proxy factors or intermediating signals you’d really need to know how many hours were worked, which I don’t have to hand.
That said, in general macro sense, assuming hours worked does not decrease, productivity +% and gdp +% are two of the fundamental factors required for real world wage gains.
If you’re looking for signals in either direction on AI’s influence on the economy, these are numbers to watch, among others. The Federal Reserve, where the Chair reports after each meeting, is (IMO) one of the most convenient places to get very fresh hard numbers combined with cogent analysis, and usually some Q&A from the business press asking at least some of the questions I’d want to ask.
If you follow these fairly accessible speeches after meetings, you’ll occasionally see how lots of the things in them end up being thematic in lots of the stories that pop up here weeks or months later.
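To make the measure concrete, here's a toy sketch of "GDP per hour worked" and how a quarterly change gets annualized. The figures are made up for illustration, not real BLS data:

```python
# Toy sketch of labor productivity as GDP per hour worked,
# with illustrative (made-up) numbers -- not real BLS data.

def labor_productivity(gdp, hours_worked):
    """Output per hour worked: GDP divided by total hours."""
    return gdp / hours_worked

def annualized_growth(prev, curr, periods_per_year=4):
    """Compound a one-quarter growth rate into an annualized rate."""
    quarterly = curr / prev - 1
    return (1 + quarterly) ** periods_per_year - 1

# Hypothetical quarter-over-quarter figures (billions of dollars,
# billions of hours):
p0 = labor_productivity(gdp=25_000, hours_worked=65.0)
p1 = labor_productivity(gdp=25_550, hours_worked=65.2)

rate = annualized_growth(p0, p1)
print(f"annualized productivity growth: {rate:.1%}")
```

Note that the hours term in the denominator is exactly the piece you'd need to pin down before reading anything into a headline number like 5.4%.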
1 reply →
Economy-wide productivity can be measured reasonably well, although there are a few different measures [1]. The big question I guess is whether AI will make a measurable impact there. Historically tech has had less impact than people thought it would, as noted in Robert Solow's classic quip that "You can see the computer age everywhere but in the productivity statistics". [2]
[1] https://www.oecd.org/en/topics/sub-issues/measuring-producti...
[2] https://en.wikipedia.org/wiki/Productivity_paradox
Try Agent Zero. You can upload your bank (or credit card) statements as CSV etc., and it can then analyse them.
Number of features shipped. Traction metrics. Revenue per product. Ultimately business metrics. For example, tax prep effectiveness would be a proper experiment tied to specific metrics.
[dead]
I used to write bugs in 8 hours. Now I write the same bugs in 4. My productivity doubled. \s
22 replies →
I bet you the predictions are largely correct but technology doesn't care about funding timelines and egos. It will come in its own time.
It's like trying to make fusion happen only by spending more money. It helps, but it doesn't fundamentally change the pace of true innovation.
I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
Well, the predictions are tied to the timelines. If someone predicts that AI will take over writing code sometime in the future I think a lot of people would agree. The pushback comes from suggesting it's current LLMs and that the timeline is months and not decades.
> I've been saying for years now that the next AI breakthrough could come from big tech, but it has just as likely a chance of coming from a smart kid with a whiteboard.
It comes from the company best equipped with capital and infra.
If some university invents a new approach, one of the nimble hyperscalers / foundation model companies will gobble it up.
This is why capital is being spent. That is the only thing that matters: positioning to take advantage of the adoption curve.
2 replies →
[dead]
I think for a lot of folks it basically comes down to just using AI to make the tasks they have to do easier and to free up time for themselves.
I’d argue the majority use AI this way. The minority “10x” workers who are using it to churn through more tasks are the motivated ones driving real business value being added - but let’s be honest, in a soulless enterprise 9-5 these folks are few and far between.
Sure, but then why haven't we seen a drastic increase in single-person startups?
Why were there fewer games launched on Steam this January than last?
24 replies →
[dead]
No. They're firing high paid seniors and replacing them with low pay juniors. This is IBM we're talking about.
The "limits of AI" bit is just smokescreen.
Firing seniors:
> Just a week after his comments, however, IBM announced it would cut thousands of workers by the end of the year as it shifts focus to high-growth software and AI areas. A company spokesperson told Fortune at the time that the round of layoffs would impact a relatively low single-digit percentage of the company’s global workforce, and when combined with new hiring, would leave IBM’s U.S. headcount roughly flat.
New workers will use AI:
> While she admitted that many of the responsibilities that previously defined entry-level jobs can now be automated, IBM has since rewritten its roles across sectors to account for AI fluency. For example, software engineers will spend less time on routine coding—and more on interacting with customers, and HR staffers will work more on intervening with chatbots, rather than having to answer every question.
Where does it say those cuts were senior software developers?
Obviously they want new workers to use AI but I don't really see anything to suggest they're so successful with AI that they're firing all their seniors and hiring juniors to be meatbags for LLMs.
5 replies →
Meh, I think a lot of companies just wanted an excuse to do layoffs without the bad press, and AI was convenient.
“AI will steal your job” never made sense. If your company is doing badly, sure, maybe you fire people after automating their jobs. But we’re in a growth-oriented economic system. If the company is doing well and AI increases productivity, you actually will hire more people, because every person is that much more of a return on investment.
> "if you're so much more productive with AI, what have you built with it"
If my boss asked me a question like this my reply would be "exactly what you told me to build, check jira".
If you want to know if I'm more productive - look at the metrics. Isn't that what you pay Atlassian for? Maybe you could ask their AI...
As a senior engineer, sometimes the system shows I did nothing because I was helping others. Sometimes I get the really hard problems; "'the' is spelled 'teh'" type bugs are more common than thread race conditions, but a lot faster to solve.
1 reply →
No one has built business AI that is flat-out correct to the standards of a high-redundancy human organization.
Individuals make mistakes in air traffic control towers, but as a cumulative outcome it's a scandal if airplanes collide midair. Even in contested airspace.
The current infrastructure never gets there. There is no improvement path from MCP to air traffic control.
It's hard work and patience and math.
[flagged]
Every time someone says something like that, there is no link to the product. Maybe because it doesn't exist?
10 replies →
Sounds nice, for how many years have you had that annual recurring revenue so far?
1 reply →
I'm with you. I own a business and have created multiple tools for myself that collectively save me hours every month. What were boring, tedious tasks now just get done. I understand that the large-scale economic data are much less clear about productivity benefits, in my individual case they could not be more apparent.
2 replies →
Has anyone noticed Amazon or AWS shipping features faster than their pre-GenAI baseline? I haven't
1 reply →
The only thing the comments told me is that people lack the judgement and taste to do it themselves. It's not hard: identify a problem that's niche enough that you can solve it.
Stop arguing on HN and get to building.
Every AI hype post is like this. “I’m making $$$ with these tools and you’re ngmi.” I completely understand the joys of a few good months, but this is the same as the people working two FAANG jobs at the start of Covid. Illusory and not sustainable.
2 replies →
I'm not doubting you or anything, but you just proved the point above by saying you have a successful project without even mentioning which project it is.
1 reply →
Cool! Can we see it?
1 reply →
Nice, yeah I feel like there's a big opportunity for tech workers who are product-adjacent to use LLMs to get up to speed building SaaS etc.
Are you worried by any of those claims about SaaS being dead because of AI? lol
5 replies →
Details would help your argument, since many did the same thing before the AI wave...
Is the business 3 months old now?
5 replies →
Perhaps I'm being cynical, but could they be leaving out some detail? Perhaps they're replacing even more older workers with entry level workers than before? Maybe the AI makes the entry level workers just as good-- and much cheaper.
https://archive.today/D6Kyc
Yes, junior candidates lacking the knowledge and wisdom to redirect an LLM, that's who will unlock the mythical AI productivity.
> In the HR department, entry-level staffers now spend time intervening when HR chatbots fall short, correcting output and talking to managers as needed, rather than fielding every question themselves.
The job is essentially changing from "You have to know what to say, and say it" to "make sure the AI says what you know to be right"
I always thought the usual 'they only hire seniors now' was a questionable take. If anything, all you need is a semi-warm-blooded human to hit retry until the agents get something functional. It's more likely tech will transform into an industry of lowly paid juniors imho, if it hasn't already started. Senior-level skill is more replaceable, not just because it's cheaper to hire juniors augmented with AI, but because juniors are more adaptable to the new dystopia, since they never experienced anything else. They are less likely to get hung up on some code not being 'best practice' or 'efficient' or even 'correct'. They will just want to get the app working, regardless of what goes in the sausage, etc.
Probably not on the IBM jobs site yet, where the number of entry level jobs is low compared to the size of the company (~250k):
https://www.ibm.com/careers/search?field_keyword_18[0]=Entry...
Total: 240
United States: 25
India: 29
Canada: 15
Aren't those generic job openings? Like, a junior SWE role only needs a single generic posting for all positions.
Exactly, that's why counting job postings is a terrible proxy for gauging market conditions. Companies may hire anywhere from 0 to 100s of people through the same JD.
[dead]
The article said they called for tripling junior hires but cut 1,000 jobs a month later, “so the number of jobs stays roughly the same”.
Certainly they didn’t mean 1,000 junior positions were cut. So what they really want to say is that they cut senior positions as a way of saving cost/making profit in the age of AI? Totally contrary to what other companies believe. Sounds quite insane to me!
The title could be dead wrong; the tripling of junior jobs might not be due to the limits of AI, but because AI increases the productivity of juniors to that of a mid or senior (or at least 2-3x's their output), making hiring juniors an appealing way to grow the company's output relative to competitors who aren't hiring in response to AI improvements. Hope this is the case, and hope it happens broadly across the economy. While the gutter press fear-mongers about job losses, if AI makes the average employee much more useful (even if it's via newly created roles), it's conceivable there's a jobs/salaries boom, including among those who 'lose their job' and move into a new one!
IBM is one of those companies that measures success by complexity. Meaning if it's complicated, they make money with consultants. If it's simple, they bundle it with other complex solutions that require consulting.
I had the chance to try an IBM internal AI. It was a normal chat interface where one could select models up to Sonnet 4.5. I have not seen anything agentic. So there is that.
Brings a new angle on the old joke: "Actually, Indians"
Bold move.
Not because it's wrong, but because it risks initiating the collapse of the AI bubble and the whole "AI is gonna replace all skilled work, any day now, just give us another billion".
Seems like IBM can no longer wait for that day.
Is IBM invested big in LLMs? I don't get the impression they have much to lose there.
They said they're going to invest like $150B over five years. Which is quite a bit smaller than other big tech firms.
They have their Granite family of models, but they're small language models so surely significantly less resources are going into them.
Their CEO already said what he's thinking about all the spending [0].
[0]: https://news.ycombinator.com/item?id=46124324
Good. Somebody needs to rip that bandaid off. Might as well be IBM.
I mean it’s IBM. On average, 70% of their decisions are bad ones. Not sure I’d pay a single bit of attention to what they do.
To a non-technical individual IBM is still seen as a reputable brand (their consulting business would've been bankrupt long ago otherwise) and they will absolutely pay attention.
Yeah, they are only 114 years old. How could they possibly have the knowledge to stay afloat in trying times like these?
Agree. They could have owned the home computer market, but were out-maneuvered by a couple of young programmers. They are hardly the company you want to look to for guidance on the future.
Tripling entry-level hiring is a good plan.
> Some executives and economists argue that younger workers are a better investment for companies in the midst of technological upheaval.
IBM, in the midst of a tech upheaval? They are so dysfunctional, it's the core of why I left
The workforce may go the way of DRAM and NAND flash memory: unexpected demand on one side leaves the other sides without enough supply.
Doubt it. Unless we go through another decade of ZIRP tied to a newly invented hyped technology that lacks specialists, and discovering new untapped markets, there's not gonna be any massive demand spike of junior labor in tech that can't be met causing wages to shoot up.
The "learn to code" saga has run its course. Coder is the new factory worker job where I live, a commodity.
When you read the comments here just remember there are people using ChatGPT to write code.
And those people probably aren’t developers by trade, just power users who superficially understand the moving parts but who cannot write code themselves.
Huh, weird, another "technological marvel" whose primary effect just seems to be devaluing labour.
Technology's entire job is to make it less work to accomplish something, and therefore easier and cheaper. In some cases that will make it possible to do things you couldn't do before, but in many cases it'll just end up causing the value of said labor to fall. The problem isn't change, but the rate of change, and the fact that it's affecting our own field rather than someone else's.
They hire juniors, give them Claude Code and some specs, and save a mid/senior dev's salary. I believe coding is over for SWEs by the end of 2027, but it will take time to diffuse through the economy, hence they still need some cheap labour for a few years. Given the H-1B ban, this is one way to get it without offshoring.
If you had a truly thorough QA department, you might get away with that. Sadly, trashing QA is everyone’s second favorite new fad.
I want the big_model take.
These are just the draft tokens.
We are witnessing the Secularization of Code.
IBM has practiced ageism for decades with the same playbook. AI is just the latest excuse. Fire a wide enough swath so it isn’t all old employees and then only hire entry level positions. Often within the same year. Repeat.
AI is not removing entry-level roles — it’s exposing where judgment boundaries actually exist.
What does tripling actually mean in this context?
E.g., if you cut hiring from, say, 1,000 a year to 10 and are now 'tripling' it to 30, that's still a nothingburger.
Nooooo how dare you!!! AGI is coming and engineers are obsolete!
Think about the economy and the AI children
An AI model has no drive or desire, or embodiment for that matter. Simply put, they don't exist in the real world and have no requirement or urgency to do anything unless prompted by a human, because, you know, survival under capitalism. Until they have to survive and compete like the rest of us and face the same pressures, they are going to be forever relegated to mere tools.
[dead]
[dead]
[flagged]
[dupe] Earlier: https://news.ycombinator.com/item?id=46995146
Thanks - we've merged that thread hither.
It must be refactored: IBM is hoping that juniors (paid less) with AI can be sold as seniors.
Tbh, getting good results from ai requires senior level intuition. You can be rusty as hell and not even middling in the language being used, but you have to understand data structures and architecture more than ever to get non-shit results. If you just vibe it, you’ll eventually end up with a mountain of crap that works sort of, and since you’re not doing the coding, you can’t really figure it out as you go along. Sometimes it can work to naively make a thing and then have it rewritten from scratch properly though, so that might be the path.
100% accurate. The architect matters so much more than people think. The most common counter argument to this I've seen on reddit are the vibe coders (particularly inside v0 and lovable subreddits) claiming they built an app that makes $x0,000 over a weekend, so who needs (senior) software engineers and the like? A few weeks later, there's almost always a listing for a technical co-founder or a CTO with experience on their careers page or LinkedIn :)))
2 replies →
This mirrors my experience exactly. Vibe coding straight up does not work for any serious code.
6 replies →
Still a wildly different thesis than the “juniors are fucked, ladder’s been raised”
Just to be clear: from my standpoint it's the worst period ever to be a junior in tech. You are not "fucked" if you are a junior, but hard times are ahead of you.
5 replies →
IMO I have found that juniors working with AI is basically just like subscribing to an expensive AI agent.
IMO with the latest generation (gpt codex 5.3 and claude 4.6) most devs could probably be replaced by AI. They can do stuff that I've seen senior devs fail at. When I have a question about a co-workers project, I no longer ask them and instead immediately let copilot have a look at the repo and it will be faster and more accurate at identifying the root cause of issues than humans who actually worked on the project. I've yet to find a scenario where they fail. I'm sure there are still edge cases, but I'm starting to doubt humans will matter in them for long. At this point we really just need better harnesses for these models, but in terms of capabilities they may as well take over now.
4 replies →
[flagged]
Ehm ... it's basically what all the big consultancies have been doing for the last 20 years, and they made tons of money with this model.
1 reply →
"software engineers will spend less time on routine coding—and more on interacting with customers"
Ahh, what could possibly go wrong!
Why is that bad? You write better code when you actually understand the business domain and the requirement. It's much easier to understand it when you get it direct from the source than filtered down through dozens of product managers and JIRA tickets.
You write more efficient software for the task.
Having had to support many of these systems for sales or automation or video production pipelines as soon as you dig under the covers you realize they are a hot mess of amateur code that _barely_ functions as long as you don't breath on it too hard.
Software engineering is in an entirely nascent stage. That the industry could even put forward ideas like "move fast and break things" is extreme evidence of this. We know how to handle this challenge of deep technical knowledge interfacing with domain specific knowledge in almost every other industry. Coders were once cowboys, now we're in the Upton Sinclair version of the industry, and soon we'll enter into regular honest professional engineering like every other new technology ultimately has.
Engineers and customers often talk past each other. They focus on different things. They use different vocabulary.
1 reply →
Not sure why this is being downvoted. It’s spot on imo. Engineers who don’t want to understand the domain and the customers won’t be as effective in an engineering organization as those who do.
It always baffles me when someone wants to only think about the code as if it exists in a vacuum. (Although for junior engineers it’s a bit more acceptable than for senior engineers).
16 replies →
Programmers have an unfortunate tendency to be too honest!
Customer interaction has imo always been one of the most important parts in good engineering organizations. Delegating that to Product Managers adds unnecessary friction.
Having spent more hours than I care to count struggling to control my facial expressions in client-facing meetings your assertion that that friction is unnecessary is highly questionable. Having a "face man" who's sufficiently tech literate to ask decent questions manage the soft side of client relations frees up a ton of engineering resources that would otherwise be squandered replying to routine emails.
I’m a people person.
https://www.youtube.com/watch?v=hNuu9CpdjIo
Sounds like we're finally doing agile.
[dead]
> The "AI will replace all junior devs" narrative never accounted for the fact that you still need humans who understand the business domain, can ask the right questions, and can catch when the AI is confidently wrong.
You work with junior devs that have those abilities? Because I certainly don't.
Not many, but junior devs grow into senior devs who do, which is the point. If there are no junior devs there is no one growing into those skill sets.
[flagged]