It's wild that there are as many jobs in the category "Top Executives" as in the category "Retail Sales Worker".
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
That category has a median pay of $105,350, and includes "general and operations managers" as well as "chief executives". I assume it includes executives of very small enterprises.
Remember that tech exec salaries are extreme outliers. I worked for an exec in manufacturing. He had full P&L responsibility for a business segment with ~150 employees, $27 million in revenue at 40% gross margins, and a production plant. His total comp was ~$300k.
Now just think of the comp levels in sectors like government, education, etc.
If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
I think AI outcomes distribute to the contexts where it is used and produce a change in how we work and what work we take on. Competition takes those surpluses and invests them in new structure, which becomes load-bearing, and we can't do without it anymore.
In the end it looks like we are treading water, just like when computers got a million times faster over a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure, and the changed structure is something you can't function without. Like the cell and the mitochondrion: after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
> If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
The 1%'s pockets. That's where the vast majority of the extra productivity that computers/internet/automation brought has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
The study doesn't say it went into the 1%'s pockets. It says it went to 2 places:
1) The salaries of corporate employees
2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
If AI being a million billion zillion times more productive at doing bullshit jobs nets very little economic gain, then that lays bare the net economic value of all our bullshit jobs.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
> lays bare the net economic value of all our bullshit jobs.
This was already obvious. The more important question is: what are we (collectively, society and our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
> In the end it looks like we are treading water, just like when computers got a million times faster over a couple of decades, but we felt very little improvement in earnings or reduction in work.
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne effect for improvements, like the semi-annual Popular Mechanics flying-car covers: an enticing future that is perpetually nearly here and, at the same time, disappointingly never materializes.
I think it's gonna mirror how the white-collar classes, coastal elites, professional managerial class, whatever you want to call them, sold the country's industrial base to the Far East. They got a little bit of money out of it, but the biggest gain was material wealth: $1 widgets instead of $2 widgets. All the people who weren't hurt by it got to live with more material plenty. Of course the nominal prices of things didn't go down, but that's just inflation, which is a somewhat separate effect.
This time the jobs most in the crosshairs of AI are the ones that constitute the paper-pushing overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred or multi-thousand dollar increments. But in practice it means that when you have to engage an accountant or an engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
The data is coming from the BLS. Their data lags the true state of affairs, and their growth projections are never reliable. Remember when, from 2000-2010, they touted actuaries as the hottest-growing field with the best forward-looking outlook?
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
lol, I always wondered how actuary ever crossed my partner's radar in college, and this must have been it. Hey, they just finished up their FCAS cert and they are riding quite high and quite comfy. But it is for sure a very small pool of people, just due to the immense work needed to get to that point.
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers' existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
Right, I'm a Computer Programmer but any job with that title is likely horrible. But having the title Software Engineer doesn't magically make me an engineer. All word games.
The BLS classifies them as different roles. In essence: Software developers plan, computer programmers implement. Which in many cases might be the same person, but it has always been true that one person can hold multiple jobs.
They're saying that programmers will be declining, while developers and, crucially, testers and QA people will be increasing. That testers and QA become more important sounds plausible to me in a hypothetical future world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
Interestingly, it seems from these statistics the median wage for individuals with a Master's is lower than a Bachelor's. I wonder if that's because of immigrants who pursue higher education for visa reasons skewing the data.
Anecdotally, many people get a bachelor's degree to check a box for job applications, whereas many people get a master's degree because they love the field and/or are afraid to leave school.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Other possible reason could be many or most Masters degrees not conferring additional pricing power, and those people’s Bachelors degrees also confer lower pricing power.
Edit: Another possible reason is that Master's degrees were less common in the past, so the Bachelor's pay statistics skew toward people with more work experience in their higher-earning years, whereas the Master's pay statistics skew toward younger people with less work experience.
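That composition effect can be made concrete with a toy example (all salaries below are made up): within each career stage the Master's holders earn more, yet the overall Master's median comes out lower because the Master's pool skews junior.

```python
import statistics

# Made-up salaries. Within each career stage, Master's > Bachelor's,
# but the Bachelor's pool skews senior and the Master's pool skews junior.
bachelors = [70_000] * 2 + [120_000] * 8   # mostly late-career workers
masters   = [80_000] * 8 + [130_000] * 2   # mostly early-career workers

print(statistics.median(bachelors))  # higher overall median
print(statistics.median(masters))    # lower, despite the per-stage premium
```

This is the classic Simpson's-paradox-style trap in comparing aggregate medians across groups with different age mixes.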
Masters seems to be a common theme in a few lower paying expansive fields like social work and education. I don't think that someone with a masters is typically making less in the same field all else equal.
My takeaway here: $3.X trillion of US salaries is the TAM for AI companies.
Apple, a very successful company, makes roughly $300B/year in revenue.
Capturing ~10% of that TAM is all you need to be Apple.
And, it can work by taking all of 10% of the jobs and collecting the whole salary (the AI employee -- dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs and multiple competing AI companies, and then expand the TAM to all white-collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of the work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or 0% unemployment with "tools").
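The arithmetic in this comment can be sketched out. Note the TAM figure, the take rates, and Apple's revenue are all the comment's own rough assumptions, not real market data:

```python
# Back-of-the-envelope: two routes from a US salary TAM to Apple-scale revenue.
salary_tam = 3.0e12      # ~$3T of US salaries (assumed round number)
apple_revenue = 3.0e11   # Apple's revenue, ~$300B/year (ish)

# Route 1: the "AI employee" -- replace 10% of jobs, collect the whole salary.
revenue_replace = salary_tam * 0.10

# Route 2: the "AI tool" -- automate part of every job, charge 10% of each salary.
revenue_tool = salary_tam * 0.10

# Either way, capturing ~10% of the TAM lands at Apple scale.
print(revenue_replace / apple_revenue, revenue_tool / apple_revenue)
```

The two routes are financially identical; the difference is entirely in who bears the adjustment (10% of workers unemployed vs. everyone's job 10% automated).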
The replace-work TAM is overstated because it fails to account for transaction costs, which are astronomical when refactoring work and dislodging stakeholders with sunk costs. Coding is the leading app for AI now because it had already been factored to support division of labor, outsourcing, and remote work.
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Your math is missing the fact that Apple products are differentiated from their competitors. If AI becomes a ubiquitous commodity, it's not worth 300B/y.
The bosses already hate their workers and are mad that they have to pay them a cent. Would they really accept paying another 10% on their wages to make their workers 10% more productive? When there is significant active competition between the providers of core models and huge pressure to reduce prices?
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM to score things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.
Are childcare and kindergarten teachers really exposed to AI? In theory, we could put a class of 30 children in front of chatbots with one supervisor, but I doubt we would choose to do this as a society. If office work becomes more automated, early childhood education is actually one area I'd expect to take up the slack. I can't imagine a situation where we have millions of unemployed former office workers but we leave them idle and let our children waste away in front of screens.
Childcare and education require a specific tolerance, mindset, and passion to be effective, though. I'd be curious how many former PMs, HR drones, or email jockeys would be adequate (let alone thrive) in an environment where budgets are next to nonexistent and you're servicing literal babies and tiny children lol
On second thought, client service folks might do extremely well here!
In which theory? If you can do anything "in theory", then there is no justifiable "but" or any excuse; the only problem is your own ability to realize it, or an unexpected situation. A theory is a fact, a proven hypothesis, with all its parts such as formulas, laws, or a force, as in the THEORY of gravitation. And no, you don't have one, and I assure you that you've never had a theory in your life.
There are a lot of education and curriculum companies pitching basically this: replace those 'expensive' teachers with aides making minimum wage, since all they need to do is recite the curriculum and help the kids log in to be evaluated.
Small business is the majority of employment. Think of an indie coffee shop: the person taking your order may very well be, technically, the CEO. So there are a lot of "top executives".
Insights from a real estate perspective: Most of the jobs that have the highest AI exposure are office jobs. Clerks, assistants, secretaries, software developers, bookkeepers, customer service, lawyers, etc. There has been a narrative the past couple years that office real estate was recovering as companies returned to office. If AI job losses materialize, it looks like there may be a second hit to that sector.
Like, IT helpdesk? Yes. Almost all of the tickets I create as a "knowledge worker" with my enterprise helpdesk are solved by an AI assistant: group ownership, adding me to an app, etc.
I'm colorblind as well, and what's fascinating to me is that this is the second AI-created chart I've seen in a week that I can't read. Surprisingly, I've found such aggressively colorblind-unfriendly charts to be far less common when created by humans.
I don't have any color discrimination deficiencies, but it is my understanding that for various types of signage, the move has been towards RED=bad/danger/etc, and BLUE (instead of green)=good/safe/etc.
For color deficiencies, different lightnesses are safe e.g. dark for loss and light for gain (could be dark reds for loss and light greens for gain, but don't mix the lightnesses). Other options are icons/shapes (like up/down arrows) or pattern fills (like stripes for loss).
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
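The "usable in grayscale" rule can even be checked programmatically. A minimal sketch using the WCAG relative-luminance formula (the hex colors are arbitrary examples, not this chart's actual palette): a red/green pair of similar lightness nearly collapses in grayscale, while a dark/light pair keeps a large gap.

```python
def luminance(hex_color):
    """WCAG relative luminance of an '#RRGGBB' color (0 = black, 1 = white)."""
    r, g, b = (int(hex_color[i:i + 2], 16) / 255 for i in (1, 3, 5))
    lin = lambda c: c / 12.92 if c <= 0.04045 else ((c + 0.055) / 1.055) ** 2.4
    return 0.2126 * lin(r) + 0.7152 * lin(g) + 0.0722 * lin(b)

# A typical chart red/green pair: different hues but similar lightness,
# so the grayscale gap is small and colorblind readers lose the encoding.
bad_gap = abs(luminance("#d62728") - luminance("#2ca02c"))

# Dark red vs. light green: the lightness difference survives grayscale.
good_gap = abs(luminance("#8b0000") - luminance("#90ee90"))

print(f"hue-only gap: {bad_gap:.2f}, lightness gap: {good_gap:.2f}")
```

A chart palette whose gaps all look like `bad_gap` is unreadable for red/green colorblind viewers; aim for gaps like `good_gap`, or add icons and patterns.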
Does the LLM understand or consider "rent seeking"? Lots of high-paying jobs and entire industries seem to be propped up by those same people who already have the power.
Cool site and Andrej is the man. But the BLS data...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
I'd like to see this adjusted by total pay, i.e. # employed * average salary (not sure if it already is).
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
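That adjustment is simple to express. A sketch with illustrative headcounts and salaries (not actual BLS figures): the same -4.0% can remove more wage dollars from a high-pay category than from a much larger low-pay one.

```python
# Salary-weighted impact: employed * average salary * projected change.
# The headcounts and salaries below are illustrative, not BLS data.
occupations = {
    "cashiers": {"employed": 3_300_000, "salary": 30_000},
    "lawyers":  {"employed":   800_000, "salary": 150_000},
}
change = -0.04  # same -4.0% projection for both categories

for name, o in occupations.items():
    payroll = o["employed"] * o["salary"]  # total wages flowing through the job
    impact = payroll * change              # wage dollars gained or lost
    print(f"{name}: total pay ${payroll:,}, impact ${impact:,.0f}")
```

With these numbers the lawyers' -4% is the bigger dollar hit despite a quarter of the headcount, which is exactly what weighting by total pay surfaces.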
It's the other way around. Cashiers spend their 4 percent, whereas the lawyers probably save it. Though of course the different median salaries for the two categories mean a 4 percent change differs in absolute dollars.
This (tech) career has proven to be so disappointing, and it's all the stuff around the actual work. I love working on computers.
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers. It definitely had a "when will this come for me" feeling, because it really was; now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess let's talk about kubernetes or something...
It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over.
It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
Re: kitchen appliance analogies, I stand by my "AI is a dishwasher" analogy.
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
This is a great analogy, because just like AI, microwaves are good for quick fixes, tasks where you don't really care about the quality and would rather minimise the effort.
A better analogy might be computers, self-driving cars, or humanoid robots, since unlike microwaves, they can actually improve. Meanwhile microwaves were more or less the same since their invention.
I know it's not the point of the comment, but it's a bit of a flawed analogy. Microwaves have won to a large extent, such that people without them are a bit of an oddity, and cooking with an oven is more of a special-occasion thing than the default cooking method it once was.
AI being bad isn't in conflict with AI winning or taking over. I think all of those things are true. I think what we currently call social media is bad. And it's won. No conflict there either.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here; that's about cheaper stuff revealing more latent demand. Sure, that's possible: it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home. But we may also find that AI ends the economic incentives for consumerism, which in turn removes a big driver to constantly have more stuff, and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage. Again, this is just an "I think", and I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* And AI may well be able to, even if all models collectively "only" reach the equivalent of a fully rounded human of IQ 115. Yes, I know IQ tests are dodgy, but we all know what they approximate; by "fully rounded" I mean the thing their steel-man form tries to approach, not test-passing itself, which would have the AI already beating that IQ score despite struggling with handling plates in a dishwasher.
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
There are a lot of things that people were saying were fads that ended up being fads. There are also a lot of things that people were saying were fads that weren't. Nobody knows. Anyone who confidently says "AI is inevitable" or "AI is just a fad" is full of shit. They don't have a crystal ball, and they don't know what the future holds.
Just because it was wrong once doesn't mean it's always wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
>It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because of the number of times in the last year I've already witnessed someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers. It makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double-checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. I also think it'll take 18+ months before someone starts asking questions, only for them to figure out they've been directing the future of their company based on garbage numbers that don't reflect reality.
Asking an LLM to analyze data directly doesn’t work. But they’re great at writing scripts to analyze (and visualize) data. Anthropic just figured this out last week and gave Claude a mode that does that for you.
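The pattern is worth spelling out: instead of pasting raw data into a prompt and trusting the model's summary, you have the model emit a small deterministic script like the one below and run it yourself, so every figure is computed rather than generated (toy CSV data, stdlib only):

```python
import csv
import io

# Toy data standing in for the "million datapoints" you'd never paste into a prompt.
raw = io.StringIO("region,revenue\nEMEA,120\nEMEA,80\nAPAC,200\nAPAC,100\n")

# Aggregate revenue by region deterministically.
totals = {}
for row in csv.DictReader(raw):
    totals[row["region"]] = totals.get(row["region"], 0.0) + float(row["revenue"])

# Computed, not generated: rerunning this on the real file gives the real numbers,
# and the script itself can be reviewed like any other code.
print(totals)
```

The hallucination risk moves from the numbers themselves to the (auditable) logic of a ten-line script, which is a much easier thing to double-check.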
He is talking about the same thing as you, no? As you point out, the more AI exposure (red), the more likely to have higher wages (green). Which suggests that those who are embracing AI are those who are thriving the most. Same as what he suggested.
AI is great for searching, I'll give you that, and that itself is a big deal. In software development there is also real value in using AI for code reviews. But I'm not sure how much it would be worth if you have to retrain a model on new information just to get better search results and code reviews.
Maybe that will be subsidized by all the people like you who want everything to be done by AI, while the rest of us use it as a better search tool and for quick reviews... who knows!
Uh huh... but the data in Andrej's visualizer shows the software development growth outlook at 15% (much faster than average).
Over the past year (in which Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2].
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
Ah, HN's favorite strawman: the "dogmatically and ideologically opposed to anything AI" person who, in my experience, largely doesn't exist.
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
I think a lot of the pushback comes down to your attitude. The way you're talking about AI is like how the crypto bros talked about bitcoin. Just being very insistent on your point of view is a red flag. Either you can present new data to convince people, or your insistence will just look like it's emotional rather than rational.
I use AI every day as part of my work, it's very unclear to me where it's going and we have no idea if we're on an exponent or S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just have very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
What doesn't make sense to me about the AI Inevitabilism Embrace-Or-Die trope is the idea of a sudden trap door that will eliminate all the naysayers and that can be avoided by Embracing. That doesn't cohere well with how autonomous AI is or will be.
I could understand it if all the naysayers doing old-fashioned stuff, like work, suddenly had no more work to do. But what will the AI Embracers have, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousandfold?
> definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that it's going to be AI, that we're getting AGI, and views it as another tool, the perspective changes. Strip away the hype and there is a LOT there. The walls of the garden are gonna get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways that don't have to be?). These things are existential threats to how our industry works TODAY, and no one seems to be talking about the impact on existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
I'm very confused how you can put up such an obvious strawman, say all these wildly unsubstantiated things, and yet still get engagement. Who are you even talking to?
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.
It is free for you to say this, because if you're wrong there will be no consequences. Words are cheap. No different from various CEOs saying "AI will replace these workers" and now having to hire back those they laid off: Klarna, Salesforce, etc. This will be a great comment to reference in the future to capture the exuberance of the times.
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
This is 99.44% slop! You are completely correct. The "exposure" is based entirely on vibes and does not correspond to observable reality. Down here in the real world the very first sector that is being disrupted is manual farm labor. They are out here with machine vision and quadcopters picking fruit. But according to the prompt that produces the treemap, manual labor has an exposure rank of zero.
The VIEW could be AI slop, but underlying CONTENT has some meaning.
There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs, companies are squeezing every bit of dev slack time to produce more stuff with AI.
> The VIEW could be AI slop, but underlying CONTENT has some meaning. There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs
Is that notion supported by this content? The BLS outlook for most software engineering jobs is mostly in the "much faster than average" growth range.
Data in, data out. Reminder that the Bureau of Labor Statistics (BLS) has been…well, I can’t find the right word.
https://apnews.com/article/trump-jobs-firing-f00e9bf96d01105...
Cool stuff. Would be nice to have a color-blind mode. I literally can't distinguish the red from the green in this visualization.
Created a temp hack for you: https://gist.github.com/ro31337/89b24edaec0a5bfbf73bc5abfbfb...
(don't forget to "allow pasting" in [chrome] console first)
Came here to mention this. I'm also Red/Green colorblind.
There are also +/- % figures in the boxes
It's wild that there are as many jobs in the category "Top Executives" as in the category "Retail Sales Worker".
This makes sense given both automation and the US's role in the global economy, but it runs somewhat contrary to standard ideas of class and inequality.
That category has a median pay of $105,350, and includes "general and operations managers" as well as "chief executives". I assume it includes executives of very small enterprises.
https://www.bls.gov/ooh/management/top-executives.htm
I took one glance at the chart and decided the results were impossible because of that.
Apparently "top executive" median pay is $105,350 per year: https://www.bls.gov/ooh/management/top-executives.htm
Remember that exec tech salaries are extreme outliers. I worked for an exec in manufacturing. He had full P&L responsibility for a business segment with ~150 employees, $27 million in revenue at 40% gross margins, and a production plant. His total comp was ~$300k.
Now just think of the comp levels in sectors like government, education, etc.
Sounds plausible? Even a company with 100 employees and few growth prospects is likely to have a couple of executives, and most companies are small.
> it runs somewhat contrary to standard ideas of class and inequality.
Can you elaborate?
If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
I think AI outcomes distribute to the contexts where it is used, and produce a change in how we work and what work we take on. Competition takes those surpluses and invests them in new structure, which becomes load-bearing, and then we can't do without it anymore.
In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
Surplus becomes structure, and the changed structure is something you can't function without. Like the cell and the mitochondrion: after they merged they can't be apart, can't pay their costs individually anymore. Surplus is absorbed into the baseline cost.
> If AI produces surplus where does it go? Not talking about investment backed datacenter buildout and AI labs. Talking about the results of AI work...
The 1%'s pockets. That's where the vast majority of the extra productivity from computers/the internet/automation has gone for the last 50 years: https://www.epi.org/productivity-pay-gap/
The study doesn't say it went into the 1%'s pockets. It says it went to 2 places:
1) The salaries of corporate employees
2) Shareholders and capital owners
Regarding number 2: "Shareholders" would include anyone who owns any stock at all, including a lot of middle class people with a simple S&P 500 ETF in their portfolio.
And the increase in productivity allowed more people to become capital owners, AKA entrepreneurs. The explosion in software entrepreneurs, for example.
For my personal projects, any time saved on programming gets used up writing more ambitious programs.
For a business, the question is whether you can make more money by doing more ambitious things.
The surplus goes to the owners of the capital. Labor has been losing to capital for sometime now
It's up to business owners to decide. At the end of the day, in free market economy, goods become more affordable.
Agriculture is a good example of that: http://www.johnhearfield.com/History/Breadt.htm
If AI being a million billion zillion times more productive at doing bullshit jobs nets in very little economic gain, then that lays bare the net economic value of all our bullshit jobs.
But given that the stock market hasn't panicked, this must mean at least one of these premises is false:
1. Economic activity is relatively flat.
2. AI makes us a million billion zillion times more productive than we used to be.
3. The stock market is rooted in reality.
> lays bare the net economic value of all our bullshit jobs.
This was already obvious, the more important question is what are we (collectively, society & our governments) going to do about it?
We (should have) already known most of our jobs were bullshit jobs, especially white collar jobs. The difference is now we might have something coming that will eliminate the bullshit jobs.
But society will always need bullshit jobs or the whole system collapses. Not everyone can go dig ditches, so what do we do?
The market split from reality in 2020 for the last time. This is all just zeroes and ones, which is why they can make the real economy tank.
> In the end it looks like we are treading water, just like it was when computers got 1M times faster in a couple of decades, but we felt very little improvement in earnings or reduction in work.
I think this is a very important point. The hedonic treadmill means real gains are discounted. The novelty information cycle is like an Osborne effect for improvements, like the semi-annual Popular Mechanics flying-car covers: an enticing future that is perpetually nearly here and at the same time disappointingly never materializes.
I think it's gonna mirror how the white-collar classes, coastal elites, professional managerial class, whatever you want to call them, sold the country's industrial base to the Far East. They got a little bit of money out of it, but the biggest gains were in material wealth: $1 widgets instead of $2 widgets. All the people who weren't hurt by it got to live with more material plenty. Of course the nominal values of things didn't go down, but that's just inflation, which is a somewhat separate effect.
This time the jobs most in the crosshairs of AI are the ones that constitute the paper-pushing overhead of modern society. Instead of $1 widgets from China replacing $2 domestic widgets, it's gonna be $1 AI services replacing $2 services that require a real human.
This is hard to reason about because people tend to consume these kinds of services in big multi-hundred- or multi-thousand-dollar increments, but in practice it means that when you have to engage an accountant or engineer, or have something planned out in accordance with some standard, it will be substantially cheaper because of the reduced professional-labor component.
And of course, as usual, the string-pulling and investor class will get fabulously wealthy along the way.
Data is coming from BLS. Their data lags the true state of affairs, and their growth projections are never reliable. Remember when they touted from 2000-2010 that actuaries were the hottest-growing field with the best forward-looking outlook?
BLS forward looking guidance means nothing when technology revolutionizes the nature of work.
What do you believe is the true state of affairs?
With poor data it’s whatever you say it is.
lol, I always wondered how actuary ever crossed the radar of my partner in college, and this must have been it. Hey, they just finished up their FCAS cert and they're riding quite high and quite comfy. But it's for sure a very small pool of people, just due to the immense work needed to get to that point.
Are we assuming the data is of high quality? If the quality isn't good, it's as good as synthetic data.
It wasn't even that long ago that Trump fired the BLS Commissioner and nominated someone that would "restore GREATNESS" to the BLS.
Putting aside the slop facade placed atop the data... why would we trust the data?
>Software Developers +15%
Yay!
>Computer Programmers: -6%
Oh no
Software Developers median pay according to BLS: $131,450 per year
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer Programmers median pay according to BLS: $98,670 per year
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Software developers typically do the following:
- Analyze users’ needs and then design and develop software to meet those needs
- Recommend software upgrades for customers’ existing programs and systems
- Design each piece of an application or system and plan how the pieces will work together
- Create a variety of models and diagrams showing programmers the software code needed for an application
- Ensure that a program continues to function normally through software maintenance and testing
- Document every aspect of an application or system as a reference for future maintenance and upgrades
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Computer programmers typically do the following:
- Write programs in a variety of computer languages, such as C++ and Java
- Update and expand existing programs
- Test programs for errors and fix the faulty lines of computer code
- Create, modify, and test code or scripts in software that simplifies development
(Source: https://www.bls.gov/ooh/computer-and-information-technology/...)
Still look the same to me.
Right, I'm a Computer Programmer but any job with that title is likely horrible. But having the title Software Engineer doesn't magically make me an engineer. All word games.
I was wondering about that too. It shows 1.9M Software Developer Jobs and 122K Computer Programmer jobs.
Reason for hope
I'd chalk that up to a change in terminology over time, I could be wrong there though
The BLS classifies them as different roles. In essence: Software developers plan, computer programmers implement. Which in many cases might be the same person, but it has always been true that one person can hold multiple jobs.
I don't know?
They're saying that programmers will be declining. While Developers, and crucially, Testers and QA people will be increasing. That testers and QA become more important in the future sounds plausible to me in a future hypothetical world of ubiquitous AI.
All of that doesn't necessarily imply that the Developer class of employees will grow at the same rate as the Tester and QA classes of employees.
Interestingly, it seems from these statistics the median wage for individuals with a Master's is lower than a Bachelor's. I wonder if that's because of immigrants who pursue higher education for visa reasons skewing the data.
Anecdotally, many people get a bachelor's degree to check a box for job applications, whereas many people get a master's degree because they love the field and/or are afraid to leave school.
My friends and I who have a bachelor's degree in CS make more money than my friends who have or are working towards master's degrees in CS, because the former are working in the private sector and the latter are in academia making peanuts.
Another possible reason could be that many or most master's degrees don't confer additional pricing power, and those people's bachelor's degrees also confer lower pricing power.
Edit: Another possible reason that Masters degrees were less common in the past, so the Bachelors pay statistics skew towards people with more work experience in their higher earning years, whereas the Masters pay statistics skew towards younger people with less work experience.
Masters seems to be a common theme in a few lower-paying but expanding fields like social work and education. I don't think someone with a master's is typically making less in the same field, all else equal.
My takeaway here: $3.X trillion of US salaries is the TAM for AI companies.
Apple, a very successful company, makes 300B/y revenue? (ish)
~10% is all you need to be Apple.
And, it can work by taking all of 10% of the jobs and collecting the whole salary (the AI employee -- dubious proposition),
or by taking 10% of everyone's salary and automating part of everyone's job (the AI "tool" -- much more plausible).
If "part" being automated is >10%, we all win in the long run, every company gets productivity growth without cost growth, etc etc.
If you add in data center costs, and multiple competing AI companies, and then expand the TAM to all white-collar work worldwide, you can make everyone successful beyond their wildest dreams with a "20% of work for 20% of the cost" model. Again, how you distribute that 20% remains to be seen (20% new unemployment, or new 0% unemployment with "tools").
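As a hedged back-of-the-envelope sketch of the math above (the ~$3 trillion salary pool and ~$300B Apple revenue are the round numbers from this comment, not precise figures):

```python
# Rough TAM math: what share of the US salary pool equals Apple-scale revenue?
# Assumptions (round numbers from the comment, not precise figures):
salary_tam = 3.0e12      # ~$3 trillion in addressable US salaries
apple_revenue = 300e9    # Apple's rough annual revenue, ~$300B/y

share_needed = apple_revenue / salary_tam
print(f"{share_needed:.0%}")  # -> 10%: capture ~10% of salaries and you're Apple-sized
```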
I formalized my thoughts here: https://jodavaho.io/posts/ai-jobpocolypse.html
The replace-work TAM is overstated because it fails to address transaction costs, which are astronomical when refactoring work and dislodging stakeholders with sunk costs. Coding is now the leading app for AI because it had already been factored to support division of labor, outsourcing, and remote work.
It's also understated, because the real value of AI is not in replacing work, but making new products possible either because it's finally cheap enough to make them, or because -- AI.
Your math is missing the fact that Apple products are differentiated from their competitors. If AI becomes a ubiquitous commodity, it's not worth 300B/y.
Potable water is far more important than AI or iPads ever will be, but the world's most valuable water company only does about 5B/year in revenue: https://en.wikipedia.org/wiki/American_Water_Works
The bosses already hate their workers and are mad that they have to pay them a cent. Would they really accept paying another 10% on their wages to make their workers 10% more productive? When there is significant active competition between the providers of core models and huge pressure to reduce prices?
"TAM" = ?
Total addressable market.
Frequently seen as a big fun number in pitch decks. "The TAM for our new Coca-Cola killer is $1.6T: all humans who imbibe liquids on a regular basis. You simply MUST invest."
Total addressable market
Mouse hover seems to be critical for this visualization. Not very useful on mobile.
Rendered via canvas. Ugh.
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
Are LLMs good at scoring? In my experience, using an LLM for scoring things usually produces arbitrary results. I'm surprised to see Karpathy employ it.
The fact that the LLM appears to never assign an actual 0 or 10 makes me suspicious. Especially when the prompt includes explicit examples of what counts as a 10.
No. LLMs aren't experts in subjects. They can answer things in a confident manner, but nobody has optimized LLMs to perform good analysis yet.
Let's ask the LLM to score how good it would be at scoring jobs from LLM exposure... /s
Are childcare and kindergarten teachers really exposed to AI? In theory, we could put a class of 30 children in front of chatbots with one supervisor. But I doubt we would choose to do this as a society. If office work becomes more automated, early childhood education is actually one area I'd expect to take up the slack. I can't imagine a situation where we have millions of unemployed former office workers but we leave them idle and let our children waste away in front of screens.
Childcare and education requires a specific tolerance, mindset and passion to be effective though. I'd be curious how many previously-PMs or HR drones or email jockeys would be adequate (let alone thrive) in an environment where there are next-to-nonexistent budgets, and you're servicing literal babies and tiny children lol
On second thought, client service folks might do extremely well here!
>specific tolerance, mindset and passion
What you mention here is exactly why my earlier relationship went bust: I didn't have any of these, and then the children arrived :-X
Con: kids
In which theory? If you can do anything "in theory", then there is no justifiable "but" or any excuse; the only problem is your own ability to realize it, or an unexpected situation. A theory is a fact, a proven hypothesis, with all its parts such as formulas, laws, or a force, as in the THEORY of gravitation. And no, you don't have one, and I assure you that you've never had a theory in your life.
There are a lot of education and curriculum companies pitching basically this: replace those 'expensive' teachers with aides making minimum wage, since all they need to do is recite the curriculum and help students log in to be evaluated.
I'd say yes. One teacher can use AI to cover more children, thus fewer teachers are required.
How are Top Executives 4% of all jobs?
Small business is the majority of employment. Think of an indie coffee shop: the person taking your order may very well be the CEO, technically. So there are a lot of "top executives".
Insights from a real estate perspective: Most of the jobs that have the highest AI exposure are office jobs. Clerks, assistants, secretaries, software developers, bookkeepers, customer service, lawyers, etc. There has been a narrative the past couple years that office real estate was recovering as companies returned to office. If AI job losses materialize, it looks like there may be a second hit to that sector.
Question for those in the know: are IT jobs being affected the same as software engineering jobs with all the consolidation and AI?
What's the outlook like?
Thank you!
Like, IT helpdesk? Yes. Almost all of the tickets I create as a "knowledge worker" for my enterprise helpdesk are solved by an AI assistant - group ownership, adding me to an app, etc.
Has some similar conclusions to my Job Quality-Adjusted Displacement Index https://github.com/quinndupont/JQADI
It just comes up as a black screen for me. Is this happening to anyone else?
Do you have JavaScript disabled?
I wish there would be a color blind friendly version of this. I have deuteranopia and can’t distinguish red from green in the page.
I'm colorblind as well, and what's fascinating to me is that this is the second AI-created chart in a week I've seen that I can't read. Surprisingly, I've found such aggressively colorblind-unfriendly charts to be far less common when created by humans.
Out of curiosity, what colors (or text treatments) do you personally prefer to convey "gain" and "loss"?
I don't have any color discrimination deficiencies, but it is my understanding that for various types of signage, the move has been towards RED=bad/danger/etc, and BLUE (instead of green)=good/safe/etc.
For color deficiencies, different lightnesses are safe e.g. dark for loss and light for gain (could be dark reds for loss and light greens for gain, but don't mix the lightnesses). Other options are icons/shapes (like up/down arrows) or pattern fills (like stripes for loss).
The general trick is you can rely on differences in color lightness, patterns, text and icons, but not differences in color hue. The page should be usable in grayscale.
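As a sketch of that grayscale test, using the WCAG 2.x relative-luminance and contrast-ratio formulas (the specific color pairs below are just illustrative, not taken from the visualization):

```python
# Check that two status colors differ enough in lightness (not just hue),
# so a red/green-colorblind viewer can still tell gain from loss.
# Formulas follow WCAG 2.x relative luminance and contrast ratio.

def channel(c):
    """Linearize one sRGB channel given a 0-255 value."""
    c = c / 255
    return c / 12.92 if c <= 0.03928 else ((c + 0.055) / 1.055) ** 2.4

def luminance(rgb):
    r, g, b = (channel(c) for c in rgb)
    return 0.2126 * r + 0.7152 * g + 0.0722 * b

def contrast(rgb1, rgb2):
    lighter, darker = sorted((luminance(rgb1), luminance(rgb2)), reverse=True)
    return (lighter + 0.05) / (darker + 0.05)

# Pure red vs pure green: similar lightness, so hue is doing all the work.
print(round(contrast((255, 0, 0), (0, 255, 0)), 2))      # ~2.9, below the 3:1 minimum
# Dark red vs light green: lightness carries the signal even in grayscale.
print(round(contrast((139, 0, 0), (144, 238, 144)), 2))  # ~7, readable in grayscale
```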
It's not about preference but about being able to see the differences in the first place, so any sufficiently contrasting combination should work.
If you turn on the color filters in accessibility settings in macOS you can see what the contrast could look like to a colorblind person.
Reminds me of trying to pick an available campsite on Parks Ontario.
Just curious, are there browser extensions that automatically can alter colors on a webpage to make them colorblind friendly?
Nifty!
Needs
- [utility] add filter by keyword / substring match, e.g. the majority of visualized blocks are unlabeled, requiring hovering with a mouse pointer
- [improve discovery] add sort by demographic / population impact, e.g. the largest block is 7M ("Hand laborers and movers") and is sorted to the bottom-left by default
Bet on high job-growth market: Security guard for data centers.
Stand in front with a gun while mobs come to burn down the data center that took their jobs.
(I think I'm half joking).
Too late!
1: https://www.businessinsider.com/robot-dogs-quadruped-data-ce...
Still need rocket defense though.
Oh please, that will be done by drones and killer robots.
Does the LLM understand or consider "rent seeking"? Lots of high-paying jobs and entire industries seem to be propped up by the same people who already have the power.
It looks like this is using 2024 data so quite old?
Cool site and Andrej is the man. But the BLS data...
> Taxi Drivers, Shuttle Drivers, and Chauffeurs
> Overall employment of taxi drivers, shuttle drivers, and chauffeurs is projected to grow 9 percent from 2024 to 2034, much faster than the average for all occupations.
...word?
I just started reading Snow Crash; cool name!
I enjoy that this visualization directly contradicts the mainstream narrative that white collar work is being replaced by blue collar work.
Almost everyone I know is limited to two areas, and of those, 90% are in one corner.
I'd like to see this (not sure if it already is) adjusted by total pay, i.e. # employed * average salary.
A -4.0% hit to cashiers may have less of an impact than -4.0% to lawyers or another category that is propping up the middle of the economy with spending.
It's the other way around. Cashiers spend their 4 percent, whereas the lawyers probably save it. Though of course the median salary for the two categories means a 4 percent change is different in absolute dollars.
This (tech) career has proven to be so disappointing, and it's all the stuff around the actual work. I love working on computers.
Started my career in the decade of offshoring and didn't think we'd have anything close to an "AI" taking our jobs before we potentially unionized or had a government that would protect its labor force from being replaced by literal robots.
2020-2022 felt like the US tech ship was finally growing into something really great. All gone now.
When I worked in devops I always worried that my job was automating away other engineers. It definitely had a "when will this come for me" feeling, because it really was; now the dev and the ops are both getting automated away.
This is my first time looking at HN in practically a year. Tech is just so uninteresting to me now. Nobody is hiring SDE/SWE/SREs except for the problem makers, like Anthropic, Meta, etc. Anthropic has pages and pages of $300k-$600k roles open right now. But do you go help the rest of your colleagues lose their jobs?
I guess lets talk about kubernetes or something...
> This is not a report, a paper, or a serious economic publication — it is a development tool for exploring BLS data visually.
I guess that was to be expected...
This is a surprisingly bad treemap, given how easy it is to build a good one these days.
"Please don't post shallow dismissals, especially of other people's work. A good critical comment teaches us something."
https://news.ycombinator.com/newsguidelines.html
Looks average to me. Maybe I don't know how the good ones look? Can you give some examples?
It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
It's not great for them, but it's a definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit, instead of running an "AI bad" rubric as part of their filter.
AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
Maybe it was linked from a comment somewhere on HN but just today I saw a post saying “Microwaves are the future of all food: if you don’t think so, you better get out of the kitchen”
Microwaves have already won. There will be a microwave in every home over the next few years.
It’s time to start microwave cooking or drown
Re: kitchen appliance analogies, I stand by my "AI is a dishwasher" analogy.
It's annoying that the dishes still have some pooled water in them when the cycle finishes; it doesn't always get everything perfectly clean; I have to know not to put the knives or the wooden stuff or anything fancy in it. But in spite of all of that, I use it every day, it's a huge productivity boost, and I'd hate to be without it.
This is a great analogy, because just like AI, microwaves are good for quick fixes, tasks where you don't really care about the quality and would rather minimise the effort.
A better analogy might be computers, self-driving cars, or humanoid robots, since unlike microwaves, they can actually improve. Meanwhile, microwaves have been more or less the same since their invention.
I know it's not the point of the comment, but it's a bit of a flawed analogy. Microwaves have won to a large extent, such that people without them are a bit of an oddity, and cooking with an oven is more of a special-occasion thing than the default cooking method it was before.
Microwaves are the trend of the past! It sounds like you don't own an air fryer.
This was a real, unironic mindset for a while: https://a.co/d/0iYb8mlz
What is the argument here? Someone had a wrong take on something completely unrelated, so it somehow applies to this?
You think "there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term" is wrong?
Maybe you could have an AI explain to you how bad this analogy is.
AI being bad isn't in conflict with AI winning or taking over. I think all of those things are true. I think what we currently call social media is bad. And it's won. No conflict there either.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term. Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
No, AI has not "already" won. And phrasing it as you do, "It's taking over. It might be a year or two, or five, or ten" is an admission of that.
People may indeed not pause, but there's never any guarantee that the next step of progress is possible; whatever we reach may be all we can do, and we'll only find out when we get there. Or it might go hyperbolic and give us everything.
I'm not certain, but I suspect Jevons paradox is probably the wrong thing to bring up here, that's about cheaper stuff revealing more latent demand, and sure, that's possible and it may reveal a latent demand for everyone to build their own 1:1 scale model of the USS Enterprise (any of them) as a personal home, but we may also find that AI ends the economic incentives for consumerism which in turn remove a big driver to constantly have more stuff and demand goes down to something closer to a home being a living yurt made out of genetically modified photovoltaic vines that also give us unlimited free food.
(I mean, if we're talking about the AI future, why not push it?)
What I do think is worth bringing up is comparative advantage. Again, this is just an "I think", I'm absolutely not certain here, but if AI can supply all demand at unlimited volumes*, I think the assumptions behind comparative advantage break.
> It's time to surf or drown, because it doesn't look like any of the people in charge have the slightest clue about how to handle what's coming.
Yes, and I think they've also not even managed to figure out the internet yet.
* and AI may well be able to, even if all models collectively "only" reach the equivalent of a fully-rounded human of IQ 115; and yes I know IQ tests are dodgy, but we all know what they approximate, by "fully rounded" I mean that thing their steel-man form tries to approach, not test passing itself which would have the AI already beat that IQ score despite struggling with handling plates in a dishwasher.
> It might be a year or two, or five, or ten
Ah, the classic, forever-untestable "it's just around the corner" hypothesis.
I've lived through multiple "it's gonna be over in 12-18 months" arguments since November 2022. It's a truism for any technology to say that it's going to get better over time. But if you're convinced that "AI has already won", why not make a specific prediction? What jobs are going to be obsolete by when?
If what you state comes to pass there will be no "surfing" when it comes to cognitive work.
shhh just surf that deadly tsunami bro
The raw anti-AI hate for anything that even mentions it makes me think of the early days of the internet, when it was considered just a fad.
In the early days of the internet, there were roughly 3 categories of views I remember:
1. Brick and mortar is dead.
2. The internet will die.
3. What is the business model? (this one still seems to exist to this day to some extent, lol)
Reality fell between 1 and 2.
There are a lot of things that people were saying were fads that ended up being fads. There are also a lot of things that people were saying were fads that weren't. Nobody knows. Anyone who confidently says "AI is inevitable" or "AI is just a fad" is full of shit. They don't have a crystal ball, and they don't know what the future holds.
Skepticism is so shamed around here.
Just because it was wrong once doesn't mean it's never wrong. And was it really that wrong? The internet is great, but would it be the worst thing in the world if we didn't live our lives around it?
I would say that it is closer to an Internet to which you are not invited.
A surfboard is no use in a tsunami. You will drown. The author will drown. Do not celebrate the tsunami.
Amen. Well said
OP comment is not clever
> Jevons paradox isn't relevant to cognitive surplus - you need a very different model to capture what's going to happen.
Jevons paradox was never relevant to cognitive surplus. That isn't what it's about.
Cognitive surplus only strengthens Jevons paradox. Humans are a competitive advantage for businesses in a world dominated by human needs
>It's kinda cool to see a whole lot of otherwise intelligent people who are so dogmatically and ideologically opposed to anything AI that they're going to willfully dismiss anything that AI produces regardless of utility.
You'd probably put me into that bucket, although I'd disagree. I'm not at all against using AI to do something like: type up a high level summary of a product featureset for an executive that doesn't require deep technical accuracy.
What I AM against is: "summarize these million datapoints into an output I can consume".
Why? Because the number of times I've already witnessed in the last year: someone using AI to build out their QBR deck or financial forecast, only to find out the AI completely hallucinated the numbers - makes my brain break. If I can't trust it to build an accurate graph of hard numbers without literally double checking all of its work, why would I bother in the first place?
In the same way, if you tell me you've got this amazing dataset that AI has built for you, my first thought is: I trust that about as much as the Iraqi Information Minister, because I've seen first hand the garbage output from supposedly the best AI platforms in the world.
*And to be clear: I absolutely think businesses across the board are replacing people with AI, and they can do so. And I also think it'll take 18+ months for someone to start asking questions only for them to figure out they've been directing the future of their company on garbage numbers that don't reflect reality.
Asking an LLM to analyze data directly doesn’t work. But they’re great at writing scripts to analyze (and visualize) data. Anthropic just figured this out last week and gave Claude a mode that does that for you.
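To make the distinction concrete, here is a minimal sketch of the kind of deterministic script the comment is describing. The data and field names are made up for illustration; the point is that the numbers come from code, not from a model's summary, so they can't be hallucinated.

```python
# Sketch of the script-based approach: instead of pasting raw data into
# an LLM and trusting its summary, have the LLM emit a plain script like
# this one. The aggregates below are computed deterministically.
import statistics

# Hypothetical rows; a real script would load these from a CSV or database.
rows = [
    {"region": "east", "revenue": 120.0},
    {"region": "east", "revenue": 80.0},
    {"region": "west", "revenue": 200.0},
    {"region": "west", "revenue": 100.0},
]

# Group revenue by region.
by_region = {}
for row in rows:
    by_region.setdefault(row["region"], []).append(row["revenue"])

# Report totals and medians per region.
for region, values in sorted(by_region.items()):
    print(f"{region}: total={sum(values):.1f} median={statistics.median(values):.1f}")
```

Every figure in the output is auditable by reading a few lines of code, which is exactly what a hallucinated chart in a QBR deck is not.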
2 replies →
I have found:
Published AI generated code is a mild negative signal for quality, but certainly not a fatal one.
Published AI generated English writing is worthless and should be automatically ignored.
> Jevons paradox isn't relevant to cognitive surplus
Could you elaborate on this? Is it just a claim, or is there some consensus out there based on something that it doesn't/shouldn't apply?
Assume I want to believe exactly what you're saying. What is that, though?
a. "Has already won"
b. "Might be a year or two, or five, or ten"
You’re playing chess. You see that you have a forced mate in several moves. You’ve already won, but it will happen in 2, or 5, or 10 moves.
1 reply →
The best predictor I can find of a segment being green in median wage is if it's red in digital AI exposure.
So... What exactly are you talking about?
He is talking about the same thing as you, no? As you point out, the more AI exposure (red), the more likely to have higher wages (green). Which suggests that those who are embracing AI are those who are thriving the most. Same as what he suggested.
1 reply →
>opposed to anything AI
AI is great for searching, I'll give you that. And that itself is a big deal. In software development, there is also real value in using AI for code reviews. But I'm not sure how much it would be worth if you have to retrain a model with new information just to get better search results and code reviews.
Maybe that will be subsidized by all the people like you who want everything done by AI, so the rest of us can use it as a better search tool and for quick reviews... who knows!
Uh huh... but the data in Andrej's visualizer shows the software development growth outlook at 15% (much faster than average).
Over the past year (where Opus has supposedly changed the game), we're seeing ~10% more job postings for software developers compared to this time last year [1,2]
A huge amount of our work is not easily verifiable, therefore it's extremely hard to actually train an LLM to be better at it. It doesn't magically get better across the board.
AI HAS WON. SURF OR DROWN. YOU DONT KNOW WHATS COMING!!!?!?!
Stop with this doomer drivel. It's sick. It's not based in reality and all it does is stress innocent people out for no reason.
1: https://fred.stlouisfed.org/series/IHLIDXUSTPSOFTDEVE
2: https://trueup.io/job-trend
Ah, HN's favorite strawman: the "dogmatically and ideologically opposed to anything AI" person who, in my experience, largely doesn't exist.
However I was completely unimpressed with this tool when I saw it this weekend for two reasons:
The first is directly related to how this is built:
> These are rough LLM estimates, not rigorous predictions.
This visualization is neat (well except for reason number two), but it's pretty much just AI slop repackaged. There's no substance behind any of these predictions. Now I'm perfectly open to the critique that normal BLS predictions are also potentially slop, but I don't see how this is particularly valuable.
And the second: like 8% of the male population, I'm colorblind, so I can't read this chart.
For the record, I do agentic coding pretty much everyday, have shipped AI products, done work in AI research, etc.
Ironically, it's comments like yours that keep me the most skeptical. The fact that an attack on a strawman is the top comment really makes me feel like there is some sort of true mania here that I might even be a bit caught up in.
> AI has already won. It's taking over. It might be a year or two, or five, or ten, but AI isn't slowing down, nobody is going to pause, and there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term.
I think AI is not going anywhere.
I also don't think the future will play out as you envision. AI is a very poor replacement for humans.
And I say this as a misanthrope who doesn't have a particular beef against AI.
I think a lot of the pushback comes down to your attitude. The way you're talking about AI is like how the crypto bros talked about bitcoin. Just being very insistent on your point of view is a red flag. Either you can present new data to convince people, or your insistence will just look like it's emotional rather than rational.
I use AI every day as part of my work, it's very unclear to me where it's going and we have no idea if we're on an exponent or S-curve. Now, normally people talk with conviction because they have more data. But one of the breakthroughs of crypto was this social convention of just have very strong opinions based on nothing. A lot of that culture has come over to AI.
Your comment typifies this, it's all about I need to get on board, AI has already won, you've got an advantage over me because you realise this.
Go back and look at the actual article you're commenting on. Did the AI analysis of job exposure provide anything of value? I'm not totally convinced it did, and you didn't even think about it. What critical thinking did you do about the data that came out of this dashboard?
What doesn’t make sense to me about the AI Inevitabilism Embrace Or Die trope is the idea of a sudden trap door that will eliminate all the naysayers, avoidable only by Embracing. That doesn’t cohere well with how autonomous AI is or will be.
I could understand if all the naysayers doing old fashioned stuff like work all of a sudden have no more work to do. But the AI Embracers will have what, in comparison? Five years of experience manipulating large language models that are smarter than them by a thousand fold?
> AI has already won. [...] It might be a year or two, or five, or ten
brainbroken by chatbots lmao
Wishful thinking. AI is useful, but it’s far more niche than militantly pro-AI people like you want to believe. It’s a useful tool, nothing more.
I immediately turn off when I read extreme points of view. It tells me they are people who lack nuance - essentially, time wasters.
"AI has already won. It's taking over. "
Man.. I suggest you touch some grass. You are living in a bubble.
Eh, idk about this. One nice thing about humans is that they still feed themselves when the economy collapses.
> definite advantage for people who are already in the mindset of distinguishing and discriminating information and sources on merit
This cuts both ways...
> there's a whole shit ton of work people do that won't be meaningful or economically relevant in the very near term
What work do you think AI is going to replace? There are whole categories of people who are going to drown in the hubris of "AI being able to do the job" when it can't.
The moment one stops pretending that we're getting AGI, and views it as just another tool, the perspective changes. Strip away the hype and there is a LOT there... The walls of the garden are going to get ripped down (agents force the web open, and create security issues). They end lots of dark patterns: you can't make your crappy service hard to cancel, because an agent is more persistent than that. One-size-fits-all software is going to face a reckoning (how many things are jammed into Salesforce sideways... that don't have to be?). These things are existential threats to how our industry works TODAY, and no one seems to be talking about the impact on existing business models when the overhead of building software gets cut in half (and how it leads to more software, not less).
I'm very confused how you can put up such an obvious strawman, say all these wildly unsubstantiated things, and yet still get engagement. Who are you even talking to?
It's been several years and nothing has changed except the AI grift is crumbling as we get out of the post-covid slump.
It is free for you to say this, because if you're wrong, there will be no consequences. Words are cheap. No different than various CEOs saying "AI will replace these workers" and now having to hire back those they laid off. Klarna, Salesforce, etc. Will be a great comment to reference in the future to capture the exuberance of the times.
Companies Are Laying Off Workers Because of AI’s Potential - Not Its Performance
> The AI premium isn’t even reliable. By late 2025, Goldman Sachs Group Inc. found that investors were actually punishing AI-attributed layoffs, with shares falling an average of 2%. The analysts concluded that investors simply didn’t believe the companies. But Block’s surge shows the incentive hasn’t vanished. It’s just a lottery instead of a sure thing. And executives keep buying tickets.
> The broader data confirms the gap between narrative and reality. A National Bureau of Economic Research study published in February surveyed thousands of C-suite executives across the US, UK, Germany and Australia. Almost 90% said AI had zero impact on employment over the past three years. Challenger, Gray & Christmas tracked 1.2 million layoffs in 2025, and AI was cited in fewer than 55,000 of them. That’s 4.5%. Plain old “market and economic conditions” accounted for four times as many.
So! Sophisticated capital market participants don't believe this; why do people here?
AI is making CEOs delusional [video] - https://www.youtube.com/watch?v=Q6nem-F8AG8
I'll tell you what AI apparently can't replace and that is information graphics designers who are familiar with the exotic detail known as "contrast".
> You are an expert analyst evaluating how exposed different occupations are to AI. You will be given a detailed description of an occupation from the Bureau of Labor Statistics.
> Rate the occupation's overall AI Exposure on a scale from 0 to 10.
The sad part isn't that this is low-effort AI slop, but that intelligent people and policy makers are going to see it and probably make important decisions impacting themselves and others based on these numbers.
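To illustrate the criticism, here is a hypothetical sketch of what a "single prompt" pipeline like this amounts to: one templated prompt per BLS occupation description, with the model's reply parsed straight into the chart. The model call is stubbed out below; the template text is taken from the quoted prompt, but all function names and structure are illustrative, not the site's actual code.

```python
# Illustrative reconstruction of a one-prompt "exposure" pipeline.
# The only input is a templated prompt; the only output is a parsed digit.

PROMPT_TEMPLATE = (
    "You are an expert analyst evaluating how exposed different occupations "
    "are to AI. You will be given a detailed description of an occupation "
    "from the Bureau of Labor Statistics.\n"
    "Rate the occupation's overall AI Exposure on a scale from 0 to 10.\n\n"
    "Occupation: {description}"
)

def fake_model(prompt: str) -> str:
    # Stand-in for the LLM call; a real pipeline would hit an API here
    # and get back whatever the model feels like saying.
    return "7"

def rate_exposure(description: str, model=fake_model) -> int:
    # The model's free-text reply is taken at face value as the datapoint.
    score = int(model(PROMPT_TEMPLATE.format(description=description)).strip())
    if not 0 <= score <= 10:
        raise ValueError(f"score out of range: {score}")
    return score

print(rate_exposure("Software developers design computer applications."))
```

Nothing in this loop grounds the score in employment data; the "exposure" number is whatever the model replies, which is the whole objection.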
This is 99.44% slop! You are completely correct. The "exposure" is based entirely on vibes and does not correspond to observable reality. Down here in the real world the very first sector that is being disrupted is manual farm labor. They are out here with machine vision and quadcopters picking fruit. But according to the prompt that produces the treemap, manual labor has an exposure rank of zero.
[dead]
[dead]
This is an AI slop website, the same kind that gets spammed on Show HN. It doesn't matter if the author is incredibly wealthy.
Does it matter if the author is a renowned expert in the field?
Expert in AI != expert in labor markets.
All the "research" on the site comes from a single LLM prompt.
Show HN isn't about being an expert.
No. You can be an expert in one field, and have no idea what you're doing in another.
5 replies →
[dead]
Meanwhile my Show HN didn't even get a single upvote or any comments. Sigh...
Maybe you forgot AI, or blockchain, or DeFi - or all three? :-D /s
The VIEW could be AI slop, but the underlying CONTENT has some meaning.
There is definitely an impact on software engineering jobs at the moment: interns/juniors are struggling to find jobs, and companies are squeezing every bit of dev slack time to produce more stuff with AI.
> The VIEW could be AI slop, but underlying CONTENT has some meaning. There is definitely impact on Software engineering jobs at the moment, interns/juniors are struggling to find jobs
Is that notion supported by this content? The BLS outlook for most software engineering jobs is in the "much faster than average" growth range.
3 replies →