Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may already be finding its limits in that arena. That's setting aside the fact that more compute and bigger context windows aren't the right kind of progress anyway. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:
1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
"The only reason to reduce headcount is to remove people who already weren’t providing much value."
I wish corporations really acted this rationally.
At least where I live, hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending a significant portion of their time on administrative and bureaucratic tasks that were previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. The cost savings may look good on a spreadsheet, but the overall efficiency of the system suffered.
That's what I see when companies cut juniors as well. AI cannot replace a junior because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance.
If you take away the juniors, you are now asking your seniors to do that work instead, which is more expensive and wasteful. The PM cannot tell the AI junior what to do, for they don't know how. Then you say, hey, we also want you to babysit the LLM to increase productivity. Well, I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two types of time.
But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, obviously the hospital has accountants and analysts who will notice this, yes?
I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office: they have tons of assistants and hygienists, and the dentist just goes from room to room performing high-dollar procedures and very little "patient care." If small dentist offices have this all figured out, it seems a little strange that a massive hospital does not.
I'm a full-stack developer. Recently I've found that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI tends to shrink his/her estimates (kind of unconsciously). The work is sudden, but the salary remains steady. What a bummer.
Disagreed. You need more doctors, not useless secretaries.
Generating bureaucratic bullshit doesn't make any work go faster; it actually just creates more work at best and, in general, just slows everything down.
It is perfect that the primary stakeholder is responsible for his own bureaucratic impact. This way he'll learn to generate the minimum viable amount in order to stay efficient.
Otherwise they don't care and generate waste by the metric ton.
Because of the French hospital bureaucratic nightmare, for a simple 15-minute intervention (cyst removal), I had 2 appointments and received 4 different letters by post. Not only did they waste more of my time than necessary (every time you need to wait about 45 minutes before anything happens), but since the physician cannot be duplicated and I had to meet him each time, nothing of value was gained either.
With modern technologies, secretaries should barely exist. They still do because it's all about the laws and compliance; everyone is protecting his ass first and foremost. Without this, a system without the bureaucracy would be much more efficient. It's how they do it outside the western world basically.
Funny the original post doesn’t mention AI replacing the coding part of his job.
There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
I want to be optimistic. But it's hard to ignore what I'm doing and seeing. As far as I can tell, we haven't hit serious unemployment yet because of momentum and slow adoption.
I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.
>There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).
The most interesting questions are the ones that assume human equivalency.
Suppose an AI can produce like a human.
Are you ok with merging that code without human review?
Are you ok with having a codebase that is effectively a black box?
Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?
Are you ok with being dependent on the company providing this code generation?
Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?
Will we be ok if the well of public technical discussion LLMs are feeding from dries up?
Well, I'd just note that we're starting to see LLMs become responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it.
Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.
Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.
I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.
If anything the quality has gotten worse, because the models are now so good at lying when they don't know something that it's really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock safe? The model will tell you with complete confidence its code is perfect, and it'll either be right or lying; it never says "I don't know".
Every time OpenAI or Anthropic or Google announce a "stratospheric leap forward" and I go back and try it, and find it's the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture "I need to solve the problem I'm being asked to solve" instead of "I need to produce tokens that are likely to come after these other tokens".
The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield "write some code that does x", much faster without LLMs.
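For context on the "is the lock structuring deadlock safe?" question above: the classic thing a reviewer checks is whether every code path acquires locks in one consistent global order, which is exactly the kind of subtle property an LLM will confidently assert. A toy sketch (all names hypothetical) of the consistent-ordering discipline:

```python
import threading

class Account:
    _seq = 0
    def __init__(self, balance):
        self.balance = balance
        self.lock = threading.Lock()
        # Give each account a stable rank, used as a global lock order.
        Account._seq += 1
        self.rank = Account._seq

def transfer(a, b, amount):
    # Deadlock-safe only because both locks are always taken in rank order.
    # Unconditionally taking "a's lock then b's lock" would deadlock when two
    # threads transfer in opposite directions at the same time.
    first, second = (a, b) if a.rank < b.rank else (b, a)
    with first.lock, second.lock:
        a.balance -= amount
        b.balance += amount

x, y = Account(100), Account(100)
t1 = threading.Thread(target=lambda: [transfer(x, y, 1) for _ in range(1000)])
t2 = threading.Thread(target=lambda: [transfer(y, x, 1) for _ in range(1000)])
t1.start(); t2.start(); t1.join(); t2.join()
print(x.balance + y.balance)  # → 200
```

The point is that removing the rank comparison leaves code that looks identical in a diff and passes light testing, which is why "the model says it's fine" isn't review.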
idk man, I work at a big consulting company and all I'm hearing is dozens of people coming out of their project teams like, "yeah, I'm dying to work with AI, all we're doing is talking about it with clients"
It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet.
> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Any sources on that? Except for some big tech companies, I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
There were many secretaries up until the late 20th century who took dictation, either writing notes of what they were told or from a recording, then typed it out and distributed memos. At first, there were many people typing, then mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print anything out, and now instant messaging reduces email clutter and keeps messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While employers may not have held these people in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
The thing that replaced the old memos is not email, it's meetings. It's not uncommon to have meetings with hundreds of participants that in the past would have been a simple memo.
It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk to your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.
This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.
Overseeing robots is a time-limited activity. Even building robots has a finite horizon.
Current tech can't yet replace everything but many jobs already see the horizon or are at sunset.
The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.
This time around some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale. But others aren't, since the tech is run by small teams at large scale and has virtually no dependent industries around it, the way, say, cars do. It's just energy and GPUs.
Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale sector. Maybe a few tens of millions can be employed there?
Meanwhile I just don't see the designer + AI job role materializing. I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.
I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.
I think the big problem here though, is that humans go from being mandatory to being optional, and this changes the competitive landscape between employers and workers.
In the past a strike mattered. With robots, it may have to go on for years to matter.
> most companies will still have more work to do than resources to assign to those tasks
This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project, there was always too much work. And always too many projects. I am no longer with the company, but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development.
Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.
Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.
Productivity gains are more likely to be used to increase margins (profits, and therefore value to shareholders) than to reduce work hours.
At least since the Industrial Revolution, and probably before, the only advance that has led to shorter work weeks is unions and worker protections. Not technology.
Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.
In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.
"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."
> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.
Didn’t we also get standards of living much higher than he would ever imagine? I think blaming everything on billionaires is really misguided and shallow.
I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old line: "How did you go bankrupt? - Slowly at first, then all at once."
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
For what it's worth, the decline in the use of horses was much slower than you might expect. The Model T Ford reached peak production in 1925 [0], and for an inexact comparison (I couldn't find numbers for the US), the horse population of France started to decline in 1935 but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].
If there’s more work than resources, then is that low value work or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive but not sure it will be societally good.
Not low-value or it just wouldn't be on the board. Lower value? Maybe, but there are many, many reasons things get pushed down the backlog. As many reasons as there are kinds of companies. Most people don't work at one of the big tech companies where work priorities and business value are so stratified. There are businesses that experience seasonality, so many of the R&D activities get put on the backburner until the busy season is over. There are businesses that have high correctness standards, where bigger changes require more scrutiny, are harder to fit into a sprint, and end up getting passed over for smaller tasks. And some businesses just require a lot of contextual knowledge. I wouldn't trust an AI to do a payroll calculation or tabulate votes, for instance, any more than I would trust a brand new employee to dive into the deep end on those tasks.
> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
They have more work to do until they don't.
The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.
We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.
As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.
And now we have fast-fashion, with clothes so fragile that they might not last one wash, and yet still spend a lower percentage of our incomes on clothes than the pre-industrial age did. Even when demand is boosted by having clothes that don't last, we still make enough to supply demand.
Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.
Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm
> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.
Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.
Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".
But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up just like all the rest of us can, but it's still a job task that this generation of AI is still automating at the same time as all the other bits.
I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending.
We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength where the scientists are trying to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.
An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
I think a lot about how much we altered our environment to suit cars. They're not a perfect solution to transport, but they've been so useful we've built tons more road to accommodate them.
So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.
This is a really interesting observation! Cars don't have to dominate our city design, and yet they do in many places. In the USA, you basically only have NYC and a few less convenient cities to avoid a city designed for cars. Society has largely been reshaped with the assumption that cars will be used whether or not you'd like to use one.
What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don't integrate technology in their lives as much as the average person does? That's a much more extreme lifestyle change than moving to NYC to get away from cars.
"Tons of power generation?" Perhaps we will go in that direction (as OpenAI projects), but it assumes the juice will be worth the squeeze, i.e., that scaling laws requiring much more power for LLM training and/or inference will deliver a qualitatively better product before they run out. The failure of GPT 4.5, while not a definitive end to scaling, was a pretty discouraging sign.
We didn't just build roads, we utterly changed land-use patterns to suit them.
Cities, towns, and villages (and there were far more of the latter then) weren't walkable out of choice, but necessity. At most, by the late 19th century, urban geography was walkable-from-the-streetcar, and suburbs walkable-from-railway-station. And that only in the comparatively few metros and metro regions which had well-developed streetcar and commuter-rail lines.
With automobiles, housing spread out, became single-family, nuclear-family, often single-storey, and frequently on large lots. That's not viable when your only options to get someplace are by foot, or perhaps bicycle. Shopping moved from dense downtowns and city-centres (or perhaps shopping districts in larger cities) to strips and boulevards. Supermarkets and hypermarkets replaced corner grocery stores (which you could walk to and from with your groceries in hand, or perhaps in a cart). Eventually shopping malls were created (virtually always well away from any transit service, whether bus or rail), commercial islands in shopping-lot lakes. Big-box stores dittos.
It's not just roads and car parks, it's the entire urban landscape.
AI, should this current fad continue and succeed, will likely have similarly profound secondary effects.
Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
> An engine performs a simple mechanical operation
It only appears "simple" because you're used to seeing working engines everywhere without ever having to maintain them, but neither the previous generations nor the engineers working on modern engines would agree with you on that.
An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.
People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.
The question is how do our individuals, and more importantly our various social and economic systems handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
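(For anyone who hasn't done it: "double-entry" just means every transaction posts equal debits and credits, so the whole ledger always nets to zero. A toy sketch, hypothetical names, of the invariant those clerks enforced by hand and the computers took over:)

```python
# Minimal double-entry ledger: a transaction is a list of postings
# whose amounts must sum to zero before it can be recorded.
ledger = []

def post(entries):
    # entries: list of (account, amount); debits positive, credits negative.
    if sum(amount for _, amount in entries) != 0:
        raise ValueError("unbalanced transaction")
    ledger.extend(entries)

# A customer deposits $100: debit cash, credit the customer's liability account.
post([("cash", 100), ("deposits:alice", -100)])

# Because every transaction balances, the whole ledger always nets to zero.
print(sum(amount for _, amount in ledger))  # → 0
```

The check itself is trivial; the clerks' job was doing it by hand, thousands of times a day, without errors. That's exactly the shape of work computers eat.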
There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".
You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.
"I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better."
I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides.
Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.
An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and have no liability for it. AI has no skin and depending on application, much higher upper bound for damage. A digit read wrong in a medical transcript, patient dies.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
Managing risks: you can't automate that. Every project and task needs a responsibility sink.
> Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive.
>
> And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spit out from SAP was mind numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me. Interacting with the terrible software tools is horrific.
I guess people who would have done accounting are doing other, hopefully more interesting jobs, in the sense that the absolute number of US accountants is in steep decline due to the low pay and the highly boring work. I myself am certainly one of them, as a software-engineer career switcher. But the actual work of a modern accountant has not been improved in terms of interesting tasks to do. It's also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into.
AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.
The owners of the tech need to reinvest in the hosts.
Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.
There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good new human Go players. It got there by playing against itself, eventually discarding human game knowledge entirely.
> What happens when there are no more hosts to donate more training-blood?
LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through every conceivable task and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in the real world.
I'd be more worried about the implicit power imbalance. It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
> It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to help you become independent.
I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Did you ever think you would see a chart showing how the population of horses was decimated by the mass introduction of efficient engines, accompanied by an implication that there is a parallel to the human population? And the article is not written with any kind of cautionary humanitarian approach, but rather from the perspective of some kind of economic determinism. Did you ever think you would be compared to a gasoline engine, and everyone would discuss this juxtaposition from a purely economic perspective? And barely anyone shares a thought like "technology should be sanctioned by the populace, not the other way around"? And the guy writing this works at Anthropic? The very guy who makes this thing happen, but he is only able to conclude with "I very much hope we'll get the two decades that horses did". What the hell.
I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies but I also see it in ordinary conversations on the caltrain or in coffee shops.
Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.
If you want another data point, most people I know in both Japan and Canada use some sort of AI as a replacement for any kind of query. Almost nobody in my circles is in tech or tech-adjacent fields.
So yeah, it’s just everyone collectively devaluing human interaction.
Making predictions about how it will turn out vs. designing how it should be. Up till now, powerful people needed lots and lots of other humans to sustain their power and life. That dependency gave the masses leverage. Now, I'd like a society where everyone is valued for being human and such. With democracies we got quite far in that direction. Attempts to go even further... let's just say they didn't work out. And right now, especially in the US, the societal system seems to be going back to "power" instead of rules.
Yeah, I see a bleak future ahead. Guess that's life, after all.
Those nerds can now develop an AI robot to make love to their wives while they get back to blogging about accelerationism with all the time they freed up.
I can't say I'm shocked. Disappointed, maybe, but it's hardly surprising to see the sociopathic nature in the people fighting tooth and nail for the validation of venture capitalists who will not be happy until they own every single cent on earth.
There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.
AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people go mask off; when people unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.
I think, like the Bill Gates haters who interpret him talking about reducing the rate of birth in Africa as wanting to kill Africans, you're interpreting it wrong.
The graph says horse ownership per person. People probably stopped buying horses, they let theirs retire (well, to be honest, probably also sent to the glue factory), and when they stopped buying new horses, horse breeding programs slowed down.
One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised in farms and are trained to participate in events like dressage and other equestrian sports.
We don't know what the author had in mind, but one has to really be tone deaf to let the weirdness of the discussion go unnoticed. Take a look at the last paragraphs in the text again:
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
While most of the text is written from a cold economic(ish) standpoint, it is really hard not to get a bleak impression from it. And the last three sentences express that in a vague way too. Some ambiguity is left on purpose so you can interpret the daunting impression your own way.
The article presents you with crushing juxtaposition, implicates insane dangers, and leaves you with the feeling of inevitability. Then back to work, I guess.
Well, in this case corporations stop buying people and just fire them instead of letting them retire. Or an army of Tesla Optimi will send people to the glue factory.
That at least is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.
> And the article is not written in any kind of cautionary humanitarian approach, but rather from perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them to become, in large part, disconnected from the rest of humanity. I think they see that aloofness as a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."
I always thought intentionally applying emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach the conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think. Then there's always the fact that the people who control the world have no emotional attachment to you either. They see you as something closer to a horse than to their kin. I imagine a healthy dose of self-dehumanization actually helps us understand the current trajectory of our future. And people tend to vastly overvalue our "humanity" anyway. I'm guessing the ones that displaced horses didn't give much of a fuck about what happened to horses.
I wish I knew what you were so I could say "one of the many terrible things about __" about you. Anyway, I think you have an unhealthy emotional attachment to your emotions.
This strikes me as more in the tone of Orwell, who used a muted emotional register to elicit a powerful emotional response from the reader as they realize the horror of what's happening.
> Have you ever thought that you would see a chart showing [...]
Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's the planet of the Phools in Stanisław Lem's Star Diaries. There are the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation/technology apocalypse for humans in the past 100 years that it's hard to catalog; much of what I have read or seen in this vein I've totally forgotten.
It's not remotely a surprise to see this amp up with AI.
Yeah, I am familiar with these works of art and probably most people are. However, they were mostly speculative. Now we are facing some of their premises in the real world. And the guys who push the technology in a reckless way seem to notice this, but just nod their heads and carry on.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
> Have you ever thought that you would see a chart showing how population of horses was decimated by the mass introduction of efficient engines accompanied by an implication that there is a parallel to human population?
Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU
Oh, _now_ computer industry people are worried? Kind of late to the party.
Computerization, automation and robotics, document digitization, the telecoms and wireless revolution, etc. have been upending peoples' employment on a massive scale since before the 1970s. The reaction of the technologists has been a rather insensitive "adapt or die", "go and retrain", and analogies to buggy whip manufacturers when the automobile became popular. The only reason people here suddenly give a hoot is because they think the crosshairs are drifting towards them.
It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71
It's been a decade or so, but I'm mostly called a "resource" at work, as in Human Resource. Barely a colleague, comrade, co-worker... just a resource, a plug in the machine to be replaced by an external resource to improve profit margins.
You can kind of separate the technical side of what will likely happen - AI gets smarter and can do the jobs - from how we deal with that. It could be heaven-like, with abundance and no one needing to work, or a post-apocalyptic dystopia, or, most likely, somewhere in the middle.
We collectively have a lot of choice on the how we deal with it part. I'm personally optimistic that people will vote in people friendly policies when it comes to it.
Not seeing any horse heavens, do you have reason to believe humans (i.e. those not among the ruling class) are going to have a different fate from the horses?
I agree we can kinda make the argument that abundance is soon upon us, and that humanity as a whole will embrace the ideas of equality and harmony etc etc... but still, there's a kind of uncanny dissociation in happily talking about horses disappearing and humans being next while you work on the very product that causes your prediction to come true, and sooner.
It isn't just AI. So much of the US "Tech"/VC scene is doing outright evil stuff, with seemingly zero regard for any consequence or even a shred of self awareness.
So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.
Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.
Sure, I guess developing software making people addicted to gambling pays the bills (and more than that), but I haven't seen even that. These industries just exist and people seem to work for them as if it was just a normal job, with zero moral implications.
My experience so far has been that the knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
In this instance, in particular, I wouldn't expect our preferences to bear any relevance.
> knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
I don't know if you are intentionally being vague and existential here. However, context matters, and claiming the predictive power is zero sounds unreasonable in the face of history.
Consider humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I'm skeptical of the predictive power being zero.
In the US at least, there is a Congress incapable of taking action and a unilateral President fully on the side of tech CEOs with the heaviest investments in AI.
There is no evidence supporting short term optimism. Every indication the large corporations dictating public policy will treat us exactly like those horses when it comes to economic value.
I took the article as meaning white collar tech jobs that will go away, so those people will need to pivot their career, not humans.
However, it does seem like time for humanity to collectively think hard about our values and goals, and what type of world and lives we want to have in an age where human thought, and perhaps even human physical labor are economically worthless. Unfortunately this could not come at a worse time with humanity seemingly experiencing a widespread rejection of ideals like ethics, human rights, and integrity and embracing fascism and ruthless blind financial self interest as if they were high minded ideals.
Ironically, I think tech people could learn a lot here from groups like the Amish- they have clearly decided what their values and goals are, and ruthlessly make tech serve them, instead of the other way around. Despite stereotypes, Amish are often actually heavy users of, and competent with modern tech in service of making a living, but in a way that enforces firm boundaries about not letting the tech usurp their values and chosen way of life.
It was always like this. Look at history, sometimes quite recent history: people were always treated like a tool - for getting rich, for gaining power, for conquering other countries, for serving.
It's interesting though how the narrative is all bright-eyed idealism, make the world a better place, progress, etc until at some point the masks go off and suddenly it's "always has been, move along, nothing to see here"...
For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
Today, the stock market and material wealth dominates. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.
> Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
Not sure if by accident or not, but that’s what we are according today’s “tech elite”.
Therefore, the most profitable disposition for this dubious form of capital is to convert them into biodiesel, which can help power the Muni buses
I think we have a bunch of people in the United States who see who we elected for leadership and the people he chose to advise him, and they have given up all hope. That despondent attitude is infusing their opinions on everything. But chin up: he's really old, and he doesn't seem very healthy, or he'd be out there leading the charge, throwing those rallies every weekend of which he used to be so fond.
And low information business leaders will attempt to do all the awful things described here and the free market will eliminate them from the game grid one horrible boss at a time. But if you surround yourself with the AI doomers and bubblers, how will you ever encounter or even consider positive uses of the technology? What an awful place to work Anthropic must be if they truly believe they are working on the metaphorical equivalent of the Alpha Omega bomb. Spoilers: they're not.
Meanwhile, in the rest of the world, many look forward to harnessing AI to ameliorate hunger, take care of the elderly, and perform the more dangerous and tedious jobs out there. Anthropic guy needs to go get a room with Eliezer Yudkowsky. I guess the US is about to get horsed by the other 96% of the planet.
Go ahead, compare me to a horse, a gasoline engine, or even call me a meatbag. Have we become little more than Eloi snowflakes to be so offended by that?
But I guess as long as an electoral majority here continues to cheer on one man draining the juice of this country down to a bitter husk, the fun and games will continue.
> But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.
At this point in time, his whimsy is the only thing holding back younger, more extreme acolytes from doing what they want. Once he's gone, lol.
Machines to “take care of the elderly” is one of the worst possible uses of this technology. We desperately need more human interaction between the old and the young, not less.
Yes. Follow in the path of the tech leaders. They are optimists. They totally aren't building doomsday bunkers or trying to build their data centers with their own nuclear power plants to remove them from society and create self contained systems. Oh wait. Crap...
Money isn't the only thing a job provides. Those are all professions that provide a sense of meaning, so monetary compensation doesn't need to be as high to attract and keep people.
I would say that it is bad when it has a large derivative (positive or negative). However, the problem is not the number of human beings but the fact that the agency existing people have becomes obsolete.
It's bad if it goes down by more than about 1.2% per year. That would mean zero births, present-day natural deaths. Of course zero births isn't presently realistic, and we should expect the next 10-30 years to significantly increase human lifespan. If we assume continued births at the lowest rates seen anywhere on the planet, and humans just maxing out the present human lifespan limit, then anything more than about a 0.5% decrease means someone is getting murked.
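The 1.2% figure above can be sanity-checked with a back-of-envelope sketch: with zero births and a roughly stable age distribution, about 1/life-expectancy of the population dies each year. A minimal sketch (the 83-year lifespan is my own illustrative assumption, not from the comment):

```python
# Back-of-envelope check of the "zero births" bound on natural population decline.
def max_natural_decline(life_expectancy_years: float) -> float:
    """With zero births and a stable age structure, roughly 1/life-expectancy
    of the population dies each year, which caps natural decline at that rate."""
    return 1.0 / life_expectancy_years

# An assumed ~83-year life expectancy gives ~1.2% per year,
# consistent with the figure in the comment above.
print(f"{max_natural_decline(83):.3f}")  # -> 0.012
```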
That depends on what you think jobs and the economy are for, generally.
If you think the purpose of the economy is for the economy to be good then it doesn't matter. If you think it exists to serve humanity then... You really wouldn't need to ask the question, I imagine.
It's a con job and a strawman take. If we collectively think token generators can replace humans completely, well then we've already lost the plot as a global society.
> I may have developed some kind of paranoia reading HN recently
My comments getting downvoted, which used to be pretty rare, has been happening lately on legitimate but never-discussed points about AI that I validated IRL. There's no resonance between the way AI is discussed on HN and IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.
I don't think that bots have taken over HN. I meant that the frontier of the tech research brags about their recklessness here and the rest of us have become bystanders to this process. Gives me goosebumps.
I wouldn't read too much into it. Anytime I post something silly and stupid, it becomes the top comment. Anytime I post something important, I get downvotes. That's just normal. I think that's just human nature...
And the votes are pretty random too. Sometimes it'll go from -5 to +10 in the span of a few hours. Just depends on who's online at the time...
And yet don't they pull on our heartstrings? Isn't that funny? A random number generator for the soul...
Honestly I can't tell if your incredulity is at the method of analysis for being tragically mistaken or superficial in some way, at the seemingly dehumanizing comparison of beloved human demonstrations of skill (chess, writing) to lowest common denominator labor, or the tone of passive indifference to computers taking over everything.
I think the comparisons are useful enough as metaphors, though I wonder at the analysis, because it sounds like someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.
I'd like to note here that the lifespan of a horse is 25-30 years. They were phased out not with mass horse genocide, but likely in the same way we phase out Toyota Corollas that have gotten too old. Owners simply didn't buy a new horse when the old one wore out, but bought an automobile instead.
Economically it is no different from demand for Mitsubishis decreasing, except the vehicle in this case eats grass, poops, and feels pain.
If you want to analogize with humans, a gradual reduction in breeding (which is happening anyways with or without AI) is probably a stronger analogy than a Skynet extinction scenario.
Truth is this is no different than the societal trends that were introduced with industrialization, simply accelerated on a massive scale.
The threshold for attaining wealth through education is bumping up against our natural human breeding timeline, delaying childbirth past the optimal human fertility ages in the developed world. The amount of education needed to achieve certain types of wealth will stretch into decades, putting even more strain on fertility metrics. Some people will decide to have more kids and live purely off whatever limited welfare the oligarchs in charge decide is acceptable. Others will delay having children far past natural human fertility timespans or forgo having children at all.
If we look at it this way, a reduction in human population would be contingent on whether you think human beings exist and are bred for the purposes of labor.
I believe most people would agree with me that the answer is NO.
The analogy to horses here then is not individuals, but specific types of jobs.
Honestly, the answer for me is yes. I had expected it. The signs were in all the comments that take the market forces for granted. All the comments that take capitalism as a given and immutable law of nature. They were in all the tech bros that never ever wanted to change anything but the number of zeros in their bank account after a successful exit.
So yes, I had that thought you are finally having too.
I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never tired mentor available. It gives them more confidence in the first drafted code changes / design docs. But I don't think the horse analogy works.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks with junior folks questions. Those are now your Gemini / chat gpt sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
That sounds like a horrible onboarding experience. Human mentors provide a lot more than just answers to questions: context, camaraderie, social skills, even coping mechanisms. Starting a new job can be terrifying for juniors, and if their only friend is a faceless chat bot...
You're right. We need to keep tabs on the culture for new hires for the reasons you mentioned. LLMs are really good at many onboarding tasks, but not the social ones.
I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering some of them. The lack of building human context / connections etc is real, and I don't think LLMs can meaningfully help there. Hence my skepticism for the horse analogy.
This sounds horrible. Onboarding should ideally be only marginally about the "what". After all, we already have a very precise and unambiguous system to tell us what the system does: the code.
What I want to know when I join a company is "why" the system does what it does. Sure, give me pointers, some overview of how the code is structured, that always helps, but if you don't tell me why how am I supposed to work?
$currentCompany has the best documentation I've seen in my career. It's been spun off from a larger company, from people collaborating asynchronously and remotely whenever they had some capacity.
No matter how diligent we've been, as soon as the company started in earnest and we got people fully dedicated to it, there's been a ton of small decisions that happened during a quick call, or on a slack thread, or as a comment on a figma design.
This is the sort of "you had to be there" context the onboarding should aim to explain, and I don't see how LLMs help with that.
We are now at a point where the tech can help with both of those, today. You can have a cc session "in a loop" going through your docs/code trying to do x and y, and if it gets stuck, that's a pretty good signal that something sucks there. At the very least you get a heatmap of what works ootb and what needs more eyes.
Both questions are getting scary good answers from the latest models. Yes, I tried, on a large proprietary code base which shouldn’t be included in any training set.
The linked short story is barely 5 paragraphs long. You could have just read it instead of writing an insubstantial remark like this. It’s a fun anecdote about a famous programmer (Bill Atkinson).
Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
So, a free idea from me: train the next coding LLM to produce not regular text, but patches which shortens code while still keeping the code working the same.
They can already do that. A few months ago I played around with the kaggle python golf competition. Got to top 50 without writing a line of code myself. Modern LLMs can take a piece of code and "golf" it. And modern harnesses (cc / codex / gemini cli) can take a task and run it in a loop if you can give them clear scores (i.e. code length) and test suites outside of their control (i.e. the solution is valid or not).
No idea why you'd want this in a normal job, but the capabilities are here.
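For the curious, the "run it in a loop with a clear score and an external test suite" idea described above can be sketched in a few lines. This is a hypothetical harness, not the actual kaggle setup: `ask_llm_to_shorten` stands in for a real model call, and the toy spec (`solve()` must return 42) replaces a real test suite held outside the model's control.

```python
# Sketch of a golf harness: keep a candidate only if it is shorter AND still
# passes tests the model cannot modify.

def passes_tests(code: str) -> bool:
    """Run the candidate and check it against a fixed (toy) spec."""
    env: dict = {}
    try:
        exec(code, env)
        return env.get("solve", lambda: None)() == 42
    except Exception:
        return False

def golf_loop(code: str, ask_llm_to_shorten, rounds: int = 5) -> str:
    """Iteratively ask the model to shorten the code, scoring by length."""
    best = code
    for _ in range(rounds):
        candidate = ask_llm_to_shorten(best)
        if len(candidate) < len(best) and passes_tests(candidate):
            best = candidate
    return best

# Toy stand-in for the model: strip comments and blank lines.
def dummy_shorten(code: str) -> str:
    lines = [l for l in code.splitlines()
             if l.strip() and not l.strip().startswith("#")]
    return "\n".join(lines)

long_version = "# returns the answer\n\ndef solve():\n    return 42\n"
shortened = golf_loop(long_version, dummy_shorten)
print(len(shortened) < len(long_version))  # -> True
```

The key design point is that the scorer (code length) and the tests live outside the model's reach, so the loop cannot "win" by deleting the tests.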
It might be better to think about what a horse is to a human, mostly a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.
In times past, the only people on earth whose standard of living was raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed, we learned to make technologies that provide enough energy slaves to the common man that everyone lives a life a king would have envied in times past.
So the question arises as to whether AI or the pursuit of AGI provides more or less energy slaves to the common man?
The big problem I see with AI is that it undermines redistribution mechanisms in a novel and dangerous way; despite industrialization, human labor was always needed to actually do anything with capital, and even people born in poverty could do work to get their share of the growing economical pie.
AI kinda breaks this; there is a real risk that human labor is going to become almost worthless this century, and this might mean that the common man ends up worse off despite nominal economic growth.
The goal is to eradicate the common man. It turns out you don't need a lot of energy, food, water, or space if there aren't 8 billion humans to supply. It's the tech billionaires' dream: replacing humans with robotic servants. Corporations do not care about the common man.
Sounds like any post-secondary student, graduate student, or management consultant out there, since there are very often page/word-count or billable-hours requirements. Considering the model corpora, wordiness wins out.
The chart is actually words "thought or written" so I guess they are running up the numbers even more by counting Claudes entire inner monologue, on top of what it ultimately outputs.
There was a time, when these models were novel, that I'd use them to write for me. After a year or so the verbosity and lack of personality got old. Now all I have is a decent proofreader. Maybe they'll take over my job, but I'm finding the trend going the other way right now.
It's not merely cost per word, but it is even more bizarre: "cost per word thought", whatever that is. Most of these "word thoughts" from LLMs of today are just auto-completed large dumps of text.
How about we stop trying analogies on like clothing and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.
Too much is on the line here regardless of what ultimately ends up being true or just hype.
It’s hard to filter the hot air from the realistic predictions. I’ve been hearing for over 10 years now that truck drivers are obsolete and that trucks will drive themselves. Yet today truck drivers are still very much in demand.
While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.
Probably the point is to think about whether the horse or chess-engine analogy is a good one. The premise is that there will come a point when the technology suddenly makes the alternative obsolete. I don't have good reasons to think that AI will not eventually be able to automate simple jobs with an acceptable error rate, and once that happens, whole categories of jobs will evaporate. Probably people-facing roles, Excel-model building, transaction processing, the same thing day in and day out: those teams may be gone, with only a person or two left to do a final review.
Against that you have the Moore's-law-like predictions, from Moravec and the like, that AI would reach around human level around now, which have proved fairly spot on. I think you may find it's more like the AI chess-rating graph than the weather.
I think that you're on to something here, though I agree more with your first sentence than the second.
AI is not identical to, as the article compares, mechanical power.
But your weather-forecasting comment suggests a possible similarity (though not the one you go to): for all the millions-fold increase in compute power, and the increased density and specificity of meteorological measurements, our accurate weather-forecasting window has only extended by a factor of two or so (roughly five days to ten). That is, there are applications for which vastly more information-processing capacity provides fairly modest returns.
And there are also those in which it's transformative. I'd put reusable rockets in that category, where we can now put sufficiently-reliable compute (and a whole bunch of rocket-related hardware) on a boost-phase rocket such that it can successfully soft-land.
For some years I've been thinking of the notion of technology not as some general principle ("efficiency" is the classic economics formulation), but as a set of specific mechanisms each of which has specific capabilities and limitations.[1] I've held pretty constant with nine of these:
1. Fuels. Applying more (or more useful) energy to a process.
2. Energy transmission and transformation.
3. Materials. Specific properties, abundance, costs, effects, limitations.
4. Process knowledge --- how to do things. What's generally described as "technical knowledge", here considered as a specific mechanism of technology.
5. Structural or causal knowledge --- why things work. What's generally described as "scientific knowledge".
6. Networks. Interactions between nodes via links, physical or virtual, over which matter, energy, information, or some mix flow. Transport, comms, power, information.
7. Systems. Constructs including sensing, processing, action, and feedback. Ranging from conceptual to mechanical to human and social.
8. Information. Sensing, perceiving, processing, storing, retrieving, and transmitting. Ranging from our natural senses to augmented ones, from symbolic systems (language, maths) to algorithms.
9. Hygiene. Sinks and unintended consequences, affecting the function and vitality of systems, and their mitigations or limits.
AI / AGI falls into the 8th category: information, specifically information processing. And as such, getting back to my original point, we can compare it with other information-related technological innovations: speech, writing, maths, boolean logic, switches (valves, transistors, etc.), information storage/retrieval, etc. And, yes, human thought processes. We do have some priors we can look at here, and they might help guide us in what a true AGI might be able to accomplish, and what its limitations may be.
It's often noted (including in this thread) that AGI would not presently be able to persist without copious human assistance, in that it's predicated on a vast technological infrastructure only a small portion of which it would be capable of substituting for. It's quite likely that AGI would be both competitive with and complementary to much human activity. In the horse analogy, it's worth noting that during the first stage of mechanised transport development, with steam shipping and rail technology, horses were strongly complementary: they fulfilled the last-mile delivery role which steamships and locomotives couldn't. Horse drayage populations actually boomed during this period. It was the development of ICE-powered lorries that finally out-competed the horse-drawn cart for intra-urban delivery. AGI-as-augmenting-humans is an already highly-utilised model, and will likely persist for some time. Experiments in AGI replacing humans will no doubt occur, some successful, others not. I'd suggest that my 9th category, hygiene, and specifically the failure modes of AGI, will likely prove highly interesting.
Mechanised transport also relies heavily on fuels and/or energy storage. The past 200 or so years were predicated on nonrenewable fossil fuels, first coal then oil, and there were several points in that timeline where continued availability of cheap fuels was seriously in question. We're now reaching the point where even given abundant supply, the relatively-clean byproducts of use are proving, at scales of current use, incompatible with climatic stability, possibly extending to incompatible with advanced technological civilisation or even advanced life on Earth (again, category 9).
AGI relies on IC chip manufacture (the province of vanishingly few companies), on copious amounts of electricity, scarce physical resources, and various legal regimes concerning use of intellectual works, property, profit, and more (categories 1, 2, 3, and 7, at a minimum). Whether or not a world with pervasive AGI proves to be a stable or unstable point is another open question.
I think my software engineering job will be safe so long as big companies keep using average code as their training set. This is because the average developer creates unnecessary complexity which creates more work for me.
The way the average dev structures their code requires like 10x the number of lines as I do and at least 10x the amount of time to maintain... The interest on technical debt compounds like interest on normal debt.
Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.
IMO, it's not so different to how entrepreneurship works. But with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.
> Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies
I am regularly tempted to do this (and have done it a few times), but unless I truly own the project (as tech lead or something), I stop myself. One reason is reluctance to trespass uninvited on someone else's territory of responsibility, even if they do a worse job than I could. The human cost of such a situation (to the project, and ultimately to myself) is usually worse than the cost of living with the status quo. I wonder what your thoughts are on this.
Humans don’t learn to write messy complex code. Messy, complex code is the default, writing clean code takes skill.
You’re assuming the LLM produces extra complexity because it’s mimicking human code. I think it’s more likely that LLMs output complex code because it requires less thought and planning, and LLMs are still bad at planning.
Totally agree with the first observation. The default human state seems to be confusion. I've seen this over and over in junior coders.
It's often very creative how junior devs approach problems. It's like they don't fully understand what they're doing and the code itself is part of the exploration and brainstorming process trying to find the solution as they write... Very different from how senior engineers approach coding when it's like you don't even write your first line until you have a clear high level picture of all the parts and how they will fit together.
About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable... So if it sees your code is poorly architected, it will generate more of that poorly architected code on top. If it sees hacks in your codebase, it will assume hacks are OK and give you more hacks.
When I use an LLM on a poorly written codebase, it does very poorly and it's hard to solve any problem or implement any feature and it keeps trying to come up with nasty hacks... Very frustrating trial and error process; eats up so many tokens.
But when I use the same LLM on one of my carefully architected side projects, it usually works extremely well, never tries to hack around a problem. It's like having good code lets you tap into a different part of its training set. It's not just because your architecture is easier to build on top, but also it follows existing coding conventions better and always addresses root causes, no hacks. Its code style looks more like that of a senior dev. You need to keep the feature requests specific and short though.
You're just very opinionated. Other software engineers give you space because they don't want to confront you, and conflict with you is just a waste of time.
Six months is also the average time it takes people like you to burn out on a project. It usually starts with a relatively simple change or addition requested by a customer that turns into a three-month refactor, "because the architecture is wrong". And we just let you do it, because we know tilting at windmills is futile.
Depends on the size and complexity of the problem that the system is solving. For very complex problems, even the most succinct solution will be complex and not all parts of the code can be throwaway code. You have to start stacking the layers of abstractions and some code becomes critical. Like think of the Linux Kernel, you can't throw away the Linux Kernel. You can't throw away Chromium or the V8 engine... Millions of systems depend on those. If they had issues or vulnerabilities and nobody to maintain, it would be a major problem for the global economy.
Even if a throw away and replace strategy is used, eventually a system's complexity will overrun any intelligence's ability to work effectively with it. Poor engineering will cause that development velocity drop off to happen earlier.
Although it's sad, I have to agree with what you're alluding to. I think there is huge overhead and waste (in terms of money, compute resources and time) hidden in the software industry, and at the end of the day it just comes down to people not knowing how to write software.
There is a strange dynamic currently at play in the software labour market where the demand is so huge that the market can bear completely inefficient coders. Even though the difference between a good and a bad software engineer is literally orders of magnitude.
Quite a few times I encountered programmers "in the wild" - in a sauna, on the bus etc, and overheard them talking about their "stack". You know the type, node.js in a docker container. I cannot fathom the amount of money wasted at places that employ these people.
I also project that actually, if we adopt LLMs correctly, these engineers (which I would say constitute a large percentage) will disappear. The age of useless coding and infinite demand is about to come to an end. What will remain is specialist engineer positions (base infra layer, systems, hpc, games, quirky hardware, cryptographers etc). I'm actually kind of curious what the effect on salary will be for these engineers, I can see it going both ways.
If they became big companies with that "unnecessary complexity", maybe code quality doesn't matter as much as you want to believe. Besides, even the fastest and best-behaved horses were replaced.
This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.
Sure, but if you look at more complex picture of engine development you could just as easily support the proposition that programmers are currently not in any danger (by pointing out that the qualitative differences between IC and steam engines were decisive when it comes to replacing horses, and the correct analogy is that much like a steam engine could never replace a horse, a transformer model can never replace a human).
Not detracting from the article, I think it's a fun way to shake your brain into the entirely appropriate space of "rapid change is possible"!
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?
> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.
I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".
I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.
Engine efficiency, chess rating, AI cap ex. One example is not like the other. Is there steady progress in AI? To me it feels like it’s little progress followed by the occasional breakthrough but I might be totally off here.
The only 'line go up' graph they have left is money invested. I'm even dubious of the questions answered graph. It looks more like a feature added to internal wiki that went up in usage. Instead it's portrayed as a measure of quality or usefulness.
I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.
What example do you need? In every single benchmark AI is getting better and better.
Before someone says "but benchmark doesn't reflect real world..." please name what metric you think is meaningful if not benchmark. Token consumption? OpenAI/Anthropic revenue?
Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.
This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.
AI is getting better at every benchmark. Please ignore that we're not allowed to see these benchmarks and also ignore that the companies in question are creating the benchmarks that are being exceeded.
What metrics, not controlled by the industry, show AI getting better? Genuinely curious, because those "ranking sites" seem to me to be infested with venture capital, so hardly fair or unbiased. The only reports I hear from academia are the ones that are overly negative on AI.
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative
This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.
"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something we can recover from even over the longer term.
So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.
Turns out the chart is about farm horses only, as counted by the USDA, not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.
---
City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.
City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.
My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.
"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.
In that analogy "someone" is an AI, who of course switches from answering questions from humans, to answering questions from other AIs, because the demand is 10x.
> Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox
I think that it's true that governments want the efficiency gains but it's false that they don't anticipate the consumption increases. Nobody is spending trillions on datacenters without knowing that demand will increase, that doesn't mean we shouldn't make them efficient.
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
Whether people are interchangeable with each other isn't the point. The point is whether AI is interchangeable with jobs currently done by humans. Unless and until AI training requires 1000 different domain experts, the current projection is that at some point AI will be interchangeable with all kinds of humans...
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
Aren't you guys looking forward to the day when we get the opportunity to go the way of all those horses? You should! I'm optimistic; I think I'd make a fine pot of glue.
Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
And roads, and other auto-friendly (or auto-dependent) infrastructure and urban / national land-use.
Cars went from a luxury to a necessity, though largely not until after WWII in the US, and somewhat later in other parts of the world.
There remain areas where a car is not required, or even a burden. NYC, and a few major metropolitan regions, as well as poorer parts of the world (though motorcycles and mopeds are often prevalent there).
It’s both. A steam engine at 2% efficiency is good only for digging up more coal for itself, and barely so. Completely different story at 20%. Every doubling is a step function in some area as it becomes energetically and economically rational to use it for something.
What exactly does specifically engine efficiency have to do with horse usage? Cars like the Ford Model T entered mass production somewhere around 1908. Oh, and would you look at the horse usage graph around that date! sigh
The chess ranking graph seems to be just a linear relationship?
> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
>> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.
So more == better. sigh. Ran any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!
> I was one of the first researchers hired at Anthropic.
Yeah. I can tell. Somebody's high on their own supply here.
funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.
plenty of charts you can look at - net productivity by virtually any metric vs real adjusted income. the example I like is kiosks and self checkout. who has encountered one at a place where it is cheaper than its main rival, with that directly attributed (by the company or otherwise) to lower prices?? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.
even with year 2020 tech you could automate most work that needs to be done, if our industry wouldn't endlessly keep disrupting itself and have a little bit of discipline.
so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.
To give backing: I'm from Australia, which has ~2.5x the median wealth per capita of US citizens but a lower average wealth. This shows through in the wealth of a typical citizen: less homelessness, better living standards (HDI in Australia is higher), etc.
This is a recent development, where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of US citizens.
All it takes is taxing the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?
You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the covid 19 ACA subsidies permanent and a few other pet projects.
Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.
It should be pointed out that Australia has higher taxes on their middle class than the US does. It tops out at 45% (plus 2% for medicare) for anyone at $190k or above.
If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.
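A quick sanity check of the combined rate quoted above, using only the rates as stated in the comment (not verified against current tax tables). The naive sum ignores interactions such as the federal deduction for state and local taxes, so it is at best a rough upper bound on the true marginal rate:

```python
# Rates as quoted in the comment above (assumptions, not verified):
federal = 0.37    # top federal bracket
state = 0.109     # top New York State bracket
city = 0.03876    # top New York City bracket

# Naive sum of marginal rates; ignores SALT deductions and phase-outs.
combined = federal + state + city
print(f"combined top marginal rate: {combined:.2%}")
# prints roughly 51.78% (the 51.77% above truncates rather than rounds)
```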
Without the USA being the way it is, Australia would be much less prosperous. From the perspective of employers and consumers, labor costs are the same; it's just that in Europe and Australia, taxes are a larger percentage of the cost of labor.
Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.
If we had less regulation of insurance companies, do you think they’d be cheaper?
(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)
> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.
you mean the same Asia that has the same problem? The USA enjoying arbitrage is not actually a solution, nor is it sustainable. Not to mention that if you control for certain things, like house size relative to inflation-adjusted income, it isn't actually much different, despite popular belief.
It would be kinda funny if not so tragic how economists will argue both "[productive improvement] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs"
>in the real world are more expensive: health care, housing, cars.
Think of it another way. It's not that these things are more expensive. It's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really it's that wages reduced to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.
If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.
Yep. My grandma bought her house in ~1962 for $20k working at a factory making $2/hr. Her mortgage was $100/m; about 1 weeks worth of pay. $2/hr then is the equivalent of ~$21/hr today.
If you were to buy that same house today, your mortgage would be about $5100/m-- about 6 weeks of pay.
And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.
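A back-of-envelope sketch of the mortgage-vs-pay comparison above, using only the figures quoted in the comment (the $100/month, $2/hour, $5,100/month, and $21/hour numbers are the commenter's, taken as given):

```python
def weeks_of_pay(monthly_payment: float, hourly_wage: float,
                 hours_per_week: float = 40) -> float:
    """How many full-time weeks of gross pay cover one monthly payment."""
    return monthly_payment / (hourly_wage * hours_per_week)

# 1962: $100/month mortgage on a $2/hour factory wage
# Today: $5,100/month mortgage on the inflation-adjusted $21/hour wage
print(weeks_of_pay(100, 2))    # 1.25 weeks
print(weeks_of_pay(5100, 21))  # ~6.07 weeks
```

So "about 1 week" and "about 6 weeks" in the comment line up with the quoted figures.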
But the US is China's market, so the CCP goes along even though they are the producer. A domestic consumer economy would mean sharing the profits of that manufacturing with the workers, but that would create a middle class not dependent on the party, leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two: neither can afford to let go, and neither can keep dancing much longer. I think it will be very bad everywhere.
Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...
That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.
This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people become much higher income: you can't hire many of them to do low-productivity jobs like bus a McD's table.
That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.
Thank you, I've replied too many times that if people want low priced housing, it's easily found in Texas. The replies are empty or stating that they don't want to live there because... it's Texas.
So the young want cheap affordable housing, right in the middle of Manhattan, never going to happen.
Housing is a funny old one and speaks to this being a human problem. One thing a lot of people don't truly engage with on the housing issue is that it's massively a problem of distribution: too many people want to live in too few places. Yes, central banks and interest rates (being too low, and now relatively too high), nimbyism, and rent-seeking play important roles too, but solving the "too many people live in too few places" issue actually fixes the problem (slowly, perhaps unpalatably slowly for some, but a fix nonetheless).
The key issue upstream is that too many good jobs are concentrated in too few places, and that leads to consumer spending stimulating those places and making them even more attractive. Technology, through Covid, actually gave governments a get-out-of-jail-free card by letting remote work become mainstream, only for them to not grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, and again slowly, those places become more attractive because people now actually live there.
Existing homeowners can still wrap themselves in the warm glow of their high house prices which only loses "real" value through inflation which people tend not to notice as much.
But we decided to try to go back to the status quo so oh well
- House prices increasing while wages are stagnant
- Home loans and rising prices mean people taking on huge leverage for their home purchases
- Supply is essentially government-controlled and government-dependent, and building more housing is heavily politicized
- A lot of dubious money is being created, which gets converted to good money by investing it in the housing market
- Housing is genuinely difficult to build and labor and capital intensive
> The key issue upstream is that too many good jobs are concentrated in too few places
This is no longer the case with remote work on the rise. If it were, housing prices would increase faster in trendy, overpriced places, but the recent increase was more uniform, with places like London growing slower than (or even depreciating relative to) less in-demand places.
Food and clothes are much cheaper. People used to have to walk or hitchhike a lot more. People died younger, or were trapped with abusive spouses and/or parents. Crime was high. There was little economic mobility. It really sucked if you weren’t a straight white man. Houses had one bathroom. Power went out regularly. Travel was rare and expensive; people rarely flew anywhere. There was limited entertainment or opportunities to learn about the world.
Yeah, that's my question for the author too: if AI is to really earn its keep, it should help get more physical products into people's hands and help produce more energy.
Physical products and energy are the two things most relevant to people's wellbeing.
Right now AI is sucking up the energy and the RAM, so is it going to translate into a net positive?
It's inflation, simple as that. The US left the gold standard at the exact same time that productivity diverged from wages. Coincidence? No.
Pretty much everything gets more expensive, the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food and housing, keeps getting more expensive. And the asset-owning class gets richer as a result.
AI currently lacks the following to really gain a "G" and reliably be able to replace humans at scale:
- Radical massive multimodality. We perceive the world through many wide-band high-def channels of information. Computer perception is nowhere near. Same for ability to "mutate" the physical world, not just "read" it.
- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally having a smooth transition between the context window and the weights, rather than fundamental irreconcilable difference.
These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.
The work done by horses was not the only work out there. Games played by chess masters were not the only sport on the planet. Answering questions and generating content is not the only work that happens at workplaces.
This makes me think of another domain where it could happen: electricity generation and distribution. If solar+battery becomes cheap enough we could see the demise of the country-scale grid.
I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.
I'm willing to believe the hype on LLMs except that I don't see any tiny 1-senior-dev-plus-agents companies disrupting the market. Maybe it just hasn't happened "yet"... But I've been kind of wondering the same thing for most of 2025.
And what happened to human population? It skyrocketed. So humans are going to get replaced by AI and human population will skyrocket again? This analogy doesn't work.
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
I really doubt horses would be ambivalent about this, let alone about anything. Or maybe I'm wrong, they were in two minds: oh dear I'm at risk of being put to sleep, or maybe it could lead to a nice long retirement out on a grassy meadow. But they're in all likelihood blissfully unaware.
This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory he describes aligns with the horse population's 93% decline.
Indeed. I do wonder if the inventors of the "transformer architecture" knew all the potential Pandora's boxes they were opening when they invented it. Probably not.
No one wants to say the scary potential logical conclusion of replacing the last value that humans have a competitive advantage in; that being intelligence and cognition. For example there is one future scenario of humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. Its already happening slowly via inflation and higher asset prices after all - it is a very real possibility. I don't think a revolution will be possible in this scenario; with AI and robotics the rich could outnumber pretty much everyone.
AI seems capable of doing lots of things, particularly in comparison to domain-specific programming or even domain-specific AI. Your critique doesn't seem so powerful as you might suppose.
Capable, yes, but human equivalence comes at different times in different domains, which means AI reaching equivalence with humans in general will be staggered, not the sudden cliff the author claims. But in all fairness, I don't imagine this to be a powerful critique; I wouldn't be at all shocked if I'm wrong.
I mean, it's hard to argue against the idea that if we invented a human in a box (AGI), human work would be irrelevant. But watching current AI, I don't know how anyone can say we have that.
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being among the first generation of humans able to see that is a lucky experience to have.
Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when some massive breakthrough will occur. Ilya said 5-20 years; that really just means we don't know.
Why a human in a box and not an android? A lot of jobs will require advanced robotics to fully automate. And then there are jobs where customer preference is for human interaction or human entertainment. It's like how superior chess engines have not reduced the profession of chess grandmasters, because people remain more interested in human chess competition.
The assumption is superhuman AGI or a stronger ASI could invent anything it needed really fast, so ASI means intelligent robots within years or months, depending on manufacturing capabilities.
LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.
They sometimes stumble and hallucinate out of distribution. It's rare, and it's rarer still that it's actually a good hallucination, but we've figured out how to enrich uranium, after all.
If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical as it was in the case with nuclear weapons. Otherwise, what does it really mean for AI to "replace people" outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
Capitalism gives, capitalism takes. Regulation will be critical so it doesn’t take too much, but tech is moving so fast even technologists, enthusiasts and domain researchers don’t know what to expect.
Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
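That flip is easy to see numerically. Here's a minimal sketch of the logistic curve's long flat build-up and sudden reversal (the sampling range here is arbitrary, just for illustration):

```python
import math

def sigmoid(x: float) -> float:
    """Standard logistic function: 1 / (1 + e^-x)."""
    return 1.0 / (1.0 + math.exp(-x))

# The output creeps along near 0 for a long stretch of inputs,
# then flips rapidly toward 1 around x = 0.
for x in range(-8, 9, 2):
    print(f"x={x:+d}  sigmoid={sigmoid(x):.4f}")
```

Almost all of the transition from ~0 to ~1 happens in a narrow band around the midpoint, which is the "suddenly, flip" the comment describes.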
This is the context wherein the valuation of AI companies makes sense, particularly those that already got a head start and have captured a large swath of that market.
Wait till the robots arrive. What will surprise people the most is that they will know how to do a vast range of human skills, some that people train their whole lives for. The future shock I get from Claude Code, knowing how long stuff takes the hard way, especially in niche, difficult-to-research topics like the alternative deep-learning designs applicable to a modeling task, is a thing of wonder. Imagine a master marble carver showing up at an exhibition where some sci-fi author just had robots carve a perfect, beautiful rendering of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
Horses and cars had a clearly defined, tangible, measurable purpose: transport... they were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with Chess, a clearly defined problem in finite space with a binary, measurable outcome. Funny how Chess AI replacing humans in general was never considered as a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
There is a TV movie, In Pursuit of Honor (1995), claiming to be based on true events. My short search online suggests that such events were never really documented, but it's plausible that similar things happened.
> In Pursuit of Honor is a 1995 American made-for-cable Western film directed by Ken Olin. Don Johnson stars as a member of a United States Cavalry detachment refusing to slaughter its horses after being ordered to do so by General Douglas MacArthur. The movie follows the plight of the officers as they attempt to save the animals that the Army no longer needs as it modernizes toward a mechanized military.
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.
Meanwhile, my own office is buried in busywork that no AI tool currently on the market will do for us, and AI entering a space sometimes increases busywork. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding AI-generated or we will lose sales. The AI tools for writing descriptions and generating listings are not very helpful either. (An inaccurate listing or description is a nightmare.)
I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.
Yawn, another article which hand picks success stories. What about the failures? Where's the graph of flying cars? Humanoid house servant robots? 3D TVs? Crypto decentralized banking for everyone? Etc.
Anybody who tells you they can predict the future is shoveling shit in his mouth then smiling brown teeth at the audience. 10 years from now there's a real possibility of "AI" being remembered as that "stuff that almost got to a single 9 reliability but stopped there".
I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
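For what it's worth, the kind of ballpark comparison the post's graph implies is only a few lines. Every figure below is my own assumption for illustration, not a number taken from the post:

```python
# Hypothetical back-of-envelope: cost per written word, human vs. LLM.
# All constants are assumptions chosen for illustration only.
HUMAN_HOURLY_WAGE = 7.25      # assumed: roughly "cheapest human labor", USD/hour
HUMAN_WORDS_PER_HOUR = 600    # assumed: sustained writing pace
LLM_USD_PER_MILLION_TOKENS = 3.00  # assumed: typical output-token price
TOKENS_PER_WORD = 1.3         # common rough rule of thumb for English text

human_cost_per_word = HUMAN_HOURLY_WAGE / HUMAN_WORDS_PER_HOUR
llm_cost_per_word = LLM_USD_PER_MILLION_TOKENS / 1_000_000 * TOKENS_PER_WORD

print(f"human: ${human_cost_per_word:.5f} per word")
print(f"llm:   ${llm_cost_per_word:.7f} per word")
print(f"ratio: {human_cost_per_word / llm_cost_per_word:,.0f}x cheaper")
```

Under these (made-up) assumptions the gap comes out in the thousands, which is the flavor of "one thousand times less" claim the quoted graph is supporting.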
This is another one of those apocalyptic posts about AI. It might actually be true. I recommend reading The Phools, by Stanislav Lem -- it's a very short story, and you can find free copies of it online.
Also maybe go out for some fresh air. Maybe knowledge work will go down for humans, but plumbing and such will take much longer since we'll need dextrous robots.
You know, this whole conversation reminds me of that old critique on Communism; Once the government becomes so large and encompassing, it reaches a point where it no longer needs to the people to exist, thus, people are culled by the millions, as they are simply no longer needed.
Truly depressing to see blasé predictions of AI infra spending approaching WW2 levels of GDP as if that were remotely desirable. One, that’s never going to happen, but if it does, it’ll mean a complete failure to address actual human needs. The amount of money wasted by Facebook on the Metaverse could have ended homelessness in the US, or provided universal college. Now here we are watching multiple times that much money get thrown by Meta, Google, et al into datacenters that are mostly generating slop that’s ruining what’s left of the Internet.
It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.
I don't think is the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and aware of their limitations.
I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.
For those of us who are busy trying to deliver what we've promised customers, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're perfectly positioned to take advantage of all these promises I keep hearing about AI, but in practical terms it saves me, at an architect or staff level, maybe 10% of my time, and for one of our seniors maybe 5%.
So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old of trying to get some vendor framework to do what I want to get some sensor data out of their equipment and deliver apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)
I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.
I worked for a company that was starting to shove AI incentives down the throat of every engineer as our product got consistently worse due to layoffs and supposed AI benefits that were never realized. When you look at the companies that have shifted to "AI first" and see them shoveling out garbage that barely works, it should be no surprise that people, both those who know how the sausage is made and those who don't, are starting to hate it.
I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.
That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.
I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.
Except, you know, you had a job, and coming out of college could get one… if you were graduating right now in compsci you'd find a wasteland with no end in sight…
But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.
>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.
This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion. Cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.
Horses eat feed. Cars eat gasoline. LLMs eat electricity, and progress may even now be finding its limits in that arena. Besides the fact that just more compute and context size aren't the right kind of progress. LLMs aren't coming for your job any more than computer vision is, for a lot of reasons, but I'll list two more:
1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren't providing much value.
2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
"The only reason to reduce headcount is to remove people who already weren’t providing much value."
I wish corporations really acted this rationally.
At least where I live, hospitals fired most secretaries and assistants to doctors a long time ago. The end result? High-paid doctors spending a significant portion of their time on administrative and bureaucratic tasks previously handled by those secretaries, preventing them from seeing as many patients as they otherwise would. Cost savings may look good on a spreadsheet, but the overall efficiency of the system suffered.
That's what I see when companies cut juniors as well. AI cannot replace a junior because a junior has full and complete agency, accountability, and purpose. They retain learning and become a sharper bespoke resource for the business as time goes on. The PM tells them what to do and I give them guidance.
If you take away the juniors, you are now asking your seniors to do that work instead, which is more expensive and wasteful. The PM cannot tell the AI junior what to do, for they don't know how. Then you say, hey, we also want you to babysit the LLM to increase productivity; well, I can't leave a task with the LLM and come back to it tomorrow. Now I am wasting two kinds of time.
But wouldn't these spreadsheets be tracking something like total revenue? If a doctor is spending time on admin tasks instead of revenue-generating procedures, surely the hospital has accountants and analysts who will notice this, yes?
I'll contrast your experience with a well-run (from a profitability standpoint) dentist's office: they have tons of assistants and hygienists, and the dentist just goes from room to room performing high-dollar procedures and very little other patient care. If small dentist offices have this all figured out, it seems a little strange that a massive hospital does not.
I'm a full-stack developer. Recently I've found that almost 90% of my work deadlines have been brought forward, and the bosses' scheduling has become stricter. The coworker who is particularly good at pair programming with AI tends (kind of unconsciously) to reduce his or her own scheduled time. Work is sudden, but salary stays flat. What a bummer.
Disagreed. You need more doctors, not useless secretaries. Generating bureaucratic bullshit doesn't make any work go faster; at best it just creates more work, and in general it slows everything down.
It is fitting that the primary stakeholder is responsible for his own bureaucratic impact. That way he'll learn to generate the minimum amount that is viable to be efficient. Otherwise they don't care and generate waste by the metric ton.
Because of the French hospital bureaucratic nightmare, for a simple 15-minute intervention (cyst removal), I had 2 appointments and received 4 different letters by post. Not only did they waste more of my time than necessary (every time you need to wait about 45 minutes before anything happens), but since the physician cannot be duplicated and I had to meet him each time, nothing of value was gained as well.
With modern technologies, secretaries should barely need to exist. They still do because of laws and compliance; everyone is protecting his ass first and foremost. Without that, a system free of the bureaucracy would be much more efficient. It's basically how they do it outside the Western world.
Funny the original post doesn’t mention AI replacing the coding part of his job.
There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
I want to be optimistic. But it's hard to ignore what I'm doing and seeing. As far as I can tell, we haven't hit serious unemployment yet because of momentum and slow adoption.
I’m not replying to argue, I hope you are right. But I look around and can’t shake the feeling of Wile E. Coyote hanging in midair waiting for gravity to kick in.
>There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Yes, it’s a god of the gaps situation. We don’t know what the ceiling is. We might have hit it, there might be a giant leap forward ahead, we might leap back (if there is a rug pull).
The most interesting questions are the ones that assume human equivalency.
Suppose an AI can produce like a human.
Are you ok with merging that code without human review?
Are you ok with having a codebase that is effectively a black box?
Are you ok with no human being responsible for how the codebase works, or able to take the reins if something changes?
Are you ok with being dependent on the company providing this code generation?
Are we collectively ok with the eventual loss of human skills, as our talents rust and the new generation doesn’t learn them?
Will we be ok if the well of public technical discussion LLMs are feeding from dries up?
Those are the interesting debates I think.
I predict by March 2026, AI will be better at writing doomer articles about humans being replaced than top human experts.
Well, I would just say to take into account the fact that we're starting to see LLMs be responsible for substantial electricity use, to the point that AI companies are lobbying for (significant) added capacity. And remember that we're all getting these sub-optimal toys at such a steep discount that it would be price gouging if everyone weren't doing it.
Basically, there's an upper limit even to how much we can get out of the LLMs we have, and it's more expensive than it seems to be.
Not to mention, poorly-functioning software companies won't be made any better by AI. Right now there's a lot of hype behind AI, but IMO it's very much an "emperor has no clothes" sort of situation. We're all just waiting for someone important enough to admit it.
I’m deeply sceptical. Every time a major announcement comes out saying so-and-so model is now a triple Ph.D programming triathlon winner, I try using it. Every time it’s the same - super fast code generation, until suddenly staggering hallucinations.
If anything the quality has gotten worse, because the models are now so good at lying when they don't know that it's really hard to review. Is this a safe way to make that syscall? Is the lock structuring here really deadlock-safe? The model will tell you with complete confidence its code is perfect, and it'll either be right or lying; it never says "I don't know".
Every time OpenAI or Anthropic or Google announce a “stratospheric leap forward” and I go back and try and find it’s the same, I become more convinced that the lying is structural somehow, that the architecture they have is not fundamentally able to capture “I need to solve the problem I’m being asked to solve” instead of “I need to produce tokens that are likely to come after these other tokens”.
The tool is incredible, I use it constantly, but only for things where truth is irrelevant, or where I can easily verify the answer. So far I have found programming, other than trivial tasks and greenfield ”write some code that does x”, much faster without LLMs
idk man, I work at a big consulting company and all I'm hearing is dozens of people coming out of their project teams saying, "yeah, I'm dying to work with AI; all we're doing is talking about it with clients."
It's like everyone knows it is super cool, but nobody has really cracked the code for what its economic value truly, truly is yet.
It’s first showing up in high unemployment for graduating college students.
> There seems to be a running theme of “okay but what about” in every discussion that involves AI replacing jobs. Meanwhile a little time goes by and “poof” AI is handling it.
Any sources on that? Except for some big tech companies, I don't see that happening at all. While not empirical, most devs I know try to avoid it like the plague. I can't imagine that many devs actually jumped on the hype train to replace themselves...
> The only reason to reduce headcount is to remove people who already weren’t providing much value.
There were many secretaries up until the late 20th century that took dictation, either writing notes of what they were told or from a recording, then they typed it out and distributed memos. At first, there were many people typing, then later mimeograph machines took away some of those jobs, then copying machines made that faster, then printers reduced the need for the manual copying, then email reduced the need to print something out, and now instant messaging reduces email clutter and keep messages shorter.
All along that timeline there were fewer and fewer people involved, all for the valuable task of communication. While they may not have held these people in high esteem, they were critical for getting things done and scaling.
I’m not saying LLMs are perfect or will replace every job. They make mistakes, and they always will; it’s part of what they are. But, as useful as people are today, the roles we serve in will go away and be replaced by something else, even if it’s just to indicate at various times during the day what is or isn’t pleasing.
The thing that replaced the old memos is not email, it's meetings. It's not uncommon to have meetings with hundreds of participants that in the past would have been a simple memo.
It would be amazing if LLMs could replace the role that meetings have in communication, but somehow I strongly doubt that will happen. It is a fun idea to have my AI talk with your AI so no one needs to actually communicate, but the result is more likely to create barriers to communication than to help it.
The crucial observation is that automation has historically been a net creator of jobs, not a destroyer.
> At first, there were many people typing, then later [...]
There were more people typing than ever before? Look around you, we're all typing all day long.
This is a very insightful take. People forget that there is competition between corporations and nations that drives an arms race. The humans at risk of job displacement are the ones who lack the skill and experience to oversee the robots. But if one company/nation has a workforce that is effectively 1000x, then the next company/nation needs to compete. The companies/countries that retire their humans and try to automate everything will be out-competed by companies/countries that use humans and robots together to maximum effect.
Overseeing robots is a time-limited activity. Even building robots has a finite horizon.
Current tech can't yet replace everything, but many jobs already see the horizon or are at sunset.
The last few times this happened, the new tech, whether textile mills or computers, drove job creation as well as replacement.
This time around, some components of progress are visible, because at the end of the day people can use this tech to create wealth at unprecedented scale. But others aren't, since the tech is run by small teams at large scale and has virtually no related industries it depends on, the way, say, cars do. It's just energy and GPUs.
Maybe we will all be working in GPU-related industries? But that seems like another small-team, high-scale field. Maybe a few tens of millions can be employed there?
Meanwhile I just don't see the designer + AI job role materializing. I see corpos using AI and cutting out the middleman, while designers + AI get mostly ostracized, unable to rise, like a crab in a bucket of crabs.
I think you’ve missed the point. Cars replaced horses - it wasn’t cars+horses that won. Computers replaced humans as the best chess players, not computers with human oversight. If successful, the end state is full automation because it’s strictly superhuman and scales way more easily.
I think the big problem here though, is that humans go from being mandatory to being optional, and this changes the competitive landscape between employers and workers.
In the past a strike mattered. With robots, it may have to go on for years to matter.
Why not just have the robots oversee the robots?
> most companies will still have more work to do than resources to assign to those tasks
This is very important yet rarely talked about. Having worked in a well-run group on a very successful product, I could see that no matter how many people were on a project, there was always too much work. And always too many projects. I am no longer with the company, but I can see some of the ideas talked about back then being launched now, many years later. For a complex product there is always more to do, and AI would simply accelerate development.
Yip, the famous example here being John Maynard Keynes, of Keynesian economics. [1] He predicted a 15 hour work week following productivity gains that we have long since surpassed. And not only did he think we'd have a 15 hour work week, he felt that it'd be mostly voluntary - with people working that much only to give themselves a sense of purpose and accomplishment.
Instead our productivity went way above anything he could imagine, yet there was no radical shift in labor. We just instead started making billionaires by the thousand, and soon enough we can add trillionaires. He underestimated how many people were willing to designate the pursuit of wealth as the meaning of life itself.
[1] - https://en.wikipedia.org/wiki/Keynesian_economics
Productivity gains are more likely to be used to increase margins (profits, and therefore value to shareholders) than to reduce work hours.
At least since the Industrial Revolution, and probably before, the only advances that have led to shorter work weeks are unions and worker protections. Not technology.
Technology may create more surplus (food, goods, etc) but there’s no guarantee what form that surplus will reach workers as, if it does at all.
In the same essay ("Economic Possibilities for our Grandchildren," 1930) where he predicted the 15-hour workweek, Keynes wrote about how future generations would view the hoarding of money for money's sake as criminally insane.
"There are changes in other spheres too which we must expect to come. When the accumulation of wealth is no longer of high social importance, there will be great changes in the code of morals. We shall be able to rid ourselves of many of the pseudo-moral principles which have hag-ridden us for two hundred years, by which we have exalted some of the most distasteful of human qualities into the position of the highest virtues. We shall be able to afford to dare to assess the money-motive at its true value. The love of money as a possession – as distinguished from the love of money as a means to the enjoyments and realities of life – will be recognised for what it is, a somewhat disgusting morbidity, one of those semi-criminal, semi-pathological propensities which one hands over with a shudder to the specialists in mental disease. All kinds of social customs and economic practices, affecting the distribution of wealth and of economic rewards and penalties, which we now maintain at all costs, however distasteful and unjust they may be in themselves, because they are tremendously useful in promoting the accumulation of capital, we shall then be free, at last, to discard."
> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.
Didn’t we also get standards of living much higher than he would ever imagine? I think blaming everything on billionaires is really misguided and shallow.
> We just instead started making billionaires by the thousand, and soon enough we can add trillionaires.
We just instead started doing Bullshit Jobs. https://en.wikipedia.org/wiki/Bullshit_Jobs
I feel like this sort of misses the point. I didn't think the primary thrust of his article was so much about the specific details of AI, or what kind of tasks AI can now surpass humans on. I think it was more of a general analysis (and very well written IMO) that even when new technologies advance in a slow, linear progression, the point at which they overtake an earlier technology (or "horses" in this case) happens very quickly - it's the tipping point at which the new tech surpasses the old. For some reason I thought of Hemingway's old line about how you go bankrupt: "Gradually, then suddenly."
I agree with all the limitations you've written about the current state of AI and LLMs. But the fact is that the tech behind AI and LLMs never really gets worse. I also agree that just scaling and more compute will probably be a dead end, but that doesn't mean that I don't think that progress will still happen even when/if those barriers are broadly realized.
Unless you really believe human brains have some sort of "secret special sauce" (and, FWIW, I think it's possible - the ability of consciousness/sentience to arise from "dumb matter" is something that I don't think scientists have adequately explained or even really theorized), the steady progress of AI should, eventually, surpass human capabilities, and when it does, it will happen "all at once".
For what it's worth, the decline in the use of horses was much slower than you might expect. The Ford Model T reached peak production in 1925 [0], and for an inexact comparison (I couldn't find numbers for the US), the horse population of France started to decline in 1935 but didn't drop below 80% of its historical peak until the late 1940s, falling to 10% of its peak by the 1970s [1].
[0] https://en.wikipedia.org/wiki/Ford_Model_T#Mass_production
[1] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/
If there’s more work than resources, then is that low value work or is there a reason the business is unable to increase resources? AI as a race to the bottom may be productive but not sure it will be societally good.
Not low-value or it just wouldn't be on the board. Lower value? Maybe, but there are many, many reasons things get pushed down the backlog. As many reasons as there are kinds of companies. Most people don't work at one of the big tech companies where work priorities and business value are so stratified. There are businesses that experience seasonality, so many of the R&D activities get put on the backburner until the busy season is over. There are businesses that have high correctness standards, where bigger changes require more scrutiny, are harder to fit into a sprint, and end up getting passed over for smaller tasks. And some businesses just require a lot of contextual knowledge. I wouldn't trust an AI to do a payroll calculation or tabulate votes, for instance, any more than I would trust a brand new employee to dive into the deep end on those tasks.
Most corporate people don't provide direct value…
> 1. Even if LLMs made everyone 10x as productive, most companies will still have more work to do than resources to assign to those tasks. The only reason to reduce headcount is to remove people who already weren’t providing much value.
They have more work to do until they don't.
The number of bank tellers went up for a while after the invention of the ATM, but then it went down, because all the demand was saturated.
We still need food, farming hasn't stopped being a thing, nevertheless we went from 80-95% of us working in agriculture and fishing to about 1-5%, and even with just those percentages working in that sector we have more people over-eating than under-eating.
As this transition happened, people were unemployed, they did move to cities to find work, there were real social problems caused by this. It happened at the same time that cottage industries were getting automated, hand looms becoming power-looms, weaving becoming programmable with punch cards. This is why communism was invented when it was invented, why it became popular when it did.
And now we have fast fashion, with clothes so fragile that they might not last one wash, and yet we still spend a lower percentage of our incomes on clothes than people in the pre-industrial age did. Even with demand boosted by clothes that don't last, we still make enough to supply it.
Lumberjacks still exist despite chainsaws, and are so efficient with them that the problem is we may run out of rainforests.
Are there any switchboard operators around any more, in the original sense? If I read this right, the BLS groups them together with "Answering Service", and I'm not sure how this other group then differs from a customer support line: https://www.bls.gov/oes/2023/may/oes432011.htm
> 2. Writing code continues to be a very late step of the overall software development process. Even if all my code was written for me, instantly, just the way I would want it written, I still have a full-time job.
This would be absolutely correct — I've made the analogy to Amdahl's law myself previously — if LLMs didn't also do so many of the other things. I mean, the linked blog post is about answering new-starter questions, which is also not the only thing people get paid to do.
Now, don't get me wrong, I accept the limitations of all the current models. I'm currently fairly skeptical that the line will continue to go up as it has been for very much longer… but "very much longer" in this case is 1-2 years, room for 2-4 doublings on the METR metric.
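The "2-4 doublings" arithmetic above can be sketched quickly. A toy calculation, assuming (as METR's reports roughly suggest) a task-horizon doubling time of about seven months and a notional one-hour starting horizon; both numbers are illustrative assumptions, not measurements:

```python
# Back-of-envelope sketch of the "1-2 years, 2-4 doublings" window.
# Assumed: ~7-month doubling time, ~1-hour current task horizon.

DOUBLING_MONTHS = 7      # assumed doubling period for task horizon
START_HORIZON_MIN = 60   # assumed current horizon, in minutes

def horizon_after(months: float) -> float:
    """Task horizon in minutes after `months`, under pure exponential growth."""
    return START_HORIZON_MIN * 2 ** (months / DOUBLING_MONTHS)

for months in (0, 12, 24):
    print(f"{months:>2} months -> ~{horizon_after(months):.0f} min horizon")
```

Under these assumptions, 12 months gives about 1.7 doublings and 24 months about 3.4, which is where the "2-4 doublings" range comes from.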
Also, I expect LLMs to be worse at project management than at writing code, because code quality can be improved by self-play and reading compiler errors, whereas PM has slower feedback. So I do expect "manage the AI" to be a job for much longer than "write code by hand".
But at the same time, you absolutely can use an LLM to be a PM. I bet all the PMs will be able to supply anecdotes about LLMs screwing up just like all the rest of us can, but it's still a job task that this generation of AI is still automating at the same time as all the other bits.
I agree mostly, though personally I expect LLMs to basically give me whitewashing. They don't innovate. They don't push back enough or take a step back to reset the conversation. They can't even remember something I told them not to do 2 messages ago unless I twist their arm. This is what they are, as a technology. They'll get better. I think there's some impact associated with this, but it's not a doomsday scenario like people are pretending.
We are talking about trying to build a thing we don't even truly understand ourselves. It reminds me of That Hideous Strength where the scientists are trying to imitate life by pumping blood into the post-guillotine head of a famous scientist. Like, we can make LLMs do things where we point and say, "See! It's alive!" But in the end people are still pulling all the strings, and there's no evidence that this is going to change.
An engine performs a simple mechanical operation. Chess is a closed domain. An AI that could fully automate the job of these new hires, rather than doing RAG over a knowledge base to help onboard them, would have to be far more general than either an engine or a chessbot. This generality used to be foregrounded by the term "AGI." But six months to a year ago when the rate of change in LLMs slowed down, and those exciting exponentials started to look more like plateauing S-curves, executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can, and something that can reason symbolically about novel domains as well as we can. I've seen about enough from LLMs, and I agree with the critique that some type of breakthrough neuro-symbolic reasoning architecture will be needed. The article is right about one thing: in that moment, AI will overtake us suddenly! But I doubt we will make linear progress toward that goal. It could happen in one year, five, ten, fifty, or never. In 2023 I was deeply concerned about being made obsolete by AI, but now I sleep pretty soundly knowing the status quo will more or less continue until Judgment Day, which I can't influence anyway.
I think a lot about how much we altered our environment to suit cars. They're not a perfect solution to transport, but they've been so useful we've built tons more road to accommodate them.
So, while I don't think AGI will happen any time soon, I wonder what 'roads' we'll build to squeeze the most out of our current AI. Probably tons of power generation.
This is a really interesting observation! Cars don't have to dominate our city design, and yet they do in many places. In the USA, you basically only have NYC and a few less convenient cities to avoid a city designed for cars. Society has largely been reshaped with the assumption that cars will be used whether or not you'd like to use one.
What would that look like for navigating life without AI? Living in a community similar to the Amish or Hasidic Jews that don't integrate technology in their lives as much as the average person does? That's a much more extreme lifestyle change than moving to NYC to get away from cars.
"Tons of power generation?" Perhaps we will go in that direction (as OpenAI projects), but it assumes the juice will be worth the squeeze, i.e., that scaling laws requiring much more power for LLM training and/or inference will deliver a qualitatively better product before they run out. The failure of GPT 4.5, while not a definitive end to scaling, was a pretty discouraging sign.
Customer service will be almost fully automated, and human customers will be forced to adapt to the bots.
We didn't just build roads, we utterly changed land-use patterns to suit them.
Cities, towns, and villages (and there were far more of the latter then) weren't walkable out of choice, but necessity. At most, by the late 19th century, urban geography was walkable-from-the-streetcar, and suburbs walkable-from-railway-station. And that only in the comparatively few metros and metro regions which had well-developed streetcar and commuter-rail lines.
With automobiles, housing spread out, became single-family, nuclear-family, often single-storey, and frequently on large lots. That's not viable when your only options to get someplace are by foot, or perhaps bicycle. Shopping moved from dense downtowns and city-centres (or perhaps shopping districts in larger cities) to strips and boulevards. Supermarkets and hypermarkets replaced corner grocery stores (which you could walk to and from with your groceries in hand, or perhaps in a cart). Eventually shopping malls were created (virtually always well away from any transit service, whether bus or rail), commercial islands in shopping-lot lakes. Big-box stores dittos.
It's not just roads and car parks, it's the entire urban landscape.
AI, should this current fad continue and succeed, will likely have similarly profound secondary effects.
Remember, these companies (including the author) have an incentive to continue selling fear of job displacement not because of how disruptive LLMs are, but because of how profitable it is if you scare everyone into using your product to “survive”.
To companies like Anthropic, “AGI” really means: “Liquidity event for (AI company)” - IPO, tender offer or acquisition.
Afterwards, you will see the same broken promises as the company will be subject to the expectations of Wall St and pension funds.
There is a lot of critihype.
https://sts-news.medium.com/youre-doing-it-wrong-notes-on-cr...
> executives conveniently stopped using the term "AGI," preferring weasel-words like "transformative AI" instead.
Remember when "AGI" was the weasel word because 1980s AI kept on not delivering?
> I'm still waiting for something that can learn and adapt itself to new tasks as well as humans can
That's highly irrelevant because if it were otherwise, we would already be replaced. The article was talking about the future.
The article was speculating about the future.
> An engine performs a simple mechanical operation
It only appears "simple" because you're used to seeing working engines everywhere without ever having to maintain them, but neither the previous generations nor the engineers working on modern engines would agree with you on that.
An engine performs “a simple mechanical operation” the same way an LLM performs a “simple computation”.
People are not simple machines or animals. Unless AI becomes strictly better than humans and humans + AI, from the perspective of other humans, at all activities, there will still be lots of things for humans to do to provide value for each other.
The question is how do our individuals, and more importantly our various social and economic systems handle it when exactly what humans can do to provide value for each other shifts rapidly, and balances of power shift rapidly.
If the benefits of AI accrue to/are captured by a very small number of people, and the costs are widely dispersed things can go very badly without strong societies that are able to mitigate the downsides and spread the upsides.
I'm optimistic.
Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive. In the 50's and 60's we replaced all these people with computers. An entire career of "bank clerk" vanished, and it was a net good for humanity. The cost of bank transactions came down (by a lot!), banks became more responsive and served their customers better. And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way. They're boring jobs (for most people doing them) and having humans do them makes administration slow and expensive. Automating them will be a net good for humanity. Imagine if "this meeting could have been an email" actually moves to "this meeting never happened at all because the person making the decision just told the LLM and it did it".
You are right that the danger is that most of the benefits of this automation will accrue to capital, but this didn't happen with the bank clerk automation - bank customers accrued a lot of the benefits too. I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better.
"I suspect the same will be true with this automation - if we can create and scale organisations easier and cheaper without employing all the admin staff that we currently do, then maybe we create more agile, responsive, organisations that serve their customers better."
I'm not sure most of those organizations will have many customers left, if every white collar admin job has been automated away, and all those people are sitting unemployed with whatever little income their country's social safety net provides.
Automating away all the "boring jobs" leads to an economic collapse, unless you find another way for those people to earn their living.
An ATM is a reliable machine with a bounded risk - the money inside - while an AI agent could steer your company into bankruptcy and have no liability for it. AI has no skin and depending on application, much higher upper bound for damage. A digit read wrong in a medical transcript, patient dies.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
Managing risk can't be automated. Every project and task needs a responsibility sink.
> Banks used to have rooms full of bank clerks who manually did double-entry bookkeeping for all the bank's transactions. For most people, this was a very boring job, and it made bank transactions slow and expensive.

> And the people who had to do double-entry bookkeeping all day long got to do other, probably more interesting, jobs.
I don't mean to pick on your example too much. However, when I worked in financial audit, reviewing journal entries spit out from SAP was mind numbingly boring. I loved doing double-entry bookkeeping in my college courses. Modern public accounting is much, much more boring and worse work than it was before. Balancing entries is enjoyable to me. Interacting with the terrible software tools is horrific.
I guess people that would have done accounting are doing other, hopefully more interesting jobs, in the sense that the absolute number of US accountants is in a steep decline due to the low pay and the highly boring work. I myself am certainly one of them, as a software-engineer career switcher. But the actual work of a modern accountant has not been improved in terms of interesting tasks to do. It has also become the email + meetings + spreadsheet that you mentioned, because there wasn't much else for it to evolve into.
> There are a ton of current careers that are just email + meetings + powerpoint + spreadsheet that can go the same way.
it's interesting how it's never your job that will be automated away in this fantasy, it's always someone else's.
"benefits" = shareholder profits ++
Workshopping this tortured metaphor:
AI, at the limit, is a vampiric technology, sucking the differentiated economic value from those that can train it. What happens when there are no more hosts to donate more training-blood? This, to me, is a big problem, because a model will tend to drift from reality without more training-blood.
The owners of the tech need to reinvest in the hosts.
Realistically, at a certain point the training would likely involve interaction with reality (by sensors and actuators), rather than relying on secondhand knowledge available in textual form.
There's only so much you can learn from humans. AI didn't get superhuman at Go by financing more good human Go players. It just played against itself, even discarding human source knowledge, and achieved those levels.
> What happens when there are no more hosts to donate more training-blood?
LLMs have over 1B users and exchange over 1T tokens with us per day. We put them through all conceivable tasks and provide support for completing those tasks, and push back when the model veers off. We test LLM ideas in reality (like experiment following hypothesis) and use that information to iterate. These logs are gold for training on how to apply AI in real world.
People are animals.
When horses develop technology and create all sorts of jobs for themselves, this will be a good metaphor.
I'd be more worried about the implicit power imbalance. It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
Yeah, from the perspective of the ultra-wealthy us humans are already pretty worthless and they'll be glad to get rid of us.
But from the perspective of a human being, an animal, and the environment that needs love, connection, mutual generosity and care, another human being who can provide those is priceless.
I propose we break away and create our own new economy and the ultra-wealthy can stay in their fully optimised machine dominated bunkers.
Sure maybe we'll need to throw a few food rations and bags of youthful blood down there for them every once in a while, but otherwise we could live in an economy that works for humanity instead.
> It's not what can humans provide for each-other, it's what can humans provide for a handful of ultra-wealthy oligarchs.
You can definitely use AI and automation to help yourself and your family/community rather than the oligarchs. You set the prompts. If AI is smart enough to do your old job, it is also smart enough to support you be independent.
I was trying to phrase something like this, but you said it a lot better than I ever could.
I can’t help but smile at the possibility that you could be a bot.
I may have developed some kind of paranoia reading HN recently, but the AI atmosphere is absolutely nuts to me. Have you ever thought that you would see a chart showing how population of horses was decimated by the mass introduction of efficient engines accompanied by an implication that there is a parallel to human population? And the article is not written in any kind of cautionary humanitarian approach, but rather from perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective? And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around?". And the guy writing this works at Anthropic? The very guy who makes this thing happen, but is only able to conclude this with "I very much hope we'll get the two decades that horses did". What the hell.
I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity and so many of its outputs. I see it in the writing of leaders within VC firms and AI companies but I also see it in ordinary conversations on the caltrain or in coffee shops.
Friendship, love, sex, art, even faith and childrearing are opportunities for substitution with AI. Ask an AI to create a joke for you at a party. Ask an AI to write a heartfelt letter to somebody you respect. Have an AI make a digital likeness of your grandmother so you can spend time with her forever. Have an AI tell you what you should say to your child when they are sad.
Hell. Hell on earth.
If you want another side data point, most people I know both in Japan and Canada use some sort of an AI as a replacement for any kind of query. Almost nobody in my circles are in tech or tech-adjacent circles.
So yeah, it’s just everyone collectively devaluing human interaction.
> I have been completely shocked by the number of people in the tech industry who seem to genuinely place no value on humanity [...]
Who do they think will make their ventures profitable? Who do they think will take their dollars and provide goods and services in exchange?
If automation reaches the point where 99% of humans add no value to the "owners" then the "owners" will own nothing.
Making predictions on how it will turn out vs. designing how it should be. Up till now, powerful people needed lots and lots of other humans to sustain their power and lives. That dependency gave the masses leverage. Now I'd like a society where everyone is valued for being human and such. With democracies we got quite far in that direction. Attempts to go even further... let's just say they "didn't work out". And right now, especially in the US, the societal system seems to be going back to "power" instead of rules.
Yeah, I see a bleak future ahead. Guess that's life, after all.
Those nerds can now develop an AI robot to make love to their wives while they get back to blogging about accelerationism with all the time they freed up.
> no value on humanity
It's practically the definition of psychopathy.
That's what unfettered capitalism gets you
I can't say I'm shocked. Disappointed, maybe, but it's hardly surprising to see the sociopathic nature in the people fighting tooth and nail for the validation of venture capitalists who will not be happy until they own every single cent on earth.
There are good people everywhere, but being good and ethical stands in the way of making money, so most of the good people lose out in the end.
AI is the perfect technology for those who see people as complaining cogs in an economic machine. The current AI bubble is the first major advancement where these people go mask off; when people unapologetically started trying to replace basic art and culture with "efficient" machines, people started noticing.
"Hell on Earth" - I don't think there is a more succinct or accurate way to describe the current environment.
I think, like the Bill Gates haters who interpret him talking about reducing the rate of birth in Africa as wanting to kill Africans, you're interpreting it wrong.
The graph says horse ownership per person. People probably stopped buying horses, they let theirs retire (well, to be honest, probably also sent to the glue factory), and when they stopped buying new horses, horse breeding programs slowed down.
I wish the author had had the courage of their convictions to extend the analogy all the way to the glue factory. It’s what we are all thinking.
I don’t think you’re realizing that the OP understands this, and that in this analogy, the horses are human beings
One could argue that the quality of life per horse went up, even if the total number of horses went down. Lots more horses now get raised in farms and are trained to participate in events like dressage and other equestrian sports.
> Bill Gates haters who interpret him talking about reducing the rate of birth in Africa
I'm not up to speed here -- is Bill Gates doing work to reduce the birth rates in Africa?
2 replies →
We don't know what the author had in mind, but one has to really be tone deaf to let the weirdness of the discussion go unnoticed. Take a look at the last paragraphs in the text again:
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
While most of the text is written from a cold, economic(ish) standpoint, it is really hard not to get a bleak impression from it. And the last three sentences express that in a vague way too. Some ambiguity is left on purpose so you can interpret the daunting impression your own way.
The article presents you with crushing juxtaposition, implicates insane dangers, and leaves you with the feeling of inevitability. Then back to work, I guess.
10 replies →
Horses are pretty and won't try to kill you for "your" food.
Well, in this case corporations stop buying people and just fire them instead of letting them retire. Or an army of Tesla Optimi will send people to the glue factory.
That at least is the fantasy of these people. Fortunately, LLMs don't really work, Tesla cars are still built by KUKA robots (while KUKA has a fraction of Tesla's P/E), and data centers in space are a cocaine-fueled dream.
> And the article is not written in any kind of cautionary humanitarian approach, but rather from perspective of some kind of economic determinism? Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
One of the many terrible things about software engineers is their tendency to think and speak as if they were some kind of aloof galaxy-brain, passively observing humanity from afar. I think that's at least partially the result of 1) identifying as an "intelligent person" and 2) computers and the internet allowing them to become, in large part, disconnected from the rest of humanity. I think they see that aloofness as a "more intelligent" way to engage with the world, so they do it to act out their "intelligence."
I always thought intentionally applying an emotional distance was a strategy to help us see what's really happening, since allowing emotions to creep in causes us to reach conclusions we want (motivated reasoning) instead of conclusions that reflect reality. I find it a valuable way to think. Then there's always the fact that the people who control the world have no emotional attachment to you either. They see you as something closer to a horse than their kin. I imagine a healthy dose of self-dehumanization actually helps us understand the current trajectory of our future. And people tend to vastly overvalue our "humanity" anyway. I'm guessing the ones who displaced horses didn't give much of a fuck about what happened to horses.
I wish I knew what you were so I could say "one of the many terrible things about __" about you. Anyway, I think you have an unhealthy emotional attachment to your emotions.
1 reply →
This strikes me as more in the tone of Orwell, who used a muted emotional register to elicit a powerful emotional response from the reader as they realize the horror of what's happening.
> Have you ever thought that you would see a chart showing [...]
Yes, actually, because this has been a deep vein of writing for the past 100 or more years. There's The Phools, by Stanislav Lem. There's the novels written by Boris Johnson's father that are all about depopulation. There's Aldous Huxley's Brave New World. How about Logan's Run? There has been so much writing about the automation / technology apocalypse for humans in the past 100 years that it's hard to catalog it -- much of what I have read or seen go by in the vein I've totally forgotten.
It's not remotely a surprise to see this amp up with AI.
Yeah, I am familiar with these works of art and probably most people are. However, they were mostly speculative. Now we are facing some of their premises in the real world. And the guys who push the technology in a reckless way seem to notice this, but just nod their heads and carry on.
At long last, we have created the Torment Nexus from classic sci-fi novel Don't Create The Torment Nexus.
1 reply →
> Have you ever thought that you would see a chart showing how population of horses was decimated by the mass introduction of efficient engines accompanied by an implication that there is a parallel to human population?
Yes, here's a youtube classic that put forth the same argument over a decade ago, originally titled "Humans need not apply": https://youtu.be/7Pq-S557XQU
Oh, _now_ computer industry people are worried? Kind of late to the party.
Computerization, automation and robotics, document digitization, the telecoms and wireless revolution, etc. have been upending peoples' employment on a massive scale since before the 1970s. The reaction of the technologists has been a rather insensitive "adapt or die", "go and retrain", and analogies to buggy whip manufacturers when the automobile became popular. The only reason people here suddenly give a hoot is because they think the crosshairs are drifting towards them.
I fully admit this describes me.
> the AI atmosphere is absolutely nuts to me
It reminds me of "You maniacs! You blew it up! Goddamn you all to hell!" from the original Planet of the Apes (1968), https://youtu.be/mDLS12_a-fk?t=71
Quite ironically, the scene features a horse.
It's been a decade or so, but I'm mostly called a "resource" at work, as in Human Resource. Barely a colleague, comrade, or co-worker... just a resource, a plug in the machine that needs to be replaced by an external resource to improve profit margins.
You can kind of separate the technical side of what will likely happen - AI gets smarter and can do the jobs - from how we deal with that. It could be heaven-like, with abundance and no one needing to work, or a post-apocalyptic dystopia, or likely somewhere in the middle.
We collectively have a lot of choice on the how we deal with it part. I'm personally optimistic that people will vote in people friendly policies when it comes to it.
Not seeing any horse heavens, do you have reason to believe humans (i.e. those not among the ruling class) are going to have a different fate from the horses?
I agree we can kinda make the argument that abundance is soon upon us, and humanity as a whole embraces the ideas of equality and harmony etc etc... but still there's a kinda uncanny dissociation if you're happily talking about horses disappearing and humans being next while you work on the product that directly causes your prediction to come true and happen earlier...
2 replies →
Looking at recent election results, what gives you that confidence?
It isn't just AI. So much of the US "Tech"/VC scene is doing outright evil stuff, with seemingly zero regard for any consequence or even a shred of self awareness.
So much money is spent on developing gambling, social media, crypto (fraud and crime enabler) and surveillance software. All of these are making people's lives worse, these companies aren't even shy about it. They want to track you, they want you to spend as much time as possible on their products, they want to make you addicted to gambling.
Just by how large these segments are, many of the people developing that software must be posting here, but I have never seen any actual reflection on it.
Sure, I guess developing software making people addicted to gambling pays the bills (and more than that), but I haven't seen even that. These industries just exist and people seem to work for them as if it was just a normal job, with zero moral implications.
My experience so far has been that the knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
In this instance, in particular, I wouldn't expect our preferences to bear any relevance.
> knowledge of what should and shouldn't be, while important, bears no predictive power whatsoever as to what actually ends up happening.
I don’t know if you are intentionally being vague and existential here. However, context matters, and claiming the predictive power is zero sounds unreasonable in the face of history.
Think of humans learning that diseases were affecting us, which led to solutions like antibiotics and vaccines. It was not guaranteed, but I’m skeptical of the predictive power being zero.
> And barely anyone shares a thought like "technology should be warranted by the populace, not the other way around?"
It shines through that the most fervent AI Believers are also Haters of Humans.
Look at the current political environment.
In the US at least, there is a Congress incapable of taking action and a unilateral President fully on the side of tech CEOs with the heaviest investments in AI.
There is no evidence supporting short term optimism. Every indication the large corporations dictating public policy will treat us exactly like those horses when it comes to economic value.
I took the article as meaning it's white-collar tech jobs that will go away, so those people will need to pivot their careers; not that humans will go away.
However, it does seem like time for humanity to collectively think hard about our values and goals, and what type of world and lives we want to have in an age where human thought, and perhaps even human physical labor are economically worthless. Unfortunately this could not come at a worse time with humanity seemingly experiencing a widespread rejection of ideals like ethics, human rights, and integrity and embracing fascism and ruthless blind financial self interest as if they were high minded ideals.
Ironically, I think tech people could learn a lot here from groups like the Amish- they have clearly decided what their values and goals are, and ruthlessly make tech serve them, instead of the other way around. Despite stereotypes, Amish are often actually heavy users of, and competent with modern tech in service of making a living, but in a way that enforces firm boundaries about not letting the tech usurp their values and chosen way of life.
> What the hell
It was always like this. Look at history, even quite recent history: people were always treated like a tool - for getting rich, for getting into power, for conquering other countries, for serving.
It's interesting, though, how the narrative is all bright-eyed idealism, make the world a better place, progress, etc., until at some point the masks come off and suddenly it's "always has been, move along, nothing to see here"...
The implication is very clearly about “killing” jobs, not killing people.
But what happens when the people without jobs can’t buy food and starve to death?
1 reply →
Incentives rule everything.
For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
Today, the stock market and material wealth dominates. If elite dominance of the means of production requires the immiseration of most of the public, that's what we'll get.
> For the Romans, winning wars was the main source of elite prestige. So the Empire had to expand to accommodate winning more wars.
That's almost 100% backwards. The Republic expanded. The Empire, not so much.
2 replies →
Even scarier when you consider that this entire technology reached the public only three years ago.
> Have you ever thought that you would be compared to a gasoline engine and everyone would discuss this juxtaposition from purely economic perspective?
Not sure if by accident or not, but that’s what we are according to today’s “tech elite”.
https://www.goodreads.com/work/quotes/55660903-patchwork-a-p...
I think we have a bunch of people in the United States who see what we elected for leadership and the choices he made to advise him, and they have given up all hope. That despondent attitude is infusing their opinions on everything. But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.
And low information business leaders will attempt to do all the awful things described here and the free market will eliminate them from the game grid one horrible boss at a time. But if you surround yourself with the AI doomers and bubblers, how will you ever encounter or even consider positive uses of the technology? What an awful place to work Anthropic must be if they truly believe they are working on the metaphorical equivalent of the Alpha Omega bomb. Spoilers: they're not.
Meanwhile, in the rest of the world, many look forward to harnessing AI to ameliorate hunger, take care of the elderly, and perform the more dangerous and tedious jobs out there. Anthropic guy needs to go get a room with Eliezer Yudkowsky. I guess the US is about to get horsed by the other 96% of the planet.
Go ahead, compare me to a horse, a gasoline engine, or even call me a meatbag. Have we become little more than Eloi snowflakes to be so offended by that?
But I guess as long as an electoral majority here continues to cheer on one man draining the juice of this country down to a bitter husk, the fun and games will continue.
> But chin up, he's really old, and he doesn't seem very healthy or he'd be out there leading the charge throwing those rallies every weekend of which he used to be so fond.
At this point in time, his whimsy is the only thing holding back younger, more extreme acolytes from doing what they want. Once he's gone, lol.
2 replies →
Minor nit:
Machines to “take care of the elderly” is one of the worst possible uses of this technology. We desperately need more human interaction between the old and the young, not less.
Yes. Follow in the path of the tech leaders. They are optimists. They totally aren't building doomsday bunkers or trying to build their data centers with their own nuclear power plants to remove them from society and create self contained systems. Oh wait. Crap...
1 reply →
The threat isn't to population, it's to jobs (at least so far) but yeah.
American culture actively punishes compassion, then gaslights you about it.
https://www.census.gov/library/visualizations/interactive/te... Look at all the professions on the bottom right: Teachers, therapists, clergy, social workers, etc. It’s not a coincidence that cruel people take top positions.
Money isn't the only thing a job provides. Those are all professions that provide a sense of meaning, so monetary compensation doesn't need to be as high to attract and keep people.
3 replies →
Is it good when number of human go up? Is bad when go down?
I would say that it is bad when it has a large derivative (positive or negative). However, the problem is not the *number of human beings* but making the agency that existing people have obsolete.
1 reply →
It's bad if it goes down by more than about 1.2% per year. That would mean zero births, present-day natural deaths. Of course zero births isn't presently realistic, and we should expect the next 10-30 years to significantly increase human lifespan. If we assume continued births at the lowest rates seen anywhere on the planet, and humans just maxing out the present human lifespan limit, then anything more than about a 0.5% decrease means someone is getting murked.
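The thresholds in that comment can be sanity-checked with back-of-envelope arithmetic. A minimal sketch, assuming hypothetical round numbers (a maximum lifespan of ~83 years, a stable age structure so the yearly death rate is roughly 1/lifespan, and a floor birth rate of ~0.7% per year - none of these are sourced figures):

```python
# Back-of-envelope check of the decline-rate thresholds above.
max_lifespan = 83                      # assumed maximum human lifespan, years
death_rate = 1 / max_lifespan          # ~1.2% of the population dies per year

# With zero births, the population shrinks at the death rate alone.
print(f"decline with zero births: {death_rate:.1%}")            # -> 1.2%

# Assume births continue at an assumed floor of ~0.7% of the
# population per year (roughly the lowest crude rates seen anywhere).
lowest_birth_rate = 0.007
net_decline = death_rate - lowest_birth_rate
print(f"decline floor with minimal births: {net_decline:.1%}")  # -> 0.5%
```

Under those assumptions, any yearly decline steeper than ~0.5% cannot be explained by natural deaths plus minimal births alone, which is the comment's point.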
That depends on what you think jobs and the economy are for, generally.
If you think the purpose of the economy is for the economy to be good then it doesn't matter. If you think it exists to serve humanity then... You really wouldn't need to ask the question, I imagine.
2 replies →
it's a con job and strawman take. if we collectively think token generators can replace humans completely, well then we've already lost the plot as a global society
> I may have developed some kind of paranoia reading HN recently
My comments being downvoted, which is pretty rare lately, were about never-discussed but legitimate points about AI that I validated IRL. The way AI is discussed on HN has no resonance with what I see IRL, to the point that I can't rule out more or less subtle manipulation of the discussions.
I don't think that bots have taken over HN. I meant that the frontier of the tech research brags about their recklessness here and the rest of us have become bystanders to this process. Gives me goosebumps.
I wouldn't read too much into it. Anytime I post something silly and stupid, it becomes the top comment. Anytime I post something important, I get downvotes. That's just normal. I think that's just human nature...
And the votes are pretty random too. Sometimes it'll go from -5 to +10 in the span of a few hours. Just depends on who's online at the time...
And yet don't they pull on our heartstrings? Isn't that funny? A random number generator for the soul...
isn't this literally swift's laputa?
Honestly I can't tell if your incredulity is at the method of analysis for being tragically mistaken or superficial in some way, at the seemingly dehumanizing comparison of beloved human demonstrations of skill (chess, writing) to lowest common denominator labor, or the tone of passive indifference to computers taking over everything.
I think the comparisons are useful enough as metaphors, though I wonder about the analysis, because it sounds like someone took a Yudkowsky idea and talked about it like a human, which might make a bad assumption go down more smoothly than it should. But I don't know.
I'd like to note here that the lifespan of a horse is 25-30 years. They were phased out not with mass horse genocide, but likely in the same way we phase out Toyota Corollas that have gotten too old. Owners simply didn't buy a new horse when the old one wore out, but bought an automobile instead.
Economically it is no different from demand for Mitsubishis decreasing, except the vehicle in this case eats grass, poops, and feels pain.
If you want to analogize with humans, a gradual reduction in breeding (which is happening anyways with or without AI) is probably a stronger analogy than a Skynet extinction scenario.
Truth is this is no different than the societal trends that were introduced with industrialization, simply accelerated on a massive scale.
The threshold for getting wealth through education is bumping up against our natural human breeding timeline, delaying childbirth past naturally optimal human fertility ages in the developed world. The amount of education needed to achieve certain types of wealth will stretch into decades, causing even more strain on fertility metrics. Some people will decide to have more kids and live purely off whatever limited welfare the oligarchs in charge decide is acceptable. Others will delay having children far past natural human fertility timespans, or forgo having children at all.
If we look at it this way, a reduction in human population would be contingent on whether you think human beings exist and are bred for the purposes of labor.
I believe most people would agree with me that the answer is NO.
The analogy to horses here then is not individuals, but specific types of jobs.
Honestly, the answer for me is yes. I had expected it. The signs were in all the comments that take the market forces for granted. All the comments that take capitalism as a given and immutable law of nature. They were in all the tech bros that never ever wanted to change anything but the number of zeros in their bank account after a successful exit. So yes, I had that thought you are finally having too.
I've also noticed that LLMs are really good at speeding up onboarding. New hires basically have a friendly, never tired mentor available. It gives them more confidence in the first drafted code changes / design docs. But I don't think the horse analogy works.
It's really changing cultural expectations. Don't ping a human when an LLM can answer the question probably better and faster. Do ping a human for meaningful questions related to product directions / historical context.
What LLMs are killing is:
- noisy Slacks with junior folks' questions. Those are now your Gemini / ChatGPT sessions.
- tedious implementation sessions.
The vast majority of the work is still human led from what I can tell.
That sounds like a horrible onboarding experience. Human mentors provide a lot more than just answering questions: context, camaraderie, social skills, even coping mechanisms. Starting a new job can be terrifying for juniors, and if their only friend is a faceless chat bot...
You're right. We need to keep tabs on the culture for new hires for the reasons you mentioned. LLMs are really good at many onboarding tasks, but not the social ones.
I think done right it is a superior onboarding experience. As a new hire, you no longer have to wait for your mentor to be available to learn some badly documented tech things. This is really empowering some of them. The lack of building human context / connections etc is real, and I don't think LLMs can meaningfully help there. Hence my skepticism for the horse analogy.
1 reply →
This sounds horrible. Onboarding should ideally be only marginally about the "what". After all, we already have a very precise and unambiguous system to tell us what the system does: the code.
What I want to know when I join a company is "why" the system does what it does. Sure, give me pointers, some overview of how the code is structured, that always helps, but if you don't tell me why how am I supposed to work?
$currentCompany has the best documentation I've seen in my career. It's been spun off from a larger company, from people collaborating asynchronously and remotely whenever they had some capacity.
No matter how diligent we've been, as soon as the company started in earnest and we got people fully dedicated to it, there's been a ton of small decisions that happened during a quick call, or on a slack thread, or as a comment on a figma design.
This is the sort of "you had to be there" context the onboarding should aim to explain, and I don't see how LLMs help with that.
You still lose a bit from not having those juniors' questions around - where is your documentation lacking, or your code confusing?
We are now at a point where the tech can help with both of those, today. You can have a cc session "in a loop" going through your docs / code and try to do x and y, and if it gets stuck, that's a pretty good signal that something sucks there. At least you can get a heatmap of what works ootb, and what needs more eyes.
Both questions are getting scary good answers from the latest models. Yes, I tried, on a large proprietary code base which shouldn’t be included in any training set.
Software engineers used to know that measuring lines of code written was a poor metric for productivity...
https://www.folklore.org/Negative_2000_Lines_Of_Code.html
Ctrl-F 'lines', 0 results
Ctrl-F 'code', 0 results
What is this comment about?
"The LLM can write lines of code, sure, but can it be productive?" is, I think, the implied question.
The linked short story is barely 5 paragraphs long. You could have just read it instead of writing an insubstantial remark like this. It’s a fun anecdote about a famous programmer (Bill Atkinson).
1 reply →
Charitably I'm guessing it's supposed to be an allusion to the chart with cost per word? Which is measuring an input cost not an output value, so the criticism still doesn't quite make sense, but it's the best I can do...
Measuring productivity by number of words written per day is as useless of a measure as number of lines of code written per day
Maybe it was edited. I count at least 6 instances of the word “code”
1 reply →
So, a free idea from me: train the next coding LLM to produce not regular text, but patches which shorten code while keeping it working the same.
They can already do that. A few months ago I played around with the kaggle python golf competition. Got to top 50 without writing a line of code myself. Modern LLMs can take a piece of code and "golf" it. And modern harnesses (cc / codex / gemini cli) can take a task and run it in a loop if you can give them clear scores (i.e. code length) and test suites outside of their control (i.e. the solution is valid or not).
No idea why you'd want this in a normal job, but the capabilities are here.
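The harness described above - a score the model can't game (code length) plus a test suite outside its control - can be sketched in plain Python. Everything here is hypothetical for illustration: the `solve` function, the toy test suite, and the hand-written candidates standing in for model proposals (a real harness would sandbox execution rather than bare `exec`):

```python
def run_tests(source: str) -> bool:
    """Execute a candidate and check it against a fixed external test suite."""
    namespace = {}
    try:
        exec(source, namespace)          # illustration only; real harnesses isolate this
        solve = namespace["solve"]
        # Toy suite: solve(a, b) must add its arguments.
        return solve(2, 3) == 5 and solve(-1, 1) == 0
    except Exception:
        return False

def score(source: str) -> float:
    """Lower is better: length in characters, infinite if the tests fail."""
    return len(source) if run_tests(source) else float("inf")

# Each iteration of the loop, the model proposes a shorter candidate;
# the harness only accepts it if the external tests still pass.
candidates = [
    "def solve(a, b):\n    result = a + b\n    return result",
    "def solve(a,b):return a+b",
    "solve=lambda a,b:a*b",              # shortest, but wrong: rejected by tests
]
best = min(candidates, key=score)
print(best)   # -> def solve(a,b):return a+b
```

The key design point, as the parent comment notes, is that the scoring and testing sit outside the model's control, so "golfing" can only shrink code that still works.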
1 reply →
gonna tell claude to write all my code in one line
1 reply →
It might be better to think about what a horse is to a human: mostly, a horse is an energy slave. The history of humanity is a story about how many energy slaves are available to the average human.
In times past, the only people on earth who had their standard of living raised to a level that allowed them to cast their gaze upon the stars were the kings and their courts, vassals, and noblemen. As time passed, we have learned to make technologies that provide enough energy slaves to the common man that everyone lives a life a king would have envied in times past.
So the question arises as to whether AI or the pursuit of AGI provides more or less energy slaves to the common man?
The big problem I see with AI is that it undermines redistribution mechanisms in a novel and dangerous way; despite industrialization, human labor was always needed to actually do anything with capital, and even people born in poverty could work to get their share of the growing economic pie.
AI kinda breaks this; there is a real risk that human labor is going to become almost worthless this century, and this might mean that the common man ends up worse off despite nominal economic growth.
Since AI is using my work without permission and capturing the value on behalf of tech companies, I feel like I am an energy slave to AI.
The goal is to eradicate the common man. Turns out you don't need a lot of energy, food, water, or space if there aren't 8 billion humans to supply. It's the tech billionaires' dream, replacing humans with robotic servants. Corporations do not care about the common man.
Full robotic servants are very costly, only AI servants are cheap enough. But I do think we're going to see more wars and robotic use in wars.
Cost per word is a bizarre metric to bring up. Since when is volume of words a measure of value or achievement?
It also puts a thumb on the scale for AI, which tends to emit pages of text to answer simple questions.
Sounds like many post-secondary students, graduate students, or management consultants out there, there being, very often, page/word-count or billable-hours requirements. Considering the model corpora, wordiness wins out.
The chart is actually words "thought or written" so I guess they are running up the numbers even more by counting Claudes entire inner monologue, on top of what it ultimately outputs.
There was a time, when these models were novel, that I'd use them to write for me. After a year or so the verbosity and lack of personality got old. Now all I have is a decent proofreader. Maybe they'll take over my job, but I'm finding the trend going the other way right now.
It's not merely cost per word, but it is even more bizarre: "cost per word thought", whatever that is. Most of these "word thoughts" from LLMs of today are just auto-completed large dumps of text.
these are not just “words” but answers to questions that people who got a job at Anthropic had…
How about we stop trying analogies on like clothing and just tell it like it is? AI is unlike any other technology to date. Just like predicting the weather, we don't know what it will be like in 20 months. Everything is a guesstimate.
This is the correct take. We all have that "Come to Jesus" moment eventually, where something blows our minds so profoundly that we believe anything is possible in the immediate future. I respect that, it's a great take to have and promotes a lot of discussion, but now more than ever we need concretes and definitives instead of hype machines and their adjacent counterparts.
Too much is on the line here regardless of what ultimately ends up being true or just hype.
It’s hard to filter the hot air from the realistic predictions. I’ve been hearing for over 10 years now that truck drivers are obsolete and that trucks will drive themselves. Yet today truck drivers are still very much in demand.
While in the last year I’ve seen generated images go from complete slop to indistinguishable from real photos. It’s hard to know what is right around the corner and what isn’t even close.
Probably the point is to think about whether the horse or chess-engine analogy is a good one, the premise being that there will come a point when the technology reaches a level that suddenly makes the alternative obsolete. I don't have good reasons to think that AI won't eventually be able to automate simple jobs with an acceptable error rate, and once that happens whole categories of jobs will evaporate - jobs dealing with people, making Excel models, transaction-based work, the same thing day in day out. Those teams may be gone, with only a person or two left to do a final review.
Against that you have the Moore's-law-like predictions, from Moravec and the like, that AI would reach around human level around now, which have proved fairly spot on. I think you may find it's more like the AI chess-rating graph than the weather.
I think that you're on to something here, though I agree more with your first sentence than the second.
AI is not identical to, as the article compares, mechanical power.
But your weather-forecasting comment suggests a possible similarity (though not the one you go to): for all the millions-fold increase in compute power, and the increased density and specificity of meteorological measurements, our accurate weather-forecasting window has only extended by a factor of two or so (roughly five days to ten). That is, there are applications for which vastly more information-processing capacity provides fairly modest returns.
And there are also those in which it's transformative. I'd put reusable rockets in that category, where we can now put sufficiently-reliable compute (and a whole bunch of rocket-related hardware) on a boost-phase rocket such that it can successfully soft-land.
For some years I've been thinking of the notion of technology not as some general principle ("efficiency" is the classic economics formulation), but as a set of specific mechanisms each of which has specific capabilities and limitations.[1] I've held pretty constant with nine of these:
1. Fuels. Applying more (or more useful) energy to a process.
2. Energy transmission and transformation.
3. Materials. Specific properties, abundance, costs, effects, limitations.
4. Process knowledge --- how to do things. What's generally described as "technical knowledge", here considered as a specific mechanism of technology.
5. Structural or causal knowledge --- why things work. What's generally described as "scientific knowledge".
6. Networks. Interactions between nodes via links, physical or virtual, over which matter, energy, information, or some mix flow. Transport, comms, power, information.
7. Systems. Constructs including sensing, processing, action, and feedback. Ranging from conceptual to mechanical to human and social.
8. Information. Sensing, perceiving, processing, storing, retrieving, and transmitting. Ranging from our natural senses to augmented ones, from symbolic systems (language, maths) to algorithms.
9. Hygiene. Sinks and unintended consequences, affecting the function and vitality of systems, and their mitigations or limits.
AI / AGI falls into the 8th category: information, specifically information processing. And as such, getting back to my original point, we can compare it with other information-related technological innovations: speech, writing, maths, boolean logic, switches (valves, transistors, etc.), information storage/retrieval, etc. And, yes, human thought processes. We do have some priors we can look at here, and they might help guide us in what a true AGI might be able to accomplish, and what its limitations may be.
It's often noted (including in this thread) that AGI would not presently be able to persist without copious human assistance, in that it's predicated on a vast technological infrastructure only a small portion of which it would be capable of substituting for. It's quite likely that AGI would be both competitive with and complementary to much human activity. In the horse analogy, it's worth noting that the first stage of mechanised transport development, with steam shipping and rail technology, horses were strongly complementary in that they fulfilled the last-mile delivery role which steamships and locomotives couldn't furnish. Horse drayage populations actually boomed during this period. It was development of ICE-powered lorries which finally out-competed the horse-drawn cart for intra-urban delivery. AGI-as-augmenting-humans is an already highly-utilised model, and will likely persist for some time. Experiments in AGI replacing humans will no doubt occur, some successful, others not. I'd suggest that my 9th category, hygiene, and specifically failure modes of AGI, will likely prove highly interesting.
Mechanised transport also relies heavily on fuels and/or energy storage. The past 200 or so years were predicated on nonrenewable fossil fuels, first coal then oil, and there were several points in that timeline where continued availability of cheap fuels was seriously in question. We're now reaching the point where even given abundant supply, the relatively-clean byproducts of use are proving, at scales of current use, incompatible with climatic stability, possibly extending to incompatible with advanced technological civilisation or even advanced life on Earth (again, category 9).
AGI relies on IC chip manufacture (the province of vanishingly few companies), on copious amounts of electricity, scarce physical resources, and various legal regimes concerning use of intellectual works, property, profit, and more (categories 1, 2, 3, and 7, at a minimum). Whether or not a world with pervasive AGI proves to be a stable or unstable point is another open question.
________________________________
Notes:
1. A sampling of prior HN discussions may be found with this search: <https://hn.algolia.com/?dateRange=all&page=0&prefix=false&qu...>.
I think my software engineering job will be safe so long as big companies keep using average code as their training set. This is because the average developer creates unnecessary complexity which creates more work for me.
The way the average dev structures their code requires like 10x the number of lines as I do and at least 10x the amount of time to maintain... The interest on technical debt compounds like interest on normal debt.
Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies. The code looks for the shortest path to production and creates a moat around engineers who can make their team members' jobs easier.
IMO, it's not so different to how entrepreneurship works. But with code and processes instead of money and people as your moat. I think once AI can replace top software engineers, it will be able to replace top entrepreneurs. Scary combination. We'll probably have different things to worry about then.
The majority of drivers believe they’re better than average [1]
1: https://www.lbec-law.com/blog/2025/04/the-majority-of-driver...
> Whenever I join a new project, within 6 months, I control/maintain all the core modules of the system and everything ends up hooked up to my config files, running according to the architecture I designed. Happened at multiple companies
I am regularly tempted to do this (I have done this a few times), but unless I truly own the project (being the tech lead or something), I stop myself. One of the reasons is reluctance to trespass uninvited on someone else's territory of responsibility, even if they do a worse job than I could. The human cost of such a situation (to the project and ultimately to myself) is usually worse than the cost of living with the status quo. I wonder what your thoughts are on this.
Humans don’t learn to write messy complex code. Messy, complex code is the default, writing clean code takes skill.
You’re assuming the LLM produces extra complexity because it’s mimicking human code. I think it’s more likely that LLMs output complex code because it requires less thought and planning, and LLMs are still bad at planning.
Totally agree with the first observation. The default human state seems to be confusion. I've seen this over and over in junior coders.
It's often very creative how junior devs approach problems. It's like they don't fully understand what they're doing and the code itself is part of the exploration and brainstorming process trying to find the solution as they write... Very different from how senior engineers approach coding when it's like you don't even write your first line until you have a clear high level picture of all the parts and how they will fit together.
About the second point, I've been under the impression that because LLMs are trained on average code, they infer that the bugs and architectural flaws are desirable... So if it sees your code is poorly architected, it will generate more of that poorly architected code on top. If it sees hacks in your codebase, it will assume hacks are OK and give you more hacks.
When I use an LLM on a poorly written codebase, it does very poorly and it's hard to solve any problem or implement any feature and it keeps trying to come up with nasty hacks... Very frustrating trial and error process; eats up so many tokens.
But when I use the same LLM on one of my carefully architected side projects, it usually works extremely well, never tries to hack around a problem. It's like having good code lets you tap into a different part of its training set. It's not just because your architecture is easier to build on top, but also it follows existing coding conventions better and always addresses root causes, no hacks. Its code style looks more like that of a senior dev. You need to keep the feature requests specific and short though.
1 reply →
You're just very opinionated. Other software engineers just give you space because they don't want to confront you and don't want any conflict with you, as it's just a waste of time.
Six months is also the average time it takes people like you to burn out on a project. Usually it starts with a relatively simple change/addition requested by a customer that turns into a 3-month-long refactor, "because the architecture is wrong". And we just let you do it, because we know fighting windmills is futile.
Unnecessary complexity isn’t much of a problem when the code is virtually free to maintain or throw away and replace.
Depends on the size and complexity of the problem that the system is solving. For very complex problems, even the most succinct solution will be complex and not all parts of the code can be throwaway code. You have to start stacking the layers of abstractions and some code becomes critical. Like think of the Linux Kernel, you can't throw away the Linux Kernel. You can't throw away Chromium or the V8 engine... Millions of systems depend on those. If they had issues or vulnerabilities and nobody to maintain, it would be a major problem for the global economy.
1 reply →
Even if a throw away and replace strategy is used, eventually a system's complexity will overrun any intelligence's ability to work effectively with it. Poor engineering will cause that development velocity drop off to happen earlier.
Although it's sad, I have to agree with what you're alluding to. I think there is huge overhead and waste (in terms of money, compute resources and time) hidden in the software industry, and at the end of the day it just comes down to people not knowing how to write software.
There is a strange dynamic currently at play in the software labour market where the demand is so huge that the market can bear completely inefficient coders. Even though the difference between a good and a bad software engineer is literally orders of magnitude.
Quite a few times I encountered programmers "in the wild" - in a sauna, on the bus etc, and overheard them talking about their "stack". You know the type, node.js in a docker container. I cannot fathom the amount of money wasted at places that employ these people.
I also project that actually, if we adopt LLMs correctly, these engineers (which I would say constitute a large percentage) will disappear. The age of useless coding and infinite demand is about to come to an end. What will remain is specialist engineer positions (base infra layer, systems, hpc, games, quirky hardware, cryptographers etc). I'm actually kind of curious what the effect on salary will be for these engineers, I can see it going both ways.
If they became big companies with that "unnecessary complexity", maybe code quality does not matter as much as you want to believe. Furthermore, even the fastest or well behaved horses were replaced.
This is a fun piece... but what killed off the horses wasn't steady incremental progress in steam engine efficiency, it was the invention of the internal combustion engine.
According to Wikipedia, the IC engine was invented around 1800 and only started to get somewhere in the late 1800s. Sounds like the story doesn’t change.
https://en.wikipedia.org/wiki/Internal_combustion_engine
Quite. For reference, the horse population of France didn't decline significantly until the late 1940's [0].
[0] https://pmc.ncbi.nlm.nih.gov/articles/PMC7023172/
Engines, not just steam engines.
Sure, but if you look at more complex picture of engine development you could just as easily support the proposition that programmers are currently not in any danger (by pointing out that the qualitative differences between IC and steam engines were decisive when it comes to replacing horses, and the correct analogy is that much like a steam engine could never replace a horse, a transformer model can never replace a human).
Not detracting from the article, I think it's a fun way to shake your brain into the entirely appropriate space of "rapid change is possible"!
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
But would you rather be a horse in 1920 or 2020? Wouldn't you rather have modern medicine, better animal welfare laws, less exposure to accidents, and so on?
The only way horses conceivably have it worse is that there are fewer of them (a kind of "repugnant conclusion")...but what does that matter to an individual horse? No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion. We care more about the welfare of the 9 billion.
The equivalency here is not 9 billion versus 90 billion, it's 9 billion versus 90 million, and the question is how does the decline look? Does it look like the standard of living for everyone increasing so high that the replacement rate is in the single digit percentage range, or does it look like some version of Elysium where millions have immense wealth and billions have nothing and die off?
> No human regards it as a tragedy that there are only 9 billion of us instead of 90 billion.
I have met some transhumanists and longtermists who would really like to see some orders of magnitude increase in the human population. Maybe they wouldn't say "tragedy", but they might say "burning imperative".
I also don't think it's clearly better for more beings to exist rather than fewer, but I just want to assure you that the full range of takes on population ethics definitely exists, and it's not simply a matter of straightforward common sense how many people (or horses) there ought to be.
Engine efficiency, chess rating, AI capex. One of these is not like the others. Is there steady progress in AI? To me it feels like little progress followed by the occasional breakthrough, but I might be totally off here.
The only 'line go up' graph they have left is money invested. I'm even dubious of the questions-answered graph. It looks more like a feature added to an internal wiki that went up in usage. Instead it's portrayed as a measure of quality or usefulness.
I think you are totally off. Individual benchmarks are not very useful on their own, but as far as I’m aware they all tell the same story of continual progress. I don’t find this surprising since it matches my experience as well.
What example do you need? In every single benchmark AI is getting better and better.
Before someone says "but benchmark doesn't reflect real world..." please name what metric you think is meaningful if not benchmark. Token consumption? OpenAI/Anthropic revenue?
Whenever I try and use a "state of the art" LLM to generate code it takes longer to get a worse result than if I just wrote the code myself from the start. That's the experience of every good dev I know. So that's my benchmark. AI benchmarks are BS marketing gimmicks designed to give the appearance of progress - there are tremendous perverse financial incentives.
This will never change because you can only use an LLM to generate code (or any other type of output) you already know how to produce and are expert at - because you can never trust the output.
7 replies →
AI is getting better at every benchmark. Please ignore that we're not allowed to see these benchmarks and also ignore that the companies in question are creating the benchmarks that are being exceeded.
What metrics, that aren't controlled by industry, show AI getting better? Genuinely curious, because those "ranking sites" seem to me to be infested with venture capital, so hardly fair or unbiased. The only reports I hear from academia are the ones that are overly negative on AI.
> please name what metric you think is meaningful
Job satisfaction and human flourishing
By those metrics, AI is getting worse and worse
6 replies →
OpenAI net profit.
The figures for cost are wildly off to start with.
ChatGPT was released 3 years ago and that was complete ass compared to what we have today.
Steady progress in the hardware for AI, lumpy progress in algorithms?
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
What getting high on your own supply actually looks like. These are not the types of questions most people have or need answered. It's unique to the hiring process and the nascent status of the technology. It seems insane to stretch this logic to literally any other arena.
On top of that horses were initially replaced with _stationary_ gasoline engines. Horses:Cars is an invalid view into the historical scenario.
Person whose job it is to sell AI selling AI is what I got from this post.
person whose job is to not be replaced by AI saying AI is hype is what I get from your comment
It does not work, buddy. Nobody gets paid to not buy AI.
1 reply →
in Italy we have a saying for this - "innkeeper, how is the wine?"
I think it's a cool perspective, but the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative.
And that really is the entire question at this point: Which domains will AI win in by a sufficient margin to be worth it?
> the not-so-hidden assumption is that for any given domain, the efficiency asymptote peaks well above the alternative
This is an assumption for the best-case scenario, but I think you could also just take the marginal case. Steady progress builds until you get past the state of the art system, and then the switch becomes easy to justify.
"In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
And not very long after, 93 per cent of those horses had disappeared.
I very much hope we'll get the two decades that horses did."
I'm reminded of the idiom "be careful what you wish for, as you might just get it." Rapid technological change has historically led to prosperity over the long term, but not in the short term. My fear is that the pace of change this time around is so rapid that the short-term destruction will not be something that can be recovered from, even over the longer term.
I just have no idea how rigorously the data was reviewed. The 95% decline simply does not compute with 4,500,000 horses in 1959, and even an increase to 7,000,000 in 1968, largely due to growth in the recreational horse population.
https://time.com/archive/6632231/recreation-return-of-the-ho...
So that recreational existence at the leisure of our own machinery seems like an optional future humans can hope for too.
Turns out the chart is about farm horses only as counted by the USDA not including any recreational horses. So this is more about agricultural machinery vs. horses, not passenger cars.
---
City horses (the ones replaced by cars and trucks) were nearly extinct by 1930 already.
City horses were formerly almost exclusively bred on farms but because of their practical disappearance such breeding is no longer necessary. They have declined in numbers from 3,500,000 in 1910 to a few hundred thousand in 1930.
https://www2.census.gov/library/publications/decennial/1930/...
My reading of tfa is exactly that - the author is hoping that we'll have at least a generation or so to adapt, like horses did, but is concerned that it might be significantly more rapid.
To be clear though, the horses didn't adapt. Their population was reduced by orders of magnitude.
5 replies →
"You're absolutely right!" Thanks for pointing it out. I was expecting that kind of perspective when the author brought up horses, but found the conclusion to be odd. Turns out it was just my reading of it.
No government's stability faced risk from a 20% increase in horse unemployment.
Someone who makes horseshoes then learns how to make carburetors, because the demand is 10x.
https://en.wikipedia.org/wiki/Jevons_paradox
In that analogy "someone" is an AI, who of course switches from answering questions from humans, to answering questions from other AIs, because the demand is 10x.
> Governments have typically expected efficiency gains to lower resource consumption, rather than anticipating possible increases due to the Jevons paradox
I think that it's true that governments want the efficiency gains but it's false that they don't anticipate the consumption increases. Nobody is spending trillions on datacenters without knowing that demand will increase, that doesn't mean we shouldn't make them efficient.
The 1220s horse bubble was a wild time. People walked everywhere all slow and then BAM guys on horses shooting arrows at you.
AI is like that, but instead with dudes in slim fitting vests blogging about alignment
This is food for thought, but horses were a commodity; people are very much not interchangeable with each other. The BLS tracks ~1,000 different occupations. Each will fall to AI at a slightly different rate, and within each, there will be variations as well. But this doesn't mean it won't still subjectively happen "fast".
Whether people are interchangeable with each other isn't the point. The point is whether AI is interchangeable with jobs currently done by humans. Unless and until AI training requires 1000 different domain experts, the current projection is that at some point AI will be interchangeable with all kinds of humans...
That looks to me like there are ~1000 interchangeable economic human roles for AI to replace.
So I guess we should check to see if computers are good at scaling or doing things concurrently. If not, no worries!
> Back then, me and other old-timers were answering about 4,000 new-hire questions a month.
> Then in December, Claude finally got good enough to answer some of those questions for us.
> … Six months later, 80% of the questions I'd been being asked had disappeared.
Interesting implications for how to train juniors in a remote company, or in general:
> We find that sitting near teammates increases coding feedback by 18.3% and improves code quality. Gains are concentrated among less-tenured and younger employees, who are building human capital. However, there is a tradeoff: experienced engineers write less code when sitting near colleagues.
https://pallais.scholars.harvard.edu/sites/g/files/omnuum592...
This tracks with my own AI usage over just this year. There have been two releases that caused step changes in how much I actually use AI:
1. The release of Claude Code in February
2. The release of Opus 4.5 two weeks ago
In both of these cases, it felt like no big new unlocks were made. These releases aren’t like OpenAI’s o1, where they introduced reasoning models with entirely new capabilities, or their Pro offerings, which still feel like the smartest chatbots in the world to me.
Instead, these releases just brought a new user interface, and improved reliability. And yet these two releases mark the biggest increases in my AI usage. These releases caused the utility of AI for my work to pass thresholds where Claude Code became my default way to get LLMs to read my code, and then Opus 4.5 became my default way to make code changes.
I would add Gemini Nano Banana Pro to that list; its ability to render words within images is amazing.
To stay within the engine analogy. We have engines that are more powerful than horses, but
1. we aren’t good at building cars yet,
2. they break down so often that using horses often still ends up faster,
3. we have dirt tracks and feed stations for horses but have few paved roads and are not producing enough gasoline.
Yes, and the question is: do horses have 20 years, or less, i.e. 5 years?
> A system that costs less, per word thought or written, than it'd cost to hire the cheapest human labor on the face of the planet.
Is it really possible to make this claim given the vast sums of money that have gone into AI/LLM training?
I'd say yes, because AI training is mostly fixed-cost and not that expensive when you compare it to raising/educating human labor.
Early factories were expensive, too (compared to the price of a horse), but that was never a show-stopper.
it's coming from an extremely biased source, that's why nobody else would make that claim
Aren't you guys looking forward to the day when we get the opportunity to go the way of all those horses? You should! I'm optimistic; I think I'd make a fine pot of glue.
AI, faster please!
Regarding horses vs. engines, what changed the game was not engine efficiency, but the widespread availability of fuel (gas stations) and the broad diffusion of reliable, cheap cars. Analogies can be made to technologies like cell phones, MP3 players, or electric cars: beyond just the quality of the core technology, what matters is a) the existence of supporting infrastructure and b) a watershed level of "good/cheap enough" where it displaces the previous best option.
And roads, and other auto-friendly (or auto-dependent) infrastructure and urban / national land-use.
Cars went from a luxury to a necessity, though largely not until after WWII in the US, and somewhat later in other parts of the world.
There remain areas where a car is not required, or even a burden. NYC, and a few major metropolitan regions, as well as poorer parts of the world (though motorcycles and mopeds are often prevalent there).
It’s both. A steam engine at 2% efficiency is good only for digging up more coal for itself, and barely so. Completely different story at 20%. Every doubling is a step function in some area as it becomes energetically and economically rational to use it for something.
People back then were primarily improving engines, not making articles about engines being better than horses. That's why it's different now.
Yet, this applies to only three industries so far - coding, marketing, and customer support.
I don't think it applies to general human intelligence - yet.
What is this horseshit.
What exactly does specifically engine efficiency have to do with horse usage? Cars like the Ford Model T entered mass production somewhere around 1908. Oh, and would you look at the horse usage graph around that date! sigh
The chess ranking graph seems to be just a linear relationship?
> This pink line, back in 2024, was a large part of my job. Answer technical questions for new hires.
>> Claude, meanwhile, was now answering 30,000 questions a month; eight times as many questions as me & mine ever did.
So more == better. sigh. Ran any, you know, studies to see the quality of those answers? I too can consult /dev/random for answers at a rate of gigabytes per second!
> I was one of the first researchers hired at Anthropic.
Yeah. I can tell. Somebody's high on their own supply here.
Well, for some reason horse numbers and horse usage dropped sharply at a moment in time. Probably there was some horse pandemic I forgot about.
funny how we have all of this progress yet things that actually matter (sorry chess fans) in the real world are more expensive: health care, housing, cars. and what meager gains there are seem to be more and more concentrated in a smaller group of people.
plenty of charts you can look at - net productivity by virtually any metric vs. real adjusted income. the example I like is kiosks and self-checkout. who has encountered one at a place where it is cheaper than its main rival, with the savings directly attributable (by the company or otherwise) to lower prices?? in my view all it did was remove some jobs. that's the preview. that's it. you will lose jobs and you will pay more. congrats.
even with year-2020 tech you could automate most work that needs to be done, if our industry didn't endlessly keep disrupting itself and had a little bit of discipline.
so once ai destroys desk jobs and the creative jobs, then what? chill out? too bad anyone who has a house won't let more be built.
To give some backing: I'm from Australia, which has ~2.5x the median wealth per capita of US citizens but a lower average wealth. This shows through in the wealth of a typical citizen: less homelessness, better living standards (HDI in Australia is higher), etc.
Compare sorting by median vs average to get a sense of the issue; https://en.wikipedia.org/wiki/List_of_countries_by_wealth_pe...
This is a recent development, where the median wealth of citizens in progressively taxed nations has quickly overtaken the median wealth of US citizens.
All it takes is taxing the extremely wealthy and lessening taxes on the middle class… seems obvious, right? Yet things have consistently been going the other way for a long time in the USA.
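The median-vs-average distinction this comment leans on can be shown with a toy sketch (the numbers here are made up purely for illustration, not actual wealth statistics):

```python
from statistics import median

# Nine people with $100k each, plus one person with $10M.
wealth = [100_000] * 9 + [10_000_000]

mean_wealth = sum(wealth) / len(wealth)
median_wealth = median(wealth)

print(f"mean:   ${mean_wealth:,.0f}")    # mean:   $1,090,000
print(f"median: ${median_wealth:,.0f}")  # median: $100,000
```

One outlier drags the mean up tenfold while the median, what the typical person actually holds, doesn't move, which is why the linked list looks so different sorted by median versus average.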
I think by the time the wealthy realize they're setting themselves up for the local equivalent of the French Revolution it will be a bit late. It's a really bad idea to create a large number of people with absolutely nothing to lose.
8 replies →
> All it takes is tax on the extremely wealthy and lessening taxes on the middle class… seems obvious right?
You could tax 100% of all of the top 1%'s income (not progressively, just a flat 100% tax) and it'd cover less than double the federal government's budget deficit in the US. There would be just enough left over to pay for making the covid 19 ACA subsidies permanent and a few other pet projects.
Of course, you can't actually tax 100% of their income. In fact, you'd need higher taxes on the top 10% than anywhere else in the West to cover the deficit, significantly expand social programs to have an impact, and lower taxes on the middle class.
It should be pointed out that Australia has higher taxes on their middle class than the US does. It tops out at 45% (plus 2% for medicare) for anyone at $190k or above.
If you live in New York City, and you're in the top 1% of income earners (taking cash salary rather than equity options) you're looking at a federal tax rate of 37%, a state tax rate of 10.9%, and a city income tax rate of 3.876% for a total of 51.77%. Some other states have similarly high tax brackets, others are less, and others yet use other schemes like no income tax but higher sales and property taxes.
Not quite so obvious when you look closer at it.
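As a sanity check on the combined-rate arithmetic quoted above (a simple sum of the three marginal rates; in reality state and local taxes interact with federal deductions):

```python
# Top marginal rates quoted in the comment above, as percentages.
federal = 37.0    # top federal bracket (cash salary)
state = 10.9      # New York State top rate
city = 3.876      # NYC resident income tax top rate

combined = federal + state + city
print(f"{combined:.3f}%")  # 51.776%
```

The sum comes to 51.776%, which the comment rounds to 51.77%.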
4 replies →
Without the USA the way it is, Australia would be much less prosperous. From the perspective of employers and consumers, labor costs are the same. It's just that in Europe and Australia, taxes are a larger percentage of the cost of labor.
Those are all expensive because of artificial barriers meant to keep their prices high. Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
Tech and AI have taken off in the US partially because they're in the domain of software, which hasn't been regulated to the point of deliberate inefficiency like other industries in the US.
If we had less regulation of insurance companies, do you think they’d be cheaper?
(I pick this example because our regulation of insurance companies has (unintuitively) incentivized them to pay more for care. So it’s an example of poor regulation imo)
4 replies →
> Go to any Asian country and houses, healthcare and cars are priced like commodities, not luxuries.
What do you mean? Several Asian cities have housing crises far worse than the US in local purchasing power, and I'd even argue that a "cheap" home in many Asian countries is going to be of a far lower quality than a "cheap" home in the US.
you mean the same Asia that has the same problem? USA enjoying arbitrage is not actually a solution nor is it sustainable. not to mention that if you control for certain things, like house size for instance relative to inflation adjusted income it isn't actually much different despite popular belief.
It would be kinda funny if not so tragic how economists will argue both "[productive improvement] will make things cheaper" and then in the next breath "deflation is bad and must be avoided at all costs"
But is it really, though? Dollars aren't meant to be held.
4 replies →
>in the real world are more expensive: health care, housing, cars.
Think of it another way. It's not that these things are more expensive. It's that the average US worker simply doesn't provide anything of value. China provides the things of value now. How the government corrected for this was to flood the economy with cash. So it looks like things got more expensive, when really it's that wages reduced to match reality. US citizens selling each other lattes back and forth, producing nothing of actual value. US companies bleeding people dry with fees. The final straw was an old man uniting the world against the USA instead of against China.
If you want to know where this is going, look at Britain: the previous world super power. Britain governed far more of the earth than the USA ever did, and now look at it. Now the only thing it produces is ASBOs. I suppose it also sells weapons to dictators and provides banking to them. That is the USA's future.
Yep. My grandma bought her house in ~1962 for $20k, working at a factory making $2/hr. Her mortgage was $100/m, about 1 week's worth of pay. $2/hr then is the equivalent of ~$21/hr today.
If you were to buy that same house today, your mortgage would be about $5,100/m, about 6 weeks of pay.
And the reason is exactly what you're saying: the average US worker doesn't provide as much value anymore. Just as her factory job got optimized/automated, AI is going to do the same for many. Tech workers were expensive for a while and now they're not. The problem is that there seems to be less and less opportunity where one can bring value. The only true winners are the factory owners and AI providers in this scenario. The only chance anybody has right now is to cut the middleman out, start their own business, and pray it takes off.
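The affordability comparison in this comment can be checked with a quick sketch (figures are the commenter's, assuming a 40-hour work week):

```python
def weeks_of_pay(monthly_payment, hourly_wage, hours_per_week=40):
    """Express a monthly mortgage payment in weeks of gross pay."""
    weekly_pay = hourly_wage * hours_per_week
    return monthly_payment / weekly_pay

then_weeks = weeks_of_pay(100, 2)    # ~1962: $100/m mortgage on $2/hr
now_weeks = weeks_of_pay(5100, 21)   # today: $5,100/m on the equivalent $21/hr

print(f"1962: {then_weeks:.1f} weeks of pay per month")  # 1962: 1.2 weeks of pay per month
print(f"now:  {now_weeks:.1f} weeks of pay per month")   # now:  6.1 weeks of pay per month
```

Roughly a fivefold increase in the labor cost of the same house, consistent with the comment's "1 week then, 6 weeks now" framing.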
But the us is China's market, so the ccp goes along even though they are the producer. Because a domestic consumer economy would mean sharing the profits of that manufacturing with the workers. But that would create a middle class not dependent on the party leading (at least in their minds, and perhaps not wrongly) to instability. It is a dance of two, and neither can afford to let go. And neither can keep dancing any longer. I think it will be very bad everywhere.
Well, politically, housing becoming cheaper is considered a failure. And this is true for all ages. As an example, take Reddit. Skews younger, more Democrat-voting, etc. You'd think they'd be for lower housing prices. But not really. In fact, they make fun of states like Texas whose cities act to allow housing to become cheaper: https://www.reddit.com/r/LeopardsAteMyFace/comments/1nw4ef9/...
That's just an example, but the pattern will easily repeat. One thing that came out of the post-pandemic era is that the lowest deciles saw the biggest rises in income. Consequently, things like Doordash became more expensive, and stuff like McDonald's stopped staffing as much.
This isn't some grand secret, but most Americans who post on Twitter, HN, or Reddit consider the results some kind of tragedy, though it is the natural thing that happens when people become much higher income: you can't hire many of them to do low-productivity jobs like bussing a McD's table.
That's what life looks like when others get richer relative to you. You can't consume the fruits of their labor for cheap. And they will compete for you with the things that you decided to place supply controls on. The highly-educated downwardly-mobile see this most acutely, which is why you see it commonly among the educated children of the past elite.
Thank you. I've replied too many times that if people want low-priced housing, it's easily found in Texas. The replies are empty, or state that they don't want to live there because... it's Texas.
So the young want cheap, affordable housing right in the middle of Manhattan. Never going to happen.
1 reply →
Housing is a funny old one and speaks to it being a human problem. One thing a lot of people don't truly engage with on the housing issue is that it's massively a problem of distribution: too many people want to live in too few places. Yes, central banks and interest rates (being too low, and now being relatively too high), NIMBYism, and rent-seeking play important roles too, but solving the "too many people in too few places" problem actually fixes it (slowly, perhaps unpalatably slowly for some, but a fix nonetheless).
The key issue upstream is that too many good jobs are concentrated in too few places, and that leads to consumption stimulating those places and making them even more attractive. Technology, through Covid, actually handed governments a get-out-of-jail-free card by allowing remote work to become mainstream, only for them to fail to grasp the golden egg they were given. Pivoting economies more actively toward remote work helps distribute people to other places with more affordable homes. Over time, again slowly, those places become more attractive because people now actually live there.
Existing homeowners can still wrap themselves in the warm glow of their high house prices, which only lose "real" value through inflation, which people tend not to notice as much.
But we decided to try to go back to the status quo so oh well
I see the issue of housing is a combination of:
- House prices increasing while wages are stagnant
- Home loans and rising prices mean people taking on huge leverage for their home purchases
- Supply is essentially government-controlled, and building more housing is heavily politicized
- A lot of dubious money is being created, which gets converted into good money by investing it in the housing market
- Housing is genuinely difficult to build, and labor- and capital-intensive
> The key issue upstream is that too many good jobs are concentrated in too few places
This is no longer the case with remote work on the rise. If it were, housing prices would be increasing faster in trendy, overpriced places, but the recent increase has been more uniform, with places like London growing slower than (or even depreciating relative to) less in-demand places.
1 reply →
Food and clothes are much cheaper. People used to have to walk or hitchhike a lot more. People died younger, or were trapped with abusive spouses and/or parents. Crime was high. There was little economic mobility. It really sucked if you weren’t a straight white man. Houses had one bathroom. Power went out regularly. Travel was rare and expensive; people rarely flew anywhere. There was limited entertainment or opportunities to learn about the world.
Yeah, that's my question to the author too: if AI is really to earn its keep, it should help get more physical products into people's hands and help produce more energy.
Physical products and energy are the two things most relevant to people's wellbeing.
Right now AI is sucking up the energy and the RAM, so is it going to translate into a net positive?
That's the question though isn't it. If everyone got a subscription to claude-$Latest would they be able to pay their rent with it?
2 replies →
It's interesting to see Cyberpunk 2077 becoming more and more relatable.
Sci-fi works on this topic are about as old as sci-fi itself. I'm terrified that the stories have started hitting close to home in the past few years.
I remember reading Burning Chrome, written in 1982, and one of the characters commented on late-stage capitalism.
It's inflation, simple as that. The US left the gold standard at the exact same time that productivity diverged from wages. Coincidence? No.
Pretty much everything gets more expensive, with the outlier being tech, which has gotten much cheaper, mostly because the rate at which it progresses is faster than the rate at which governments can print money. But everything we need to survive, like food and housing, keeps getting more expensive. And the asset-holding class gets richer as a result.
[dead]
AI currently lacks the following to really gain a "G" and reliably be able to replace humans at scale:
- Radical massive multimodality. We perceive the world through many wide-band high-def channels of information. Computer perception is nowhere near. Same for ability to "mutate" the physical world, not just "read" it.
- Being able to be fine-tuned constantly (learn things, remember things) without "collapsing". Generally having a smooth transition between the context window and the weights, rather than fundamental irreconcilable difference.
These are very difficult problems. But I agree with the author that the engine is in the works and the horses should stay vigilant.
The work done by horses was not the only work out there. Games played by chess masters was not the only sport on the planet. Answering questions and generating content is not the only work that happens at work places.
This makes me think of another domain where it could happen: electricity generation and distribution. If solar+battery becomes cheap enough we could see the demise of the country-scale grid.
I work in the energy sector. I test high voltage gas insulated switchgear for a living.
With this setup, you would need batteries that can sustain load for weeks on end, in many parts of the world.
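The "weeks on end" point can be made concrete with a back-of-envelope calculation. All figures below are illustrative assumptions, not sourced data:

```python
# Back-of-envelope battery sizing for off-grid autonomy.
# Every number here is an assumption for illustration only.
daily_load_kwh = 30        # rough average household consumption
autonomy_days = 14         # "weeks on end" of low solar output
depth_of_discharge = 0.8   # usable fraction of battery capacity

required_kwh = daily_load_kwh * autonomy_days / depth_of_discharge
print(f"{required_kwh:.0f} kWh of storage")  # prints "525 kWh of storage"
```

At current residential battery prices, storage on that scale is far more expensive than a grid connection, which is the commenter's point.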
4000 questions a month from new hires. How many of those were repeated many times? A lot. So what if they'd built a wiki?
I am not an AI sceptic; I use it for coding. But this article is not compelling.
my favorite part was where the graphs are all unrelated to each other
I think the author's point is that each type of job will basically disappear roughly at once, shortly after AI crosses the bar of "good enough" in that particular field.
I think the turning point will be when AI assisted individuals or tiny companies are able to deliver comparable products/value as the goliaths.
Why hasn't this happened already?
I'm willing to believe the hype on LLMs except that I don't see any tiny 1-senior-dev-plus-agents companies disrupting the market. Maybe it just hasn't happened "yet"... But I've been kind of wondering the same thing for most of 2025.
1 reply →
That would be the ideal scenario; when you can build a small business more easily.
>>This was a five-minute lightning talk given over the summer of 2025 to round out a small workshop.
Glad I noticed that footnote.
Article reeks of false equivalences and incorrect transitive dependencies.
And what happened to human population? It skyrocketed. So humans are going to get replaced by AI and human population will skyrocket again? This analogy doesn't work.
Virtual humans?
> In 1920, there were 25 million horses in the United States, 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
I really doubt horses would be ambivalent about this, let alone about anything. Or maybe I'm wrong, they were in two minds: oh dear I'm at risk of being put to sleep, or maybe it could lead to a nice long retirement out on a grassy meadow. But they're in all likelihood blissfully unaware.
This post is kind of sad. It feels like he's advocating for human depopulation, since the trajectory aligns with horse populations declining by 93%.
Indeed. I do wonder if the inventors of the "transformer architecture" knew all the potential Pandora's boxes they were opening when they invented it. Probably not.
No one wants to say the scary potential logical conclusion of replacing the last area where humans have a competitive advantage: intelligence and cognition. For example, there is one future scenario of humanity where only the capital and resource holders survive; the middle and lower classes become surplus to requirements and lose any power. It's already happening slowly via inflation and higher asset prices, after all; it is a very real possibility. I don't think a revolution will be possible in this scenario; with AI and robotics the rich could effectively outnumber pretty much everyone.
Not advocating, just predicting. And not necessarily actual population, just population in paid employment.
Not advocating, just warning about things to come.
Horses pull carts. Chessbots play chess. Humans do lots of things. Equivalence in one thing is not equivalence in the vast collection of things we do.
AI seems capable of doing lots of things, particularly in comparison to domain-specific programming or even domain-specific AI. Your critique doesn't seem so powerful as you might suppose.
Capable, yes, but human equivalence comes at different times in different domains, which means AI's equivalence to humans in general will be staggered, not the sudden cliff the author claims. But in all fairness I don't imagine this to be a powerful critique, and I wouldn't be at all shocked if I'm wrong.
2 replies →
Wow! That is highly unscientific and speculative. Wow!
We still have chess grandmasters if you have noticed..
Yes, and we'll continue to have human coding competitions for entertainment purpose. Good luck trying to live off the prize money though.
Hikaru makes good money streaming on Twitch tho
1 reply →
Humans design the world to our benefit, horses do not.
Most humans don't. Only the wealthy and powerful are able to do this
And they often do it at the expense of the rest of us
Maybe I can get a job programming for the Amish.
Conclusion: Soylent..?
damn
I mean, it's hard to argue that if we invented a human in a box (AGI), human work wouldn't become irrelevant. But I don't know how anyone can watch current AI and say we have that.
The big thing this AI boom has shown us, which we can all be thankful to have seen, is what a human in a box will eventually look like. Being in the first generation of humans able to see that is a lucky experience.
Maybe it's one massive breakthrough away, or maybe it's dozens away. But there is no way to predict when a massive breakthrough will occur. Ilya said 5-20 years, which really just means we don't know.
Why a human in a box and not an android? A lot of jobs will require advanced robotics to fully automate. And then there are jobs where customer preference is for human interaction or human entertainment. It's like how superior chess engines have not reduced the profession of chess grandmasters, because people remain more interested in human chess competition.
The assumption is superhuman AGI or a stronger ASI could invent anything it needed really fast, so ASI means intelligent robots within years or months, depending on manufacturing capabilities.
LLMs can only hallucinate and cannot reason or provide answers outside of their training set distribution. The architecture needs to fundamentally change in order to reach human equivalence, no matter how many benchmarks they appear to hit.
They sometimes stumble and hallucinate out of distribution. It's rare, and it's rarer still that it's actually a good hallucination, but we've figured out how to enrich uranium, after all.
If AI is really likely to cause a mass extinction event, then non-proliferation becomes critical as it was in the case with nuclear weapons. Otherwise, what does it really mean for AI to "replace people" outside of people needing to retool or socially awkward people having to learn to talk to people better? AI surely will change a lot, but I don't understand the steps needed to get to the highly existential threat that has become a cliché in every "Learn CLAUDE/MCP" ad I see. A period of serious unemployment, sure, but this article is talking about population collapse, as if we are all only being kept alive and fed to increase shareholder value for people several orders of magnitude more intelligent than us, and with more opposable thumbs. Do people think 1.2B people are going to die because of AI? What is the economy but people?
I don't think the people will die, just have AI do the jobs. The people will probably still be there giving instructions.
Capitalism gives, capitalism takes. Regulation will be critical so it doesn’t take too much, but tech is moving so fast even technologists, enthusiasts and domain researchers don’t know what to expect.
Point taken, but it's hard to take a talk seriously when it has a graph showing AI becoming 80% of GDP! What does the "P" even stand for then?
It’s called exponential growth and humans are well known to be almost comically bad at identifying and interpreting it.
When people make forward looking statements using the term “exponential growth”, you can always replace that with “S-curve”.
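The substitution is fair because the two curves are nearly indistinguishable early on, which is exactly when people extrapolate. A minimal sketch (pure Python; the growth rate and carrying capacity are arbitrary):

```python
import math

def exponential(t, r=0.5):
    # Unbounded growth at rate r.
    return math.exp(r * t)

def logistic(t, r=0.5, k=100.0):
    # S-curve with carrying capacity k, calibrated so both
    # curves start at 1.0 when t = 0.
    return k / (1 + (k - 1) * math.exp(-r * t))

# Early on the two are nearly identical; later they diverge wildly.
for t in [1, 5, 15]:
    print(t, round(exponential(t), 1), round(logistic(t), 1))
```

At t = 1 the curves differ by about 1%; by t = 15 the exponential is roughly 19x the logistic, which has flattened near its ceiling. Data from the early regime cannot distinguish the two.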
Remember when we had two weeks of data, and governments acted like Covid was projected to kill everyone by next Tuesday?
Pokens
Ironically, you could use the sigmoid function instead of horses. The training stimulus slowly builds over multiple iterations and then suddenly, flip: the wrong prediction reverses.
Great post.
This is the context wherein the valuation of AI companies makes sense, particularly those that already got a head start and have captured a large swath of that market.
Everyone is missing the real valuable point here: we never needed 90+% of horses in the first place.
Wait till the robots arrive. What will surprise people most is that they will know how to do a vast range of human skills, some that people train their whole lives for. The future shock I get from Claude Code, knowing how long this stuff takes the hard way, especially on niche, hard-to-research topics like the alternative designs of deep learning models for a modeling task, is a thing of wonder. Now imagine a master marble carver showing up at an exhibition where some sci-fi author has just had robots produce a perfect, beautiful rendition of a character from his novel, equivalent in quality to Michelangelo's David, but cyberpunk.
> 25 million horses totally ambivalent to two hundred years of progress in mechanical engines.
Ambivalent??
I'm confused. Isn't the sharp decline in the graph due to the population boom?
It's not like humans are standing still. Humans are still improving faster than AI.
Cool, now let's make a big list of technologies that didn't take off like they were expected to.
I think AI is probably closer to jet engines than it is to horses.
Howso?
Terrible comparison.
Horses and cars had a clearly defined, tangible, measurable purpose: transport. They were 100% comparable as a market good, and so predicting an inflection point is very reasonable. Same with chess, a clearly defined problem in a finite space with a binary, measurable outcome. Funny how chess AI replacing humans in general was never considered a serious possibility by most.
Now LLMs, what is their purpose? What is the purpose of a human?
I'm not denying some legitimate yet tedious human tasks are to regurgitate text... and a fuzzy text predictor can do a fairly good job of that at less cost. Some people also think and work in terms of text prediction more often than they should (that's called bullshitting - not a coincidence).
They really are _just_ text predictors, ones trained on such a humanly incomprehensible quantity of information as to appear superficially intelligent, as far as correlation will allow. It's been 4 years now, we already knew this. The idea that LLMs are a path to AGI and will replace all human jobs is so far off the mark.
Horses never figured out how to get government bailouts.
> 90% of the horses in the US disappeared
Where did they go?
they grew old and died ?
There is a TV movie, In Pursuit of Honor (1995), claiming to be based on true events. A short online search suggests that such things were never really documented, but it's plausible that similar things happened.
> In Pursuit of Honor is a 1995 American made-for-cable Western film directed by Ken Olin. Don Johnson stars as a member of a United States Cavalry detachment refusing to slaughter its horses after being ordered to do so by General Douglas MacArthur. The movie follows the plight of the officers as they attempt to save the animals that the Army no longer needs as it modernizes toward a mechanized military.
sometimes not nearly so pleasant for them.
The glue factory.
> And not very long after, 93 per cent of those horses had disappeared.
> I very much hope we'll get the two decades that horses did.
> But looking at how fast Claude is automating my job, I think we're getting a lot less.
This "our company is onto the discovery that will put you all out of work (or kill you?)" rhetoric makes me angry.
Something this powerful and disruptive (if it is such) doesn't need to be owned or controlled by a handful of companies. It makes me hope the Chinese and their open source models ultimately win.
I've seen Anthropic and OpenAI employees leaning into this rhetoric on an almost daily basis since 2023. Less so OpenAI lately, but you see it all the time from these folks. Even the top leadership.
Meanwhile Google, apart from perhaps Kilpatrick, is just silent.
At this point "we're going to make all office work obsolete" feels more like a marketing technique than anything actually connected to reality. It's sort of like how Coca-Cola implies that drinking their stuff will make you popular and well-liked by other attractive, popular people.
Meanwhile, my own office is buried in busywork that no AI tools currently on the market will do for us, and AI entering a space sometimes increases busywork workloads. For example, when writing descriptions of publications or listings for online sales, we now have to put more effort into not sounding AI-generated, or we will lose sales. The AI tools for writing descriptions and generating listings are not very helpful either. (An inaccurate listing/description is a nightmare.)
I was able to help set up a client with AI tools to help him generate basically a faux website in a few hours that has lots of nice graphic design, images, etc. so that his new venture looks like a real company. Well, except for the "About Us" page that hallucinated an executive team plus a staff of half a dozen employees. So I guess work like that does get done faster now.
Well, tbf the author was hired to answer newbie questions. Perhaps the position is that of an evangelist, not a scientist.
I couldn’t have made a worse take if I tried
1 reply →
Ripping off Yuval in big style.
Yawn, another article which hand-picks success stories. What about the failures? Where's the graph of flying cars? Humanoid house-servant robots? 3D TVs? Crypto decentralized banking for everyone? Etc.
Anybody who tells you they can predict the future is shoveling shit in his mouth then smiling brown teeth at the audience. 10 years from now there's a real possibility of "AI" being remembered as that "stuff that almost got to a single 9 reliability but stopped there".
I've never visited this blog before but I really enjoy the synthesis of programming skill (at least enough skill to render quick graphs and serve them via a web blog) and writing skill here. It kind of reminds me of the way xkcd likes to drive home his ideas. For example, "Surpassed by a system that costs one thousand times less than I do... less, per word thought or written, than ... the cheapest human labor" could just be a throwaway thought, and wouldn't serve very well on its own, unsupported, in a serious essay, and of course the graph that accompanies that thought in Jones's post here is probably 99.9% napkin math / AI output, but I do feel like it adds to the argument without distracting from it.
(A parenthetical comment explaining where he ballparked the measurements for himself, the "cheapest human labor," and Claude numbers would also have supported the argument, and some writers, especially web-focused nerd-type writers like Scott Alexander, are very good at this, but text explanations, even in parentheses, have a way of distracting readers from your main point. I only feel comfortable writing one now because my main point is completed.)
This is another one of those apocalyptic posts about AI. It might actually be true. I recommend reading The Phools, by Stanislav Lem -- it's a very short story, and you can find free copies of it online.
Also maybe go out for some fresh air. Maybe knowledge work will go down for humans, but plumbing and such will take much longer since we'll need dextrous robots.
You know, this whole conversation reminds me of that old critique of Communism: once the government becomes so large and all-encompassing, it reaches a point where it no longer needs the people to exist, and thus people are culled by the millions, as they are simply no longer needed.
"I was one of the first researchers hired at Anthropic."
The article is a Misanthropic advertisement. The "AI" mafia feels that no one wants their products and doubles down.
They are so desperate that Pichai is now talking about data centers in space on Fox News. Next up are "AI" space lasers.
if ai takes my job, good riddance
Truly depressing to see blasé predictions of AI infra spending approaching WW2 levels of GDP as if that were remotely desirable. One, that’s never going to happen, but if it does, it’ll mean a complete failure to address actual human needs. The amount of money wasted by Facebook on the Metaverse could have ended homelessness in the US, or provided universal college. Now here we are watching multiple times that much money get thrown by Meta, Google, et al into datacenters that are mostly generating slop that’s ruining what’s left of the Internet.
I thought this was going to be about how much more intelligent horses are than AIs and I was disappointed
"If I'd asked people what they wanted, they would have said faster humans!"
yeah but machines don't produce horseshit, or do they? (said in the style of Vsauce)
> I was one of the first researchers hired at Anthropic. ... > But looking at how fast Claude is automating my job, I think we're getting a lot less.
TL;DR: If your work is answering questions that can be retrieved from a corpus of data with an inverted index plus embeddings, you'll be obsolete pretty fast.
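For what it's worth, the embedding-retrieval half of that pipeline can be sketched in a few lines. This is a generic illustration, not anything from the article; `embed` here is a toy bag-of-words stand-in for a real embedding model:

```python
import math
from collections import Counter

def embed(text):
    # Stand-in for a real embedding model: a sparse bag-of-words vector.
    return Counter(text.lower().split())

def cosine(a, b):
    # Cosine similarity between two sparse vectors (Counters).
    dot = sum(a[w] * b[w] for w in a)
    norm = math.sqrt(sum(v * v for v in a.values())) \
         * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, corpus):
    # Rank corpus entries by similarity to the query, best first.
    q = embed(query)
    return sorted(corpus, key=lambda doc: cosine(q, embed(doc)), reverse=True)

faq = [
    "how do I reset my password",
    "where is the office coffee machine",
    "what is our deployment process",
]
print(retrieve("password reset help", faq)[0])  # → "how do I reset my password"
```

A production system would swap `embed` for a learned embedding and add an inverted index for candidate filtering, but the ranking step is conceptually this simple, which is the commenter's point about how replaceable pure question-answering work is.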
[dead]
[dead]
[dead]
[dead]
[dead]
It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
As the potential of AI technical agents has gone from an interesting discussion to extraordinarily obvious as to what the outcome is going to be, HN has comically shifted negative in tone on AI. They doth protest too much.
I think it's a very clear case of personal bias. The machines are rapidly coming for the lucrative software jobs. So those with an interest in protecting lucrative tech jobs are talking their book. The hollowing out of Silicon Valley is imminent, as other industrial areas before it. Maybe 10% of the existing software development jobs will remain. There's no time to form powerful unions to stop what's happening, it's already far too late.
I don't think that's the case; I think what's actually going on is that the HN crowd are the people who are stuck actually trying to use AI tools and are aware of their limitations.
I have noticed, however, that people who are either not programmers or who are not very good programmers report that they can derive a lot of benefit from AI tools, since now they can make simple programs and get them to work. The most common use case seems to be some kind of CRUD app. It's very understandable this seems revolutionary for people who formerly couldn't make programs at all.
For those of us who are busy trying to deliver what we've promised customers, I find I get far less use out of AI tools than I wish I did. In our business we really do not have the budget to add another senior software engineer, and we don't have the spare management/mentor/team-lead capacity to take on another intern or junior. So we're really positioned to be taking advantage of all these promises I keep hearing about AI, but in practical terms it saves me, at an architect or staff level, maybe 10% of my time, and one of our seniors maybe 5%.
So I end up being a little dismissive when I hear that AI is going to become 80% of GDP and will be completely automating absolutely everything, when what I actually spend my day on is the same-old same-old of trying to get some vendor framework to do what I want to get some sensor data out of their equipment and deliver apps to end customers that use enough of my own infrastructure that they don't require $2,000 a month of cloud hosting services per user. (I picked that example since at one customer, that's what we were brought in to replace: that kind of cost simply doesn't scale.)
I value this comment even though I don't really agree about how useful AI is. I recognise in myself that my aversion to AI is at least partly driven by fear of it taking my job.
I worked for a company that was starting to shove AI incentives down the throat of every engineer as our product got consistently worse due to layoffs and perceived benefits of AI that were never realized. When you look at the companies that have shifted to 'AI first' and see them shoveling out garbage that barely works, it should be no surprise that people, whether or not they are aware of how the sausage is made, are starting to hate it.
> The hollowing out of Silicon Valley is imminent
I think AI tools are great, and I use them daily and know their limits. Your view is commonly held by management or execs who don't have their boots on the ground.
That's what I've observed. I currently have more work booked than I can reasonably get done in the next year, and my customers would be really delighted if I could deliver it to them sooner, and take on even more projects. But I have yet to find any way that just adding AI tools to the mix makes us orders-of-magnitude better. The most I've been able to squeeze out is a 5% to 10% increase.
But they do have their hands on your budget, and they are responsible for creating and filling positions.
I’m not anti-AI; I use it every day. But I also think all this hand-wringing is overblown and unbalanced. LLMs, because of what they are, will never replace a thoughtful engineer. If you’re writing code for a living at the level of an LLM then your job was probably already expendable before LLMs showed up.
Except, you know, you had a job, and coming out of college you could get one… If you were graduating in compsci right now you'd find a wasteland with no end in sight…
3 replies →
It's not subtle.
But the temptation of easy ideas cuts both ways. "Oldsters hate change" is a blanket dismissal, and there are legitimate concerns in that body of comments.
>It's astounding how subtly anti-AI HN has become over the past year, as the models keep getting better and better. It's now pervasive across nearly every AI thread here.
I don't think you can characterise it as a sentiment of the community as a whole. While every AI thread seems to have its share of AI detractors, the usernames of the posters are becoming familiar. I think it might be more accurate to say that there is a very active subset of users with that opinion.
This might hold true for the discourse in the wider community. You see a lot of coverage about artists outraged by AI, but when I speak to artists they have a much more moderate opinion: cautious, but intrigued. A good number of them are looking forward to a world that embraces more ambitious creativity. If AI can replicate things within a standard deviation of the mean, the abundance of that content will create an appetite for something further out.
hello faster horses
Oh no, it's the lowercase people again.