Comment by lordnacho
18 days ago
The real question is whether the boom is, economically, a mistake.
If AI is here to stay, as a thing that permanently increases productivity, then AI buying up all the electricians and network engineers is a (correct) signal. People will take courses in those things and try to get a piece of the winnings. Same with those memory chips that they are gobbling up, it just tells everyone where to make a living.
If it's a flash in the pan, and it turns out to be empty promises, then all those people are wasting their time.
What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.
"If X is here to stay, as a thing that permanently increases productivity" - matches a lot of different X. Maintaining people's health increases productivity. Good education increases productivity. What is playing out now is completely different - it is both an irresistible lust for the omniscient power provided by this technology ("mirror mirror on the wall, who has recently thought bad things about me?"), and the dread of someone else wielding it.
Plus, it makes a natural moat against the masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.
> Plus, it makes a natural moat against the masses of normal (i.e. poor) people, because it requires a spaceship to run. Finally, intelligence can also be controlled by capital the way it was meant to be, joining information, creativity, means of production, communication and such things.
I'd put intelligence in quotes there, but it doesn't detract from the point.
It is astounding to me how willfully ignorant people are being about the massive aggregation of power that's going on here. In retrospect, I don't think they're ignorant; they just haven't had to think about it much in the past. But this is a real problem with very real consequences. Sovereignty must occasionally be asserted, or someone will infringe upon it.
That's exactly what's happening here.
>massive aggregation of power that's going on here
Which has been happening since, what, at least the bad old IBM days, and nobody's done a thing about it?
I've given up tbh. It's like the apathetic masses want the billionaires to become trillionaires as long as they get their tiktok fix.
The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental; if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas. You can push the front forward slightly with new research and innovation, but not very fast or far.
The current generation of AI is an opportunity for quick gains that go beyond just a few months longer lifespan or a 2% higher average grade. It is an unrealised and maybe unrealistic opportunity, but it's not just greed and lust for power that pushes people to invest, it's hope that this time the next big thing will make a real difference. It's not the same as investing more in schools because it's far less certain but also has a far higher alleged upside.
> The difference is that we've more or less hit a stable Pareto front in education and healthcare.
Not even close. So many parts of the world need targeted fund infusions ASAP. Forcing higher levels of education and healthcare in the places where they lag is the only viable step towards securing a peaceful and prosperous near future.
Pareto is irrelevant, because they are talking about how to use all of this money that is not currently going to healthcare or education.
> if you pour more money into one place and less into another, you generally don't end up much better off, although you can make small but meaningful improvements in select areas
"Marginal cost barrier" hit, then?
> The difference is that we've more or less hit a stable Pareto front in education and healthcare. Gains are small and incremental;
You probably mean gains for someone receiving healthcare and education now, compared to 10 years ago, or maybe you mean the year-to-year average across every man alive.
You certainly do not mean that a person receiving appropriate healthcare is only 2% better off than one not receiving it, or that an educated person is only 2% better off than an uneducated one?
Because I find such a notion highly unlikely. So, here you have vast numbers of people you can mine for productivity increases, simply by providing things that already exist and are available in unlimited supply to anyone who can produce money at will. Instead, let's build warehouses and fill them with soon-to-be-obsolete tech, power it all up using a tiny Sun, and .. what exactly?
This seems like a thinly disguised act of an obsessed person who will stop at nothing to satisfy their fantasies.
> Finally intelligence can also be controlled by capital
The relationship between capital and AI is a fascinating topic. The contemporary philosopher who has thought most intensely about this is probably Nick Land (who is heavily inspired by Eugen von Böhm-Bawerk and Friedrich Hayek). For Land, intelligence has always been immanent in capitalism and capitalism is actively producing it. As we get closer to the realization of capitalism's telos/attractor (technological singularity), this becomes more and more obvious (intelligible).
In 2024, global GDP was $111 trillion.[1] Investing 1 or 2 % of that to improve global productivity via AI does not seem exaggerated to me.
[1] https://data.worldbank.org/indicator/NY.GDP.MKTP.CD
2% is a lot! There's only fifty things you can invest 2% of GDP in before you occupy the entire economy. But the list of services people need, from food, water, shelter, heating, transportation, education, healthcare, communications, entertainment, mining, materials, construction, research, maintenance, legal services... there's a lot of things on that list. To allocate each one 1% or 2% of the economy may seem small, but pretty quickly you hit 100%.
Most of what you have mentioned is not investment, but consumption. Investment means using money to make more money in the future. Global investment rates are around 25% of global GDP. Average return on investment is about 10% per year. In other words: using 1% or 2% of GDP would count as a success if it leads to an improvement in GDP of more than 0.1% or 0.2% next year. I think expecting a productivity gain on this scale from AI is not unrealistic for 2026.
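That break-even arithmetic can be sketched directly. A minimal sketch, using the thread's own figures ($111T global GDP in 2024, ~10% average annual return); the 10% hurdle rate is an assumption carried over from the comment above, not a measured number:

```python
# Back-of-the-envelope check of the break-even claim above.
gdp = 111e12        # 2024 global GDP in USD (World Bank figure cited in the thread)
roi_hurdle = 0.10   # assumed average annual return on investment

for share in (0.01, 0.02):
    investment = share * gdp
    # to match a 10%/year return, the investment must yield this much
    # extra output, every year, going forward
    required_annual_gain = investment * roi_hurdle
    print(f"Invest {share:.0%} of GDP (${investment / 1e12:.2f}T): "
          f"break-even needs a lasting productivity gain of "
          f"{required_annual_gain / gdp:.2%} of GDP per year")
```

So the 0.1%/0.2% figures in the comment are just the investment share multiplied by the hurdle rate.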
AI is a big deal though.
I will put it differently:
Investing 1 or 2% of global GDP to widen the wealth gap by another 50% and make the top 1% unbelievably rich, while everyone else is looking for jobs or taking out 50-year mortgages, seems like a very bad idea to me.
This problem is not specific to AI, but a matter of social policy.
For example, here in Germany the Gini index, an indicator of equality/inequality, has been oscillating around 29.75 ±1.45 since 2011.[1] In other words, the wealth distribution has been more or less stable over the last 15 years, and is less extreme than in the USA, where it was 41.8 in 2023.[2]
[1] https://de.statista.com/statistik/daten/studie/1184266/
[2] https://fred.stlouisfed.org/series/SIPOVGINIUSA
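For readers unfamiliar with the index, a minimal sketch of how a Gini coefficient can be computed, using the pairwise mean-absolute-difference form and the 0-100 scale the figures above use (the toy income lists are made up for illustration):

```python
def gini(incomes):
    """Gini coefficient on the 0-100 scale used by Statista/FRED above.

    0 = perfect equality, 100 = one person holds everything (in the limit
    of a large population).
    """
    n = len(incomes)
    mean = sum(incomes) / n
    # mean absolute difference over all ordered pairs
    mad = sum(abs(a - b) for a in incomes for b in incomes) / (n * n)
    return 100 * mad / (2 * mean)

print(gini([10, 10, 10, 10]))  # 0.0  -> perfectly equal
print(gini([0, 0, 0, 40]))     # 75.0 -> one person has it all
                               #         (max for n=4 is 100*(n-1)/n)
```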
It can be both? Both that inequality increases but also prosperity for the lower class? I don’t mind that trade off.
If someone were to say to you: you can have 10,000 more iPhones to play with, but your friends would get 100,000 iPhones, would you reject the deal?
Just being born in the US already makes you a top 10% and very likely top 5-1% in terms of global wealth. The top 1% you're harping about is very likely yourself.
>Investing 1 or 2% of global GDP to increase wealth gap 50% more
What’s your definition of wealth gap?
Is it how you feel when you see the name of a billionaire?
It’s implied you mean that the ROI will be positive. Spending 1-2% of global GDP with negative ROI could be disastrous.
I think this is where most of the disagreement is. We don’t all agree on the expected ROI of that investment, especially when taking into account the opportunity cost.
I suspect a lot of this is due to large amounts of liquidity sloshing around looking for returns. We are still dealing with the consequences of the ZIRP (Zero Interest Rate Policy) and QE (Quantitative Easing) where money to support the economy through the Great Financial Crisis and Covid was largely funneled in to the top, causing the 'everything bubble'. The rich got (a lot) richer, and now have to find something to do with that wealth. The immeasurable returns promised by LLMs (in return for biblical amounts of investment) fits that bill very well.
They still gotta figure out how their consumers will get the cash to consume. Toss out all the developers, and a largish cohort of well-paid people heads towards the dole.
Yeah, I don't think this gets enough attention. It still requires a technical person to use these things effectively. Building coherent systems that solve a business problem is an iterative process. I have a hard time seeing how an LLM could climb that mountain on its own.
I don't think there's a way to solve this issue: one-shotted apps will increasingly look more convincing, in the same way that image generation looks more convincing. But when you peel back the curtain, the output isn't quite correct enough to deploy to production. You could try brute-force vibe iterating until it's exactly what you wanted, but that rarely works for anything that isn't a CRUD app.
Ask any of the image generators to build you a sprite sheet for a 2D character with multiple animation frames. I have never gotten one to do this successfully in one prompt. Sometimes the background will be the checkerboard PNG transparency layer, except the checkers aren't all one of two colors (#000000, #ffffff); instead it's a million variations of off-white and off-black. The legs in walking frames are almost never correct, etc.
And even if they get close, as soon as you try to iterate on the first output you enter a game of whack-a-mole. Okay, we fixed the background, but now the legs don't look right, let's fix those. Great, the legs are fixed, but now the faces are different in every frame, let's fix those. Oh no, fixing the faces broke the legs again. Etc.
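The background defect described above is easy to check mechanically: a genuine transparency checkerboard uses exactly two flat colours, while generated ones scatter into many near-white/near-black variants. A minimal sketch, with pixels as plain (r, g, b) tuples; in a real pipeline you'd read them from the PNG with an image library, and the sample data here is made up:

```python
def checker_palette(pixels):
    """Return the set of distinct colours in a supposed checkerboard region."""
    return set(pixels)

# A real checkerboard: exactly two flat colours, repeated.
clean = [(255, 255, 255), (0, 0, 0)] * 8
# A generated one: off-white and off-black variants creep in.
sloppy = [(255, 255, 255), (254, 253, 255), (0, 0, 0), (2, 0, 1)] * 4

print(len(checker_palette(clean)))   # 2 -> plausibly a real checkerboard
print(len(checker_palette(sloppy)))  # 4 -> the "million variations" problem
```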
We are in a weird place where companies are shedding the engineers who know how to use these things. And some of those engineers will become solo devs. As a solo dev, funds won't be infinite, so it doesn't seem likely that the vendors can jack up prices on the consumer plans. But if companies keep firing developers, then who will actually steer the agents on the enterprise plans?
> It still requires a technical person to use these things effectively.
I feel like few people think critically about how technical skill gets acquired in the age of LLMs. Statements like this kind of ignore that those who are most productive already have experience and technical expertise. It's almost as if there is a belief that technical people just grow on trees, or that every LLM response somehow imparts knowledge when you use these things.
I can vibe code things that would otherwise take me a large time investment to learn and build. But I don't know how or why anything works. If I get asked to review the output to ensure it's accurate, it takes me so much time that it would be easier to just actually learn the thing. It feels like those most adamant about being more productive in the age of AI/LLMs don't consider any of the side effects of its use.
> But when you peel back the curtain, that output isn't quite correct enough to deploy to production
What if we change current production environments to fit that black box and make it run somehow with 99% availability and good security?
Especially when it comes down to integration with the rest of the business processes and the people around these "single apps" :-)
At some point rich people stop caring about money and only care about power.
It's a fun thought, but you know what we call those people? Poor. The people who light their own money on fire today are ceding power. The two are the same.
Why do we need people to consume when we have the government?
Serious question. As in, we built the last 100 years on "the American consumer", the idea that it would be the people buying everything. There is no reason that needs to, or necessarily will, continue. Don't get me wrong, I kind of hope it does, but my hopes don't always predict what actually happens.
What if for the next 100 years it is the government buying everything, and the vast bulk of the people are effectively serfs, who HAVE to stay in line or else they go to debt prison or tax prison, where they become slaves? (Yes, the US has a fairly large population of prison laborers who are forced to work for 15-50 cents/hour; the lucky ones can earn as much as $1.50/hour: https://www.prisonpolicy.org/blog/2017/04/10/wages/)
Where will the government get the money to buy anything if the billionaires and their megacorps have it all and spend sufficient amounts to keep the government from taxing it? We have a K-shaped economy where the capital class is extracting all of the value from the working class, who are headed toward subsistence levels of income while the lower class dies in the ditch.
Like before - debt!
This prevents the consumers from slacking off and enjoying life; instead they have to continue to work, work, work. They get to consume a little and work much more (after all, they also have to pay interest, and across all the consumer credit extended to the masses that adds up to a lot).
In this scenario, it does not even matter that many are unable to pay off all that debt. As long as the amount of work extracted from them significantly exceeds the amount of consumption allowed to them, all is fine.
The chains that bind used to be metal, but we progressed and became a civilized society. Now it's the financial system and the laws. “The law, in its majestic equality, forbids rich and poor alike to sleep under bridges, to beg in the streets, and to steal their bread.” (Anatole France)
I know that all investments have risk, but this is one risky gamble.
US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.
There is no shortage of money to build housing. There is an abundance of regulatory burdens in places that are desirable to live in.
It's not due to a lack of money that housing in SF is extremely expensive.
SF is not the only place where housing is expensive. There are plenty of cities where they could build more housing and they don't because it isn't profitable or because they don't have the workers to build more, not because the government is telling them they can't.
> US$700 billion could build a lot of infrastructure, housing, or manufacturing capacity.
I am now 100% convinced that the US has the power to build those things, but it will not, because that would mean the lives of ordinary people being elevated even more, and this is not what brutal capitalism wants.
If it can make the top 1% richer over a 10-year span versus doing good for everyone over 20 years, it will go with the former.
What $700 billion can't do is cure cancers, Parkinson's, etc. We know because we've tried, and that's barely a sliver of what it has cost so far, for middling results.
Whereas $700 billion in AI might actually do that.
Your name is well earned! "Can't cure cancers" is impressively counterfactual,[0] as 5-year survival after a cancer diagnosis is up in almost all categories. Despite every cancer being a unique species trying to kill you, we're getting better and better at dealing with them.
[0]https://www.cancer.org/research/acs-research-news/people-are...
"The real question is whether the boom is, economically, a mistake."
The answer to this is two part:
1. Have we seen an increase in capability over the last couple of years? The answer here is clearly yes.
2. Do we think that this increase will continue? This is unknown. It seems so, but we don't know and these firms are clearly betting that it will.
1a. Do we think that with existing capability that there is tremendous latent demand? If so the buildout is still rational if progress stops.
There's one additional question we could have here, which is "is AI here to stay and is it net-positive, or does it have significant negative externalities"
> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.
We've so far found two ways in recent memory that our economy massively fails when it comes to externalities.
Global warming continues to get worse, and we cannot coordinate globally to stop it, because the markets keep saying "no, produce more oil, make more CO2, it makes _our_ stock go up until the planet eventually dies, but our current stock value is more important than the nebulous entire planet's CO2".
Ads and addiction to gambling games, TikTok, etc. are also a negative externality: the company doing the advertising or making the gambling game profits, but at the expense of effectively robbing money from those with worse impulse control and gambling problems.
Even if the market votes that AI will successfully extract enough money to be "here to stay", I think that doesn't necessarily mean the market is getting things right nor that it necessarily increases productivity.
Gambling doesn't increase productivity, but the market around kalshi and sports betting sure indicates it's on the rise lately.
> People will take courses in those things and try to get a piece of the winnings.
The problem is boom-bust cycles. Electricians will always be in demand, but it takes about 3 years to properly train even a "normal" residential electrician - add easily 2-3 years on top of that to work on the really nasty stuff, aka 50 kV and above.
No matter what, the growth of AI is too rapid and cannot be sustained. Even if the supposed benefits of AI all come true, this level of growth cannot be upheld, because everything else suffers.
> it takes about 3 years to properly train even a "normal" residential electrician
To run ordinary wire with predefined dimensions in exposed conduits? No way it takes more than a few weeks.
It’s protected by requiring many hours (years) of apprenticeship. These kinds of heavily unionized jobs only reward seniority. Gotta pay your dues buddy!
I'm talking about proper German training, not the kind of shit that leads to what Cy Porter (the home inspector legend) exposes on Youtube.
Shoddy wiring can hold up for a looong time in homes, because outside of electric car chargers and baking ovens nothing consumes high current over a long time, and as long as no device develops a ground fault, even the lack of a GFCI isn't noticeable. But a data center? Even smaller ones routinely rack up megawatts of power, and large hyperscaler deployments hundreds of megawatts. Sustained, not peak. That puts a lot of stress on everything involved: air conditioning, power, communications.
And for that to hold up, your neighbor Joe who does all kinds of trades as long as he's getting paid in cash won't cut it.
> What we really want to ask ourselves is whether our economy is set up to mostly get things right, or whether it is wastefully searching.
I can’t speak to the economy as a whole, but the tech economy has a long history of bubbles and scams. Some huge successes, too, but it gets things wrong more often than it gets them right.
AI could be here to stay and "chase a career as an electrician helping build datacenters" could also be a mistake. The construction level could plateau or decline without a bubble popping.
That's why it can't just be a market signal of "go become an electrician" when the feedback loop is so slow. It's a social/governmental issue. If you make careers require expensive up-front investment largely shouldered by the individuals, you will not only be slow to react but also end up with scores of people who "correctly" followed the signals right up until the signals went away.
> you'll also end up with scores of people who "correctly" followed the signals right up until the signals went away.
I think this is where we're headed, very quickly, and I'm worried about it from a social stability perspective (as well as personal financial security of course). There's probably not a single white-collar job that I'd feel comfortable spending 4+ years training for right now (even assuming I don't have to pay or take out debt for the training). Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.
Lots and lots of people who did or will do "all the right things," with no benefit earned from it. Even if hypothetically there is something new you can reskill into every five years, how is that sustainable? If you're young and without children, maybe it is possible. Certainly doesn't sound fun, and I say this as someone who joined tech in part because of how fast-paced it was.
> Many people are having skills they spent years building made worthless overnight, without an obvious or realistic pivot available.
I'd like to see real examples of this, beyond trivial ones like low-quality copywriting (i.e. the "slop" before there was slop) that just turns into copyediting. Current AIs are a huge force multiplier for most white-collar skills, including software development.
> If AI is here to stay, as a thing that permanently increases productivity,
Thing is, I am still waiting to see where it increases productivity, aside from some extremely small niches like speech-to-text and summarizing small texts very fast.
Serious question, but have you not used it to implement anything at your job? Admittedly I was very skeptical, but last sprint, in 2 days, I got 12 pull requests up for review by running 8 agents on my computer in parallel and about 10 more on cloud VMs. The PRs have all been double-reviewed, QA'd, and merged. The ones that don't have PRs yet are larger refactors, one 40k loc and the other 30k loc, and I just need actual time to go through every line myself and self-test appropriately; otherwise even more would have been finished. These are all items tied to money in our backlog. It would have taken me about 5 times as long to close those items out without this tooling. I also would not have had as much time to produce and verify as many unit tests as I did. Is this not increased productivity?
So you roll the dice and call yourself a software engineer, basically.
> I am still waiting to see where it increases productivity...
If you are a software engineer and you are not using AI to help with software development, then you are missing out. Like many other technologies, using AI agents for software dev work takes time to learn and master. You are not likely to get good results if you try it half-heartedly as a skeptic.
And no, nobody can teach you these skills in a comment on an online forum. This requires trial and error on your part. If well-known devs like Linus Torvalds are saying there is value here, and you are not seeing it, then the issue is not with the tool.
These are definitely skills I don't want to have, don't worry.
Are you a doctor or a farmer?
If you are a software engineer, you are missing out on a lot, literally a lot!
What is he missing? Do you have anything quantitative other than an AI marketing blog or an anecdote?
Your comment doesn't say anything
If