Anecdotally, our company's next couple of quarters are projected to be a bloodbath. Spending is down everywhere, nearly all of our customers are pushing for huge cuts to their contracts, and in turn literally any cost we can jettison to keep jobs is being pushed through. We're hearing the same from our customers.
AI has been the only new investment our company has made (half-hearted at that). I definitely get the sense that everyone is pretending things are fine to investors while playing musical chairs.
Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons: On one hand, the economy is legitimately growing and shares are becoming more valuable. But on the other hand, people and corporations could be cutting spending en masse so there's extra cash to flood the stock markets and drive up prices regardless of future earnings.
I work for one of the largest packaging companies in the world. Customers across the board in the US are cutting back on how much packaging they need due to presumably lower sales volume. Make of that information what you will.
My eBay sales have been way down this year too, and so far Q4 is not looking good at all. People are cutting back across the board, and it's going to be very ugly once Wall Street stops plugging their ears and covering their eyes.
This is an indicator that is very close to the sale time. If you can share and don't mind sharing, how did whatever you saw during 2020/2021 correlate with retail sales?
Car manufacturers, right at the beginning of COVID, started cutting component orders from their suppliers, thinking that demand would drop due to a COVID-induced recession.
> Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons
Reason #1 is lower interest rates, which increase the present value of future cash flows in DCF models. A professor who does not mention that does not know what they are talking about.
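To make that mechanism concrete, here is a minimal sketch (the cash-flow numbers are purely hypothetical): the same ten-year stream of cash flows is worth more today when the discount rate is lower, which is how rate cuts can lift prices without any change in earnings.

```python
# Hypothetical illustration: present value of a fixed cash-flow stream
# under two discount rates. The cash flows never change; only the rate does.

def present_value(cash_flows, rate):
    """Discount each year's cash flow back to today (years 1, 2, ...)."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100] * 10  # $100/year for ten years in both scenarios

pv_high = present_value(flows, 0.05)  # 5% discount rate -> ~772.17
pv_low = present_value(flows, 0.02)   # 2% discount rate -> ~898.26

print(round(pv_high, 2), round(pv_low, 2))
```

Dropping the rate from 5% to 2% adds roughly 16% to the present value here, with identical "earnings". That is the missing reason #1.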
Likewise on all downward business signals at my employer. I was thankfully in school during 09, but this easily feels like the biggest house of cards I have ever experienced as an adult.
An econ professor expounding on the stock market, lol. As someone with a couple of dumb fancy econ degrees and whose career has been quite good in the stock market, this comment made me laugh.
Almost all my money goes to my mortgage, shit from China, food, and the occasional service. It does make me wonder sometimes how it all works. But it's been working like this for a long time now.
Real estate. The US economy floats on the perpetually increasing cost of land. That's where your mortgage money goes: to a series of financial instruments that allow others to benefit from the eternally rising value of "your" property.
Pretty sure the stagnation has a cause beginning in 2025, and it has to do with things like Canada refusing to buy ANY American liquor in retaliation, and China refusing to buy ANY soybeans in retaliation. In retaliation for what, you might ask?
I leave that as an exercise for the reader. If you are unable to answer that question honestly to yourself you need to seriously consider that your cognitive bias might be preventing you from thinking clearly.
Depends on which side of the tariffs an economy happens to be, and where, geopolitically.
AI, or whatever a mountain of processors churning all the world's data will be called later, still has no use case other than total domination, for which it has brought a kind of lame service to all the totally dependent go-along-to-get-along types, but nothing approaching an actual guaranteed answer for anything useful and profitable. Lame, lame, infinitely fucking lame tedious shit that has prompted most people to stop even trying, and so a huge, vast amount of genuine human inspiration and effort is gone.
I think that this concern is valid, but there are deeper, more foundational issues facing the US that have led to the sum of the issues mentioned in the post.
We can say that if this rotten support beam fails the US is in trouble but the real issue is what caused the rot in the first place.
The effective removal of regulations via bribery and a lack of enforcement, plus the explicit removal of regulations meant to reduce corruption and insider trading. AI is not required to create these systemic exploitations, and they are far more efficient at extracting value than any AI system.
I think a better metaphor for interconnected economies is that of chains always breaking at their weakest link.
Sure, well done, your link in the chain didn't break… but your anchor is still stuck on the bottom of the ocean and you're on your spare anchor (with a shorter chain) until you get back to harbour.
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised, which, in the realm of exciting new field of academic research is pretty low-stakes all things considered... to being terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" of the bet is really starting to feel like there is no backup. With the cessation of EV tax credits, the slowdown in infra spending, healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals, battery tech, EVs, solar, then of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez does it feel like the setup for a big shift in economic power especially with change in US foreign policy.
I'll offer two counter-points. Weak but worth mentioning. wrt China there's no value to extract by on-shoring manufacturing -- many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering. I think there's a level of introspection the US needs to contend with, but that ship has sailed. We should be forward looking in what we can do outside of manufacturing.
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our workstreams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits à la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in 1996. I converted from vim to VS Code + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
> which makes everyone more efficient and productive
There is something bizarre about an economic system that pursues productivity for the sake of productivity even as it lays off the actual participants in that system.
An echo of another commenter who said that it's amazing that AI is now writing comments on the internet.
Which is great, but it actively makes the internet a worse place for everyone and eventually causes people to simply stop using your site
Somewhat similar to AI making companies more productive - you can produce more than ever, but because you’re more productive, you don’t hire enough and ultimately there aren’t enough people to consume what you produce
> many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering.
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping the monetary reserve status. A weaker dollar will make it competitive for foreign countries to manufacture in the US. The problem is that if the dollar weakens, investors will fly away. But the AI boom offsets that.
For now it seems to work: the dollar lost more than 10% year to date, but the AI boom kept investors in the US stock market. The trade agreements will protect the US for a couple of years as well. But ultimately it's a time bomb for the population, which will wake up in 10 years with half their present purchasing power, in non-dollar terms.
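As a rough sketch of that last claim (the 7% figure is an assumption for illustration, not from the comment, though it roughly matches the year-to-date decline mentioned): a steady annual loss in external value compounds, and about 7% a year is what it takes to halve purchasing power in ten years.

```python
# Hypothetical compounding arithmetic: how much external purchasing power
# remains after `years` of a constant `annual_loss` in currency value.

def remaining_purchasing_power(annual_loss, years):
    return (1 - annual_loss) ** years

# ~7%/year for a decade leaves roughly half the original purchasing power.
print(round(remaining_purchasing_power(0.07, 10), 3))  # 0.484
```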
I think an interesting way to measure the value is to argue "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years more to grow accustomed to Google than to LLMs; it's that a low-confidence answer or an excessively florid summary of a document is not really that useful.
Another thing to note about China: while people love pointing to their public transit as an example of a country that's done so much right, their (over)investment in this domain has led to a concerning explosion of local-government debt obligations that isn't usually well represented in the overall debt-to-GDP ratios many people quote. I only mention that to note that things are not all the propaganda suggests they might be in China. The big question everyone is asking is: what happens after Xi? Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money?" LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing in this right now in order to properly price how much value they'll be generating in five years.

Except it's critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, which is spending $nnB to invest in OpenAI, which is spending $nnB to invest in AMD, with which CoreWeave will also be spending $nnB, and in which Nvidia has an $nnB investment... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
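The quadruple-counting point can be sketched with toy numbers (the figures and the exact deal chain are hypothetical, loosely mirroring the comment above): summing every announced deal gives a headline total four times larger than the net cash that actually entered the circle.

```python
# Toy model of circular deal-counting: each hop of the same $100 gets
# reported as a separate "capex" commitment (amounts in $B, hypothetical).

deals = [
    ("Oracle", "Nvidia", 100),    # chip purchases
    ("Nvidia", "OpenAI", 100),    # equity investment
    ("OpenAI", "AMD", 100),       # chip/equity deal
    ("AMD", "CoreWeave", 100),    # capacity commitment
]

headline_total = sum(amount for _, _, amount in deals)  # what gets quoted

# Net position per company: inflows minus outflows.
net = {}
for payer, payee, amount in deals:
    net[payer] = net.get(payer, 0) - amount
    net[payee] = net.get(payee, 0) + amount

print(headline_total)  # 400
print(net)  # intermediaries net to zero; only the chain's ends move money
```

Only Oracle's $100 actually left the circle and only CoreWeave's $100 arrived; Nvidia, OpenAI, and AMD net to zero, yet the headline sums to $400.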
>> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast
Lol, I read this a few hours ago, maybe without enough caffeine, but I read it as "my comment from 70 *years* ago" because I thought you somehow were at the Dartmouth Summer Research Project on Artificial Intelligence workshop in 1956!
I somehow thought, "Damn... already back there, at the birth of the field, they thought it was too fast." I was entirely wrong, and yet in some convoluted way maybe it made sense.
Gotta thank AI for keeping my portfolio from collapsing, at least for now. But yeah, I totally see the point: AI investment might be one of the few things holding up the U.S. economy, and it doesn't even have to fail spectacularly to cause trouble. Even a "slightly disappointing" AI wave could ripple across markets and policy.
> more than a fifth of the entire S&P 500 market cap is now just three companies — Nvidia, Microsoft, and Apple — two of which are basically big bets on AI.
These 3 companies have been heavyweights since long before AI. Before AI, you couldn't get Nvidia cards due to crypto, or gaming. Apple is barely investing in AI. Microsoft has been the most important enterprise tech company for my entire lifetime.
Nvidia market cap has increased about 10x since the crypto-shortage years. It wasn't small before, but there's a big difference between ~1% of the market and ~10% of the market in terms of systemic risk.
Also, as of last year about 80% of their revenues were from data center GPUs designed specifically for "AI", and that's undoubtedly continuing to grow as a share of their revenues.
You're missing the point. Setting aside whether one buys it or not, the author is saying those companies, whatever their history, have pushed a significant amount of their... chips into a bet on AI.
One of the most frustrating things to me regarding the potential AI bubble was a very smart and intelligent researcher being incredibly bullish on AI on Twitter, because if you extrapolate graphs measuring AI's ability to complete long-duration tasks (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...) or other benchmarks, then by 2026 or 2027 you've basically invented AGI.
I'm going to take his statements at face value and assume that he really does have faith in his own predictions and isn't trying to fleece us.
My gripe with this statement is that this prediction is based on proxies for capability that aren't particularly reliable. To elaborate, the latest frontier models score something like 65% on SWE-bench, but I don't think they're as capable as a human that also scored 65%. That isn't to say that they're incapable, but just that they aren't as capable as an equivalent human. I think there's a very real chance that a model absolutely crushes the SWE-bench benchmark but still isn't quite ready to function as an independent software engineering agent.
So a lot of this bullishness basically hinges on the idea that if you extrapolate some line on a graph into the future, then by next year or the year after all white-collar work can be automated. Terrifying as that is, this all hinges on the idea that these graphs, these benchmarks, are good proxies.
There's a huge disconnect between what the benchmarks are showing and what those of us using LLMs are experiencing day to day. According to SWE-bench, I should be able to outsource a lot of tasks to LLMs by now. But practically speaking, I can't get them to reliably do even the most basic of tasks. Benchmaxxing is a real phenomenon. Internal private assessments are the most accurate source of information we have, and those seem to be quite mixed for the most recent models.
How ironic that these LLMs appear to be overfitting to the benchmark scores. Presumably these researchers deal with overfitting every day, but they can't recognize it right in front of them.
>> by next year or the year after all white-collar work can be automated
Work generates work. If you remove the need for 50% of the work then a significant amount of the remaining work never needs to be done. It just doesn't appear.
The software that is used by people in their jobs will no longer be needed if those people aren't hired to do their jobs. There goes Slack, Teams, GitHub, Zoom, Powerpoint, Excel, whatever... And if the software isn't needed then it doesn't need to be written, by either a person or an AI. So any need for AI Coders shrinks considerably.
Even in the unlikely event AI somehow delivers on its valuations and thereby doesn't disappoint, the implied negative externalities on the population (mass worker redundancy, inequality that makes even our current scenario look rosy, skyrocketing electricity costs) means that America's and the world's future looks like a rocky road.
I don't think many businesses are at the stage where they can actually measure whatever AI is delivering.
At one business I know they fired most senior developers and mandated junior developers to use AI. Stakeholders were happy as finally they could see their slides in action. But, at a cost of code base being unreadable and remaining senior employees leaving.
So on paper, everything is better than ever: cheap workers deliver work fast. But I suspect in a few months' time it will all collapse.
Most likely they'll be looking to hire for complete rewrite or they'll go under.
In the light of this scenario, AI is false economy.
When standard of living increases significantly, inequality often also increases. The economy is not a zero sum game. Having both rising inequality and rising living standards is generally the thing to aim for.
Both parties seem to agree we should build more electric capacity, that does seem like an excellent thing to invest in, why aren't we?
As the cost of material goods decreases, they will become near free. IMO demand for human-produced goods and experiences will increase.
Not necessarily. If $X is enough to get you 10x more software engineering effort, people may be willing to increase their spending on software engineering rather than decrease it.
Solar is extremely cheap and battery costs are dropping quickly. IMO you may see US neighborhoods, especially rural ones, disconnecting from the grid and rolling their own solutions.
This China rare-earth thing may slow the battery price drop somewhat, but not for long, because plenty of chemistries don't rely on rare earths, and there will soon be plenty of old EV packs with some life left in them to serve as grid storage.
I personally hope AI doesn't quite deliver on its valuations, so we don't lose tons of jobs, but instead of a market crash, the money will rotate into quantum and crispr technologies (both soon to be trillion dollar+ industries). People who bet big on AI might lose out some but not be wiped out. That's best casing it though.
Other than collapsing the internet when every pre-quantum algorithm is broken (nice jobs for the engineers who need to scramble to fix everything, I guess) and even more uncrackable comms for the military. Drug and chemistry discovery could improve a lot?
And to be quite honest, the prospect of a massive biotech revolution is downright scary rather than exciting to me because AI might be able to convince a teenager to shoot up a school now and then, but, say, generally-available protein synthesis capability means nutters could print their own prions.
Better healthcare technology in particular would be nice, but rather like food, the problem is that we already can provide it at a high standard to most people and choose not to.
Quantum has already peaked in hype. It doesn't scale, like, at all. It can't be used for abstract problems. We don't even know the optimal foundation on which to start developing. It is now in fusion territory. Fusion is also objectively useful, with immense depth of research potential; it's just that humans are too dumb for it, for now, and so we will do it at scale centuries later.
CRISPR would clash with the religious fundamentalists slowly coming back to power in all Western countries. Potentially it will even be banned, like abortion.
I like this, because I hate the idea that we should either be rooting for AI to implode and cause a crash, or for it to succeed and cause a crash (or take us into some neo-feudal society).
"quantum" and "biotech" have been wishful thinking based promises for several years now, much like "artificial intelligence"
we need human development, not some shining new blackbox that will deliver us from all suffering
we need to stop seeking redemption and just work on the very real shortcomings of modern society... we don't even have scarcity anymore but the premise is still being upheld for the benefit of the 300 or so billionaire families...
"Elections have consequences." ... Managing a country is a big and critical job. It's hard to imagine it was handed over to a bunch of people who are barely qualified and hardly care about the future of America.
> In those intervening years, a bunch of AI companies might be unable to pay back their debts.
Dumb question: isn't a lot of the current investment in the form of equity deals and/or funded by existing tech company profit lines? What do we actually know about the debt levels and schedules of the various actors?
Google, Meta, and Microsoft are funding AI out of their existing profits so they will probably be fine. The others may be getting GPUs in exchange for equity but they still have to pay real money for the datacenters, generators, etc. That real money is borrowed and they would default in case of a crash. Potentially hundreds of billions of defaults.
Borrowing money got expensive... the Fed rate is largely responsible for that and there's been a big push to adjust it. As it stands, during and since COVID a lot of people maxed out their credit, or significantly increased their debt for a number of reasons, from home/household needs during the shutdowns to increased cost of living (starting with overpriced groceries).
This has taken an effect. A lot of people are strapped and no longer participating in larger purchases beyond the basic needs as just those have gone up in price so much relatively to income. Initially a lot of it was just greed and taking advantage of the pandemic as an excuse, now people are genuinely stretched thin.
This is how I'm starting to view many of these things. It's just that the metrics we use to evaluate the economy are getting out of sync. For instance, if "consumer sentiment is at Great Recession levels", why do we need some other indicator to accept that there's a problem? Isn't that a bad thing on its own?
"Bad" is a judgment call. Trump approval ratings haven't dipped that far, so Congressional Republicans won't dare abandon him and there's not much political will for change.
It might change if we get into millions of foreclosures like the great recession and the pain really hits home. From what I can tell right now they're in wartime mode where they just need to buckle down until Trump wins and makes other countries pay for tariffs or something.
We're definitely not in a crash yet, but it does feel like we're on a roller coaster just tipping over the peak: unemployment is rising for the first time in a couple of years, there's basically no GDP growth apart from AI investment, and the yield curves look scary. The crash could come any second now, especially because tech earnings week is coming up, and that could indicate how much revenue, or lack thereof, the AI investment is bringing in.
So the crash is only official once Wall Street's exuberance matches the economy as perceived by its workforce? Is that a crash, or just a latent arrival of the signal itself?
Yeah; in general it's very difficult to detect the economy going off the rails _in real time_; it tends to be clearly visible only afterwards. It's entirely possible we're already past the point of no return; this cycle's equiv of the Lehman Brothers collapse (if there even is such a clear signal, and there isn't always) could happen tomorrow.
US uniquely is suited to maximally benefit startups emerging in a new space, but maximally prevent startups entering a mature space. No smart, young person in the US matriculates into industries paved over by incumbents as they wisely anticipate that they will be in an industry deliberately hamstrung by regulatory capture.
All growth is in AI now because that's where all the smartest people are going. If AI were regulated significantly, they'd go to other industries, and those would be growing faster (though likely not as much).
However, there is the broader point that AI is poised to offer extreme leverage to those who win the AGI race, justifying capex spending at such absurd margins.
I think there should be regulation that protects individuals and limits the power of incumbents. There are many regulations that ostensibly protect individuals but only exist to empower incumbents.
It feels like the only other big near term tech bet is Meta and their glasses.
It would be pretty fun to see AI fizzle and AR glasses take off. Not a huge Zuck/Meta fan, but I really do appreciate the actual big bet on something else.
Waymo is a pretty big bet, though it wouldn't be fun if unemployed people were to fall back on being Uber/Lyft drivers only to get outcompeted by AI cars.
The question is this: At what point will the market and economy stop looking forward at what AI promises to do and start looking backwards at what it has done? We’ve had this technology for three years and what it has done for us amounts to very little more than pollute every form of communication with low value, mass produced drivel and destroyed the ability of teachers to evaluate their students. We have no new medicine, no new math, no new physics in any meaningful way. Yet all eyes are still on the carrot that we’ll never reach: AGI. When will we realize that even the name itself, artificial intelligence, is a lie? It’s just a database of most human knowledge with a very intuitive human language interface, but knowledge and intelligence are not the same thing. At some point, the world will be forced to acknowledge what little it has received in return for its misplaced faith.
Don't use P/E as an estimate of a bubble... profits often mean revert.
Use something closer to market value to GDP (with some adjustments). That is a much better estimate. John Hussman has imo the best of such metrics. Here's his thoughts from August: https://www.hussmanfunds.com/comment/mc250814/
Yes. A lot now depends on when "growth" stops. But GDP is very hard to grow at a sustained rate high enough to justify higher valuations, even through industrial revolutions, assembly lines, transistors, and the internet.
At the end of the day, if you look at almost any government, roughly 2/3 of expenses go toward healthcare and education, things for which AI workflows are very likely to continue offsetting a larger and larger percentage of the costs.
Can we still have a financial crisis from all this investment going bust because it might take too long for it to make a difference in manufacturing enough automation hardware for everyone? Yes.
But the fundamentals are still there: parents will still send their kids to some type of school, and people will trade goods in exchange for health services. That's not going to change. Neither will the need to use robots in nursing homes; I think that assumption is safe to make.
What's difficult to predict is the change in adoption in manufacturing and repairs (be that repairing bridges or repairing your espresso machine), because that is more of a "3D" issue and hard to automate reliably (think about how many GPUs it would actually take today to get a robot to reason out and repair a hole in your drywall), given that your RL environments and training data needs grow exponentially. Technically, your phone should have enough GPU performance to do your taxes with a 3B model and a bunch of tools; eventually it'll even be better than you at it. But to run an actual robot with multiple cameras and such, doing troubleshooting and decision making... you're gonna need a whole 8x rack of GPUs for that.
And that's what makes it difficult to predict what's going to happen. The areas under the curve can vary widely. We could get a 1B AGI model in 6 months, or it could take 5 years for agentic workflows to fully automate everyone's taxes and actually replace 2/3 of radiology work...
Either way, while there's a significant chance this transition to the automation age will be rough, I am overall quite optimistic, given the fundamentals of what governments actually spend the majority of their money on.
I wouldn't even call it political. It's financial, and should be criminal. The people who are elected to represent us are just taking bribes and being paid off to allow corporations to screw us over.
Talk to an educator.
Education is being actively harmed by AI. Kids don’t want to do any difficult thinking work so they aren’t learning. (Literally any teacher you talk to will confirm this)
AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy. Three things that are absolutely essential for medicine. Also everything for healthcare needs regulatory approval so costs go up and flexibility goes down. We’re ten years away from any AI for medicine being cost effective.
Having an AI do your taxes is absurd. They regularly hallucinate. I 100% guarantee that if you do your taxes with AI you won't pass an audit. AI literally can't count. You'd be better off asking it to vibecode a replacement for TurboTax. But again, the product won't be AI; it will be traditional code.
Trying for AGI down the road of an LLM is insanity sauce. It's a simulated language center that can't count, can't do systems thinking, and can't cite known facts. We're not six months away; we're a decade away, or a "cost-effective fusion" distance (defined as perpetually 20 years in the future from any point in time).
There are at least six Silicon Valley startups working on AGI. Not a single one of them has published an architecture strategy that might work. None of the “almost AGI” products that have ever come out have a path to AGI.
Meh is the most likely outcome. I say this as someone who uses it a lot for things it is good at.
> AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy.
The main question is whether humans are any better at that. I have experience with doctors: one gave a prescription of X mg; I asked why; he said because some study said so; I went home, pulled the study, and it said XX mg. Doctors can make things up all the time without much consequence, and likely do. For AI, corporations and the community can do all kinds of benchmarking and evaluation at industrial scale.
I think if there's a rational reasoning behind Trump unleashing ICE and the national guard on the domestic population, this must be it: "the economy is doing really bad, and we need a smokescreen so people won't talk about it."
Hmmm kinda ties into the whole problem of well-off/happy people not being particularly eager to chant "foreigners out", but when they're desperate they take any explanation for their misery they can get their hands on that sounds workable (because, no, you can't go up to a billionaire and just "take all their stuff", but you CAN beat up a foreigner or other disadvantaged person that is worse off than you)
I think another reason for the recent global rise of anti-immigration parties is also that the relative economic value of immigrants (as unskilled labor) has gone down, and the "costs" (cultural/language friction) have become more visible.
Reminder: If you're going to feel doomer about how tech capex represents like nn% of US GDP growth, you should do some research into what percentage of US GDP growth, especially since 2020, has been the result of government printing. Arguably, our GDP growth right now is more "real" than the historical GDP growth numbers between 2020-2023, but all of it is so warped by policy that its hard to tell what's going on.
We're in extremely unprecedented times. Sometimes maybe good, sometimes maybe shit. The old rules don't apply. Etc.
separate from this article, I don't have a very high opinion of the author. he has an astonishing record of being uninformed and/or just plain wrong in everything I've ever heard him write about.
but as far as this article, the "tech capex as a percentage of GDP growth" is an incredible cherrypicking of statistics to create a narrative... when tech became a bloodbath starting in 2022, the rest of the economy continued on strong. all the way until 2025, the rest of the economy was booming while tech layoffs and budget cuts after covid were exploding. so starting that chart in early 2023, when tech had bottomed out (compared to the rest of the economy), is misleading. tech capex as a percentage of overall GDP has been consistently rising since 2010 - https://gqg.com/highchartal-paper-large-tech-capex-spend-as-...
this is obviously related to the advent of public cloud computing more than anything. the reason this chart appears to clash with the author's chart is that the author's chart specifically calls out the percentage of GDP growth, not overall GDP. so the natural conclusion is that while tech has been in borderline recessionary conditions since 2022, it is now becoming stable (if not recovering), while the rest of the economy, which didn't have the post-covid pullback (nor the same boom during covid, of course), is now having issues largely due to geopolitics and global trade.
is there an AI bubble? who cares. it's not as meaningful to the broader economy as these cherrypicked stats imply. if it's a bubble, it represents maybe 0.3% of GDP. no one would be screaming from the mountaintops about a shaky economy and a bubble if that same 0.3% was represented by a bubble in the restaurant industry or manufacturing. in fact, in recent years, those industries DID have inflationary bubbles and it was treated as a positive thing for the most part.
I think a lot of this overanalysis and prodding for flaws in tech is generally an attempt at schadenfreude hoping that tech becomes just another industry like carpentry or plumbing. in particular, hoping for a scenario where tech is not as culturally impactful as it is today. because people are worried and frustrated about the economy, don't understand the value of tech, and hope it stops sucking up so much funding and focus by society in general.
they're not 100% wrong in being untrusting or skeptical of tech. the tech industry hasn't always been the best steward of the wealth and power it possesses. but they are generally wrong about the valuations or impact of tech on the economy, as if the people spending all this money are clueless. the stock market fell 900 points on friday, wiping out over $1 trillion in value over the course of a couple hours, yet the hundreds of billions invested in datacenters is supposedly the sign of impending economic doom.
is the economy good? I don't think it's doing great. but that has little to do with AI one way or another. "AI" is just another trend of making technology more accessible to the masses: no scarier, more complicated, or more impactful than microcomputers, DSL, cellular phones, or youtube. and while the economy crashed in 2008, youtube and facebook did well. yet there was none of this dooming about tech specifically back then, simply because the tech industry wasn't as controversial at the time.
He is a partisan hack. During the election last year he consistently posted that gdp growth was real, the economy was booming, and it was all thanks to Biden/Harris. I called him out on it on Twitter, and he was unabashed about being a partisan propagandist. Not surprisingly, now that the politics have changed, the history changes.
Anything he says on any topic should be treated as suspect, and probably best ignored.
The person you're replying to also acknowledged GDP was growing despite the tech layoffs. Is your assertion that GDP wasn't growing in 2024? If so, I'd love to see any evidence.
There's a lot of people who can only process their own failures by assuming that everyone and everything must also, eventually fail; that anything successful is temporary and "not real". And there's a lot of down people in the tech industry right now; we're in a recession, after all.
There's also a significant number of people (e.g. Doctorow) who have made their entire brand on doomerism; and whether they actually believe what they say or have just become addicted to the views is an irrelevant implementation detail.
The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting. It's just an excuse for people to not read the article, go straight to the comments, and parrot the same thing they've parroted twenty times.
You are way too nice with the author, if I were you I’d omit the fake empathy which dilutes your substantial points. The author is hallucinating worse than AI.
So what if other people downvote you for being too critical.
ha, I honestly don't have that strong an opinion of the author, because the few tidbits I've seen I didn't even read all the way: the info on the surface was so flawed. this was the first article of theirs I've actually read, so I can't say they're malicious or hallucinating, because I haven't looked into why they hold the opinions they do. but I'm definitely not inclined to trust them, which is why I had to say that I've recognized the pattern of "Noah Smith" (I don't know who they are, where they work, nothing) seeming to just ship their own copy/paste of whatever trendy (and flawed) opinion is hot at the moment.
Ah, so I see we've entered the "normalizing the end of presidential term limits" part of the downward spiral. Maybe I need to accelerate my plans to get the fuck out of here.
We're already past the point where there is no meaningful notion of "normal" that actually impacts what happens in government. Normalizing things doesn't matter that much if people care so little that they elect someone who's done what Trump did his first time.
I mean he's selling the hats and I've seen some talking heads on the news say they'll look at ways for him to do it. The two term limit is a kinda recent precedent all things considered, so...
boomers have already agreed multiple times this century that businesses are not allowed to go bankrupt in fear that their retirement portfolios may not be juiced to the gills. So instead we bail everyone out on the taxpayers dime and leave the debt for some poor schmuck in the future to figure out.
It (was) also settled precedent that he can't stop spending money required to be spent by Congress (settled during Nixon's term), but the supremes decided it's different now. Same for firing heads of supposed independent federal departments, which was supposed to prevent presidential manipulation.
And the s.c. created presidential immunity out of nothing. For now the president has unchecked power, the conservative dream of a unitary executive.
This will all end when a Democrat is in power again. This is not a sarcastic exaggeration, one way they teed this up was shadow docket decisions like the Kavanaugh rule (ice can arrest/kidnap you based on appearance), it's not a precedent as shadow docket so they can reverse it any time.
In the normative sense of "another atrocity like this cannot occur", then yes.
However, your comment instead sounds like you are dismissing it as a non-concern... in which case I suggest you wake the heck up. We've had months now of seeing the President and his cabinet actively and willfully breaking federal and Constitutional law, with the entire Republican legislature complicit.
It wouldn't even be the first time states tried to remove him from their ballots, either. [0]
I can’t help but think a lot of these comments are actually written by AI — and that, in itself, showcases the value of AI. The fact that all of these comments could realistically have been written by AI with what’s available today is mind-blowing.
I use AI on a day-to-day basis, and by my best estimates, I’m doing the work of three to four people as a result of AI — not because I necessarily write code faster, but because I cover more breadth (front end, back end, DevOps, security) and make better engineering decisions with a smaller team. I think the true value of AI, at least in the immediate future, lies in helping us solve common problems faster. Though it’s not yet independently doing much, the most relevant expression I can think of is: “Those who cannot do, teach.” And AI is definitely good at relaying existing knowledge.
What exactly is the utility of AI writing comments that seem indistinguishable from people? What is the economic value of a comment or an article?
At present rate, there is a good argument to be made that the economic value is teetering towards negative
A comment on a post or an article on the internet has value ONLY if there are real people at the other end of the screen reading it and getting influenced by it
But if you flood the internet with AI slop comments and articles, can you be 100% sure that all the current users of your app will stick around?
If there are no people to read your articles, your article has zero economic value
Perhaps economic value can come from a more educated and skilled workforce if they're using AI for private tuition (if it can write as well as us, it can provide a bespoke syllabus, feedback etc.)
Automation over teaching sounds terrible in the long run, but I could see how learning languages and skills could improve productivity. The "issue" might be that there's more to gain in developing nations with poor education standards, so while capital concentrates further in the US because they own the tech, geographical differences in labour productivity shrink.
What is the economic value of a wheel? If we flood the market with wheels, we’re going to need far fewer sleds and horses. Pretty soon, no one might need horses at all — can you imagine that?
That first sentence is a tautology. The second to last sentence is one of those things it’s ok to think until you learn better, but don’t say that in polite company.
Did AI write all these comments? Is AI turning me into a conspiracy theorist? I keep seeing "AI is like having a team of 3-4 people" or "doing the work of 3-4 people" type posts everywhere lately, like it's some kind of meme. I don't even know what it means. I don't think you're saying you have 4x'd your productivity? But maybe you are?
Best I can tell, it’s resulting in less churn, which isn’t the same as work getting done faster. Maybe it’s a phenomenon unique to engineering, but what I’m observing isn’t necessarily work getting done faster — it’s that a smaller number of people are able to manage a much larger footprint because AI tools have gotten really good at relaying existing knowledge.
Little things that historically would get me stuck as I switch between database work, front-end, and infrastructure are no longer impeding me, because the AI tools are so good at conveying the existing knowledge of each discipline. So now, with a flat org, things just get done — there’s no need for sprint masters, knowledge-sharing sessions, or waiting on PR reviews. More people means more coordination, which ultimately takes time. In some situations that’s unavoidable, but in software engineering, most of the patterns, tools, and practices are well established; it’s just a matter of using them effectively without making your head explode.
I think this relay of knowledge is especially evident when I can’t tell an AI comment from a human one in a technical discussion — a kind of modern Turing Test, or Imitation Game.
Anecdotally, our company's next couple of quarters are projected to be a bloodbath. Spending is down everywhere, nearly all of our customers are pushing for huge cuts to their contracts, and in turn literally any costs we can jettison to keep jobs are being pushed through. We're hearing the same from our customers.
AI has been the only new investment our company has made (half-hearted at that). I definitely get the sense that everyone is pretending things are fine to investors while playing musical chairs.
Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons: On one hand, the economy is legitimately growing and shares are becoming more valuable. But on the other hand, people and corporations could be cutting spending en masse so there's extra cash to flood the stock markets and drive up prices regardless of future earnings.
I work for one of the largest packaging companies in the world. Customers across the board in the US are cutting back on how much packaging they need due to presumably lower sales volume. Make of that information what you will.
My eBay sales have been way down this year too, and so far Q4 is not looking good at all. People are cutting back across the board, and it's going to be very ugly once Wall Street stops plugging its ears and covering its eyes.
This is an indicator that sits very close to the point of sale. If you can share and don't mind sharing, how did whatever you saw during 2020/2021 correlate with retail sales?
tariffs could be an explanation.
sometimes volume and total $ are not the same.
car manufacturers, right at the beginning of covid, started cutting orders of components from their suppliers, thinking that demand was going to drop due to a covid-induced recession.
Guess what happened next?
Short the stock market then if you feel a recession is coming
> Back in my economics classes at college, a professor pointed out that a stock market can go up for two reasons
Reason #1 is lower interest rates, which increase the present value of future cash flows in DCF models. A professor who does not mention that does not know what they are talking about.
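The rate mechanism described here can be sketched in a few lines. This is a toy DCF with made-up cash flows and rates, purely for illustration: the same stream of future cash is worth more today when the discount rate falls.

```python
# Toy DCF: present value of a stream of future cash flows.
# All numbers here are illustrative, not from the thread.

def present_value(cash_flows, rate):
    """Discount annual cash flows back to today at a fixed rate."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

flows = [100] * 10  # ten years of $100/year in hypothetical cash flows

pv_high = present_value(flows, 0.05)  # ~772 at a 5% discount rate
pv_low = present_value(flows, 0.03)   # ~853 at a 3% discount rate
# Lower rates -> higher present value -> higher "fair" share prices,
# with no change at all in the underlying cash flows.
```

Nothing about the business changed between the two lines; only the discount rate did.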
That's a subset of "shares becoming more valuable"
Likewise on all the downward business signals at my employer. I was thankfully in school during '09, but this easily feels like the biggest house of cards I have ever experienced as an adult.
Econ professor expounding on the stock market, lol. As someone with a couple of dumb fancy Econ degrees whose career in the stock market has been quite good, this comment made me laugh.
Why? Everyone knows the stock market can be irrational and disconnected from the economy.
The fact that this is even plausibly true means that the non-AI (and maybe even non-tech) American economy has been stagnating for years by now.
Almost all my money goes to mortgage, shit from China, food, and the occasional service. It does make me wonder sometimes how it all works. But it's been working like this for a long time now.
Real estate. The US economy floats on the perpetually increasing cost of land. That's where your mortgage money goes: to a series of financial instruments that allow others to benefit from the eternally rising value of "your" property.
Why are you buying shit from China?
Prices going up 20-25% due to excessive money printing, and hence high inflation, during the last administration don't help.
Federal taxes are #1 expense for most people. People forget to think about it because of direct deduction.
Pretty sure the stagnation has a cause beginning in 2025, and it has to do with things like Canada refusing to buy ALL American liquor in retaliation, and China refusing to buy ANY soybeans in retaliation. In retaliation for what, you might ask? I leave that as an exercise for the reader. If you are unable to answer that question honestly to yourself, you need to seriously consider that cognitive bias might be preventing you from thinking clearly.
Also EU moving away from US weapons. We're destroying all our exports.
I think the reason could also be a lot of countries and companies started to diversify more and depend less on the USA
The tariff wars certainly didn't help.
depends on which side of the tariffs an economy happens to be, and where geopolitically.
AI, or whatever a mountain of processors churning all of the world's data will be called later, still has no use case other than total domination, for which it has brought a kind of lame service to all of the totally dependent go-along-to-get-along types, but nothing approaching an actual guaranteed answer for anything useful and profitable. lame, lame, infinitely lame tedious shit that has prompted most people to stop even trying, and so a huge vast amount of genuine human inspiration and effort is gone
The fundamentals behind the 2008 financial crisis didn't come from nowhere and the "solution" to 2008 did little more than kick the can down the road.
I think that this concern is valid but there are deeper more foundational issues facing the US that have led to the sum of the issues mentioned in the post.
We can say that if this rotten support beam fails the US is in trouble but the real issue is what caused the rot in the first place.
What are these foundational issues? Whatever issues are there, I feel they are more in other big economies.
Remember, when a bear is chasing you and some others, you don't have to be faster than the bear to escape.
> What are these foundational issues?
The effective removal of regulations via winner bribes and a lack of enforcement plus the explicit removal of regulations, to reduce corruption and insider trading. AI is not required to create the systemic exploitations and they are far more efficient at extracting value than any AI system.
I don't understand the metaphor in this case.
If there's another European debt crisis (for example) does the bear eat Europe and any US issues go away?
I think a better metaphor for interconnected economies is that of chains always breaking at their weakest link.
Sure, well done, your link in the chain didn't break… but your anchor is still stuck on the bottom of the ocean and you're on your spare anchor (with a shorter chain) until you get back to harbour.
I will repeat my comment from 70 days ago:
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised, which, in the realm of exciting new field of academic research is pretty low-stakes all things considered... to being terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" of the bet is really starting to feel like there is no backup. With the cessation of EV tax credits, the slowdown in infra spending, healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals, battery tech, EVs, solar, then of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez does it feel like the setup for a big shift in economic power especially with change in US foreign policy.
I'll offer two counter-points, weak but worth mentioning. With regard to China, there's no value to extract by on-shoring manufacturing -- many verticals are simply uninvestable in the US because of labor costs, and the gap in cost to manufacture is so large it's not even worth considering. I think there's a level of introspection the US needs to contend with, but that ship has sailed. We should be forward-looking in what we can do outside of manufacturing.
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our workstreams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits à la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in the late '90s. I converted from vim to VSCode + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
> which makes everyone more efficient and productive
There is something bizarre about an economic system that pursues productivity for the sake of productivity even as it lays off the actual participants in the economic system
An echo of another commenter who said that it's amazing that AI is now writing comments on the internet
Which is great, but it actively makes the internet a worse place for everyone and eventually causes people to simply stop using your site
Somewhat similar to AI making companies more productive - you can produce more than ever, but because you’re more productive, you don’t hire enough and ultimately there aren’t enough people to consume what you produce
> many verticals are simply uninvestable in the US because of labor costs and the gap of cost to manufacture is so large it's not even worth considering.
I think this is covered in a number of papers from think tanks related to the current administration.
The overall plan, as I understood it, is to devalue the dollar while keeping the monetary reserve status. A weaker dollar will make it competitive for foreign countries to manufacture in the US. The problem is that if the dollar weakens, investors will fly away. But the AI boom offsets that.
For now it seems to work: the dollar lost more than 10% year to date, but the AI boom kept investors in the US stock market. The trade agreements will protect the US for a couple years as well. But ultimately it's a time bomb for the population, that will wake up in 10 years with half their present purchasing power, in non dollar terms.
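As a back-of-envelope check on the "half their present purchasing power" claim (my arithmetic, not the author's): halving over a decade implies a sustained annual decline of roughly 7%, because depreciation compounds.

```python
# Illustrative compounding: what steady annual depreciation roughly
# halves purchasing power over ten years? The 7% rate is a
# hypothetical input for the arithmetic, not a forecast.
rate = 0.07
remaining = (1 - rate) ** 10  # 0.93 ** 10, about 0.484
# i.e. roughly half of today's purchasing power after a decade.
```

The dollar's ~10% drop in a single year would overshoot that pace if it continued; the claim only needs something like 7% a year on average.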
I think an interesting way to measure the value is to argue "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years more to grow accustomed to Google than LLMs, it's that having a low-confidence answer or an excessively florid summary of a document are not really that useful.
Another thing to note about China: while people love pointing to their public transit as an example of a country that's done so much right, their (over)investment in this domain has led to a concerning explosion of local-government debt obligations, which isn't usually well represented in the overall debt-to-GDP ratios many people quote. I only mention that to note that things in China are not all the propaganda suggests. The big question everyone is asking is: what happens after Xi? Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money?" LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing into this right now in order to properly price how much value they'll be generating in five years. Except it's critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who Coreweave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
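The duplicate-accounting point can be made concrete with entirely made-up numbers: the same dollars, passed around a circle of vendors and investors, show up once per hop when headline capex announcements are added together.

```python
# Toy model of circular AI investment flows. The company names echo
# the comment above; the dollar amounts are invented for illustration.
announcements = [
    ("Oracle", "Nvidia", 100),   # chip purchase
    ("Nvidia", "OpenAI", 100),   # equity investment of the proceeds
    ("OpenAI", "AMD", 100),      # another investment downstream
]

# Naively summing every announcement triples the apparent spend:
headline_total = sum(amount for _, _, amount in announcements)  # 300
# If the same $100 is cycling through the loop, net new capital
# entering the system is far smaller than the headline total.
```

The sketch isn't an accounting of the real deals, just a shape: the more circular the flows, the more the headline sum overstates net investment.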
You could go back to vim with claude running in another terminal window
> We should be forward looking in what we can do outside of manufacturing.
For example?
> I will repeat my comment from 70 days ago:
>> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast
lol, I read this a few hours ago, maybe without enough caffeine, but I read it as "my comment from 70 *years* ago" because I thought you had somehow been at the Dartmouth Summer Research Project on Artificial Intelligence workshop in 1956!
I somehow thought "Damn... already back there, at the birth of the field they thought it was too fast". I was entirely wrong and yet in some convoluted way maybe it made sense.
> but geez does it feel like the setup for a big shift in economic power
It happened ten years ago, it's just that perceptions haven't changed yet.
Gotta thank AI — it's keeping my portfolio from collapsing, at least for now. But yeah, I totally see the point: AI investment might be one of the few things holding up the U.S. economy, and it doesn't even have to fail spectacularly to cause trouble. Even a "slightly disappointing" AI wave could ripple across markets and policy.
Dont forget to factor in the 10% haircut the USD has taken since last November.
This might be a good time to reduce your exposure to the stock market?
I wonder what would be a good counter-investment if one thinks AI is in a bubble which is just about to burst.
Maybe consumer staples (Walmart, Pepsi etc.)? Dollar stores?
AI is a massive bubble. Nvidia invests in OpenAI, which buys Nvidia chips; Nvidia is just doing round-trip transactions.
> more than a fifth of the entire S&P 500 market cap is now just three companies — Nvidia, Microsoft, and Apple — two of which are basically big bets on AI.
These 3 companies have been heavyweights since long before AI. Before AI, you couldn't get Nvidia cards due to crypto, or gaming. Apple is barely investing in AI. Microsoft has been the most important enterprise tech company for my entire lifetime.
Nvidia market cap has increased about 10x since the crypto-shortage years. It wasn't small before, but there's a big difference between ~1% of the market and ~10% of the market in terms of systemic risk.
Also, as of last year about 80% of their revenues were from data center GPUs designed specifically for "AI", and that's undoubtedly continuing to grow as a share of their revenues.
You’re missing the point. Whether one buys it or not to one side, the author is saying those companies, whatever their history have pushed a significant amount of their … chips into a bet on AI.
You cannot really compare Nvidia's pre-AI profit and market cap. As 'far' back as 2023, Nvidia was ~$15 USD per share (split-adjusted).
Microsoft's share price has more than doubled since 2023.
One of the most frustrating things regarding the potential AI bubble was a very smart researcher being incredibly bullish on AI on Twitter because, if you extrapolate graphs measuring AI's ability to complete long-duration tasks (https://metr.org/blog/2025-03-19-measuring-ai-ability-to-com...) or other benchmarks, then by 2026 or 2027 you've basically invented AGI.
I'm going to take his statements at face value and assume that he really does have faith in his own predictions and isn't trying to fleece us.
My gripe with this statement is that this prediction is based on proxies for capability that aren't particularly reliable. To elaborate, the latest frontier models score something like 65% on SWE-bench, but I don't think they're as capable as a human that also scored 65%. That isn't to say that they're incapable, but just that they aren't as capable as an equivalent human. I think there's a very real chance that a model absolutely crushes the SWE-bench benchmark but still isn't quite ready to function as an independent software engineering agent.
So a lot of this bullishness basically hinges on the idea that if you extrapolate some line on a graph into the future, then by next year or the year after all white-collar work can be automated. Terrifying as that is, this all hinges on the idea that these graphs, these benchmarks, are good proxies.
And if they aren't, oh wow.
There's a huge disconnect between what the benchmarks are showing and what those of us using LLMs day-to-day are experiencing. According to SWE-bench, I should be able to outsource a lot of tasks to LLMs by now. But practically speaking, I can't get them to reliably do even the most basic of tasks. Benchmaxxing is a real phenomenon. Internal private assessments are the most accurate source of information that we have, and those seem to be quite mixed for the most recent models.
How ironic that these LLMs appear to be overfitting to the benchmark scores. Presumably these researchers deal with overfitting every day, yet can't recognize it right in front of them.
> very smart and intelligent researcher being incredibly bullish on AI on Twitter
A bit offtopic but as time goes by, I believe we can be very intelligent in some aspects and very, very naive and/or wrong in other aspects.
>> by next year or the year after all white-collar work can be automated
Work generates work. If you remove the need for 50% of the work then a significant amount of the remaining work never needs to be done. It just doesn't appear.
The software that is used by people in their jobs will no longer be needed if those people aren't hired to do their jobs. There goes Slack, Teams, GitHub, Zoom, Powerpoint, Excel, whatever... And if the software isn't needed then it doesn't need to be written, by either a person or an AI. So any need for AI Coders shrinks considerably.
You mean Julian Schrittwieser (collaborator on AlphaGo and first author on MuZero)?
https://www.julian.ac/blog/2025/09/27/failing-to-understand-...
Even in the unlikely event AI somehow delivers on its valuations and thereby doesn't disappoint, the implied negative externalities on the population (mass worker redundancy, inequality that makes even our current scenario look rosy, skyrocketing electricity costs) means that America's and the world's future looks like a rocky road.
I think part of the problem is that the variance (economically) of AI delivering is so wide that even that's hard to predict. E.g., is end-stage AI:
- Where we have intelligent computers and robots that can take over most jobs
- A smarter LLM that can help with creative work but limited interaction with the physical world
- Something else we haven't imagined yet
Depending on where we end up, the current investment could provide a great ROI or a negative one.
I don't think many businesses are at the stage where they can actually measure whatever AI is delivering.
At one business I know, they fired most senior developers and mandated that junior developers use AI. Stakeholders were happy, as they could finally see their slides in action, but at the cost of the code base becoming unreadable and the remaining senior employees leaving.
So on paper, everything is better than ever: cheap workers deliver work fast. But I suspect that in a few months' time it will all collapse.
Most likely they'll be looking to hire for complete rewrite or they'll go under.
In light of this scenario, AI is a false economy.
When standard of living increases significantly, inequality often also increases. The economy is not a zero sum game. Having both rising inequality and rising living standards is generally the thing to aim for.
Both parties seem to agree we should build more electric capacity; that does seem like an excellent thing to invest in. Why aren't we?
As the cost of material goods decreases, they will become near free. IMO demand for human-produced goods and experiences will increase.
Yes, if AI proves to be a 10x productivity booster, it probably means most people will be unemployed
Electricity was a 10x productivity boost, just over a way longer timespan. We're just speedrunning this.
Also, what happens to those employed when they each have 10 people trying to take their job. It’s a downward spiral for employment as we know it.
The plow was a 10x productivity booster. Guess what happened next?
7 replies →
Not necessarily. If $x is enough to get you 10x more software engineering effort, people may be willing to increase their spending on software engineering rather than decrease it.
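The argument above is essentially a price-elasticity point: when engineering effort gets cheaper, total spending can rise rather than fall if demand expands enough. A toy sketch of the idea (all numbers below are hypothetical, chosen only to illustrate the mechanism):

```python
# Toy model of the elasticity argument: effort gets 10x cheaper, but if
# demand grows by more than 10x (because projects that previously didn't
# pencil out now do), total spending on engineering goes UP.
# All figures are made-up illustrations.

def total_spend(price_per_unit: float, units_demanded: float) -> float:
    """Total spending = price per unit of effort x units of effort bought."""
    return price_per_unit * units_demanded

before = total_spend(price_per_unit=10.0, units_demanded=100.0)  # 1000.0
# Price drops 10x; suppose demand rises 15x.
after = total_spend(price_per_unit=1.0, units_demanded=1500.0)   # 1500.0
print(before, after)
```

Whether real-world demand for software is actually that elastic is the open question; the sketch only shows the arithmetic is possible.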
"skyrocketing electricity costs"
You said it right here. No one is going to give up energy at such a cheap rate anymore. Those days are over. Darkness for the US is coming.
Solar is extremely cheap and battery costs are dropping quickly; IMO you may see US neighborhoods, especially rural ones, disconnecting from the grid and rolling their own solutions.
This China rare-earth thing may slow the battery price drop somewhat, but not for long, because plenty of chemistries don't rely on rare earths, and there will soon be plenty of old EV packs with some life left in them for use in grid storage.
the cost of solar power and storage is decaying exponentially
scarcity isn't real anymore, it is enforced politically for the benefit of the owning class
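For what it's worth, "decaying exponentially" just means a roughly constant percentage decline each year, which compounds quickly. A minimal sketch (the 10%/year rate and $1.00/W starting cost below are illustrative assumptions, not measured figures):

```python
# Exponential cost decline: a fixed fractional drop each year, compounded.
# The 10%/year rate and $1.00/W starting point are illustrative assumptions,
# not real market data.

def projected_cost(cost_today: float, annual_decline: float, years: int) -> float:
    """Cost after `years` if it falls by fraction `annual_decline` each year."""
    return cost_today * (1.0 - annual_decline) ** years

# Hypothetical: $1.00/W today, declining 10% per year, 10 years out.
print(round(projected_cost(1.00, 0.10, 10), 3))  # 0.349
```

Even a modest annual decline cuts cost by roughly two thirds over a decade, which is why small changes in the assumed rate matter so much for forecasts.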
4 replies →
I personally hope AI doesn't quite deliver on its valuations, so we don't lose tons of jobs, but instead of a market crash, the money will rotate into quantum and crispr technologies (both soon to be trillion dollar+ industries). People who bet big on AI might lose out some but not be wiped out. That's best casing it though.
What would quantum technology actually deliver?
Other than collapsing the internet when every pre-quantum algorithm is broken (nice jobs for the engineers who need to scramble to fix everything, I guess) and even more uncrackable comms for the military. Drug and chemistry discovery could improve a lot?
And to be quite honest, the prospect of a massive biotech revolution is downright scary rather than exciting to me because AI might be able to convince a teenager to shoot up a school now and then, but, say, generally-available protein synthesis capability means nutters could print their own prions.
Better healthcare technology in particular would be nice, but rather like food, the problem is that we already can provide it at a high standard to most people and choose not to.
4 replies →
Quantum has already peaked in hype. It doesn't scale, like at all. It can't be used for abstract problems. We don't even know the optimal foundation on which to start developing. It is now in fusion territory: fusion is also objectively useful, with immense depth of research potential; it's just that humans are too dumb for it for now, so we will do it at scale centuries later.
Crispr would clash with the religious fundamentalists slowly coming back to power in all western countries. Potentially it will even be banned, like abortion.
I like this, because I hate the idea that we should either be rooting for AI to implode and cause a crash, or for it to succeed and cause a crash (or take us into some neo-feudal society).
seriously, LLMs are cool but if this level of investment was happening around crispr, longevity and other health tech I would be 1000X more excited.
"quantum" and "biotech" have been wishful thinking based promises for several years now, much like "artificial intelligence"
we need human development, not some shining new blackbox that will deliver us from all suffering
we need to stop seeking redemption and just work on the very real shortcomings of modern society... we don't even have scarcity anymore but the premise is still being upheld for the benefit of the 300 or so billionaire families...
“Elections have consequences.” … Managing a country is a big and critical job. It's hard to imagine it was handed over to a bunch of people who are barely qualified and hardly care about the future of America.
Seems there was a typo, I think you meant to say "hard to imagine it almost was"?
> In those intervening years, a bunch of AI companies might be unable to pay back their debts.
Dumb question: isn't a lot of the current investment in the form of equity deals and/or funded by existing tech company profit lines? What do we actually know about the debt levels and schedules of the various actors?
Google, Meta, and Microsoft are funding AI out of their existing profits so they will probably be fine. The others may be getting GPUs in exchange for equity but they still have to pay real money for the datacenters, generators, etc. That real money is borrowed and they would default in case of a crash. Potentially hundreds of billions of defaults.
Huh? Surely most of the investment in AI labs is equity investment, not debt (so it can't be defaulted on).
5 replies →
We'll find out
Borrowing money got expensive... the Fed rate is largely responsible for that, and there's been a big push to adjust it. As it stands, during and since COVID a lot of people maxed out their credit or significantly increased their debt for a number of reasons, from home/household needs during the shutdowns to the increased cost of living (starting with overpriced groceries).
This has had an effect. A lot of people are strapped and no longer participating in larger purchases beyond basic needs, as just those have gone up in price so much relative to income. Initially a lot of it was just greed and taking advantage of the pandemic as an excuse; now people are genuinely stretched thin.
Betting everything on AI is like putting all your eggs in a robot's basket.
What is the quote? “The second phase of bubbles is the financialization.”
There is only so much juice you can squeeze out of fuzzy probability word chains. We hit peak diminishing returns on LLM tech already.
> And yet despite those warning signs, there has been nothing even remotely resembling an economic crash yet.
Well... define "economic crash."
The outputs no longer correlate with the inputs. Is it possible it's "crashed" already? And is now running in a faulty state?
This is how I'm starting to view many of these things. It's just that the metrics we use to evaluate the economy are getting out of sync. For instance, if "consumer sentiment is at Great Recession levels", why do we need some other indicator to accept that there's a problem? Isn't that a bad thing on its own?
"Bad" is a judgment call. Trump approval ratings haven't dipped that far, so Congressional Republicans won't dare abandon him and there's not much political will for change.
It might change if we get into millions of foreclosures like the great recession and the pain really hits home. From what I can tell right now they're in wartime mode where they just need to buckle down until Trump wins and makes other countries pay for tariffs or something.
7 replies →
We're definitely not in a crash yet, but it does feel like we're on a roller coaster just tipping over the peak: unemployment is rising for the first time in a couple of years, there's basically no GDP growth apart from AI investment, and the yield curves look scary. The crash could come any second now, especially because tech earnings week is coming up and could indicate how much revenue, or lack thereof, the AI investment is bringing in.
So the crash is only official once Wall Street's exuberance matches the economy as perceived by its workforce? Is that a crash, or just a latent arrival of the signal itself?
Yeah; in general it's very difficult to detect the economy going off the rails _in real time_; it tends to be clearly visible only afterwards. It's entirely possible we're already past the point of no return; this cycle's equiv of the Lehman Brothers collapse (if there even is such a clear signal, and there isn't always) could happen tomorrow.
The US is uniquely suited to maximally benefit startups emerging in a new space, but also to maximally prevent startups from entering a mature space. No smart young person in the US matriculates into industries paved over by incumbents, as they wisely anticipate that they would be in an industry deliberately hamstrung by regulatory capture.
All growth is in AI now because that's where all the smartest people are going. If AI were regulated significantly, they'd go to other industries, and those industries would be growing faster (though likely not as much).
However, there is the broader point that AI is poised to offer extreme leverage to whoever wins the AGI race, justifying capex spending at such absurd margins.
Interesting. Are you advocating for deregulation? What do you think it should look like to encourage the next generation into more industries?
I think there should be regulation that protects individuals and limits the power of incumbents. There are many regulations that ostensibly protect individuals but only exist to empower incumbents.
It feels like the only other big near term tech bet is Meta and their glasses.
It would be pretty fun to see AI fizzle and AR glasses take off. Not a huge zuck/Meta fan but I really do appreciate the actual big bet on something else.
Waymo is a pretty big bet. Though it wouldn't be fun if unemployed people were to fall back on being Uber/Lyft drivers, only to get outcompeted by AI cars.
True true. Forgot about Waymo!
While the current iteration isn't enough to make me wear it constantly, if there are steady improvements for 5-10 years it could really go somewhere.
Bruh they are literally called “AI glasses”. If it succeeds, it’s a success for AI.
There are AI dishwashers and AI washing machines you can buy today. Are those a success for AI?
The question is this: At what point will the market and economy stop looking forward at what AI promises to do and start looking backwards at what it has done? We’ve had this technology for three years and what it has done for us amounts to very little more than pollute every form of communication with low value, mass produced drivel and destroyed the ability of teachers to evaluate their students. We have no new medicine, no new math, no new physics in any meaningful way. Yet all eyes are still on the carrot that we’ll never reach: AGI. When will we realize that even the name itself, artificial intelligence, is a lie? It’s just a database of most human knowledge with a very intuitive human language interface, but knowledge and intelligence are not the same thing. At some point, the world will be forced to acknowledge what little it has received in return for its misplaced faith.
Don't use P/E as an estimate of a bubble... profits often mean revert.
Use something closer to market value to GDP (with some adjustments). That is a much better estimate. John Hussman has imo the best of such metrics. Here's his thoughts from August: https://www.hussmanfunds.com/comment/mc250814/
And this image says a lot: https://www.hussmanfunds.com/wp-content/uploads/comment/mc25...
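For readers unfamiliar with metrics of this family, a market-value-to-GDP ratio is trivial to compute; the sketch below uses made-up placeholder figures, not Hussman's actual numbers or adjustments:

```python
# A "market value to GDP"-style ratio (the simplest member of this family
# is often called the Buffett indicator). Hussman's version applies further
# adjustments; this is only the bare ratio, with placeholder inputs.

def market_value_to_gdp_pct(market_cap_trillions: float,
                            gdp_trillions: float) -> float:
    """Total equity market value as a percentage of GDP."""
    return 100.0 * market_cap_trillions / gdp_trillions

# Hypothetical inputs: $60T total market cap vs $28T GDP.
print(round(market_value_to_gdp_pct(60.0, 28.0)))  # 214
```

The appeal over P/E is the denominator: GDP is far less mean-reverting than corporate profits, so the ratio is harder to flatter with a temporary margin boom.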
Yes. A lot now depends on when "growth" stops. But GDP is very hard to grow at a sustained rate high enough to justify higher valuations, even through industrial revolutions, assembly lines, transistors, and the internet.
At the end of the day, if you look at almost any government, roughly 2/3 of expenses go toward healthcare and education, costs that AI workflows are very likely to keep offsetting a larger and larger percentage of.
Can we still have a financial crisis from all this investment going bust because it might take too long for it to make a difference in manufacturing enough automation hardware for everyone? Yes.
But the fundamentals are still there: parents will still send their kids to some type of school, and people will trade goods in exchange for health services. That's not going to change. Neither will the need for robots in nursing homes; I think that assumption is safe to make.
What's difficult to predict is the change in adoption in manufacturing and repairs (be that repairing bridges or repairing your espresso machine), because that is more of a "3D" issue and hard to automate reliably (think about how many GPUs it would actually take today to get a robot to reason out and repair a hole in your drywall), given that your RL environments and training data needs grow exponentially. Technically, your phone should have enough GPU performance to do your taxes with a 3B model and a bunch of tools; eventually it'll even be better than you at it. But to run an actual robot with multiple cameras and the like doing troubleshooting and decision making... you're gonna need a whole 8x rack of GPUs for that.
And that's what makes it difficult to predict what's going to happen. The areas under the curve can vary widely. We could get a 1B AGI model in 6 months, or it could take 5 years for agentic workflows to fully automate everyone's taxes and actually replace 2/3 of radiology work...
Either way, while there's a significant chance of this transition to the automation age being rough, I am overall quite optimistic given the fundamentals of what governments actually spend the majority of their money on.
For the vast majority of US taxpayers, automating their taxes is feasible right now and the obstacles are political not technical.
I wouldn't even call it political. It's financial, and should be criminal. The people who are elected to represent us are just taking bribes and being paid off to allow corporations to screw us over.
4 replies →
The fundamentals are not there.
Talk to an educator. Education is being actively harmed by AI. Kids don’t want to do any difficult thinking work so they aren’t learning. (Literally any teacher you talk to will confirm this)
AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy. Three things that are absolutely essential for medicine. Also everything for healthcare needs regulatory approval so costs go up and flexibility goes down. We’re ten years away from any AI for medicine being cost effective.
Having an AI do your taxes is absurd. They regularly hallucinate. I 100% guarantee that if you do your taxes with AI you won't pass an audit. AI literally can't count. You'd be better off asking it to vibecode a replacement for TurboTax. But again, the product won't be AI; it will be traditional code.
Trying for AGI down the road of an LLM is insanity sauce. It’s a simulated language center that can’t count, it can’t do systems thinking. It can’t cite known facts. We’re not six months away we’re a decade or a “cost effective fusion” distance (defined as perpetually 20 years in the future from any point in time)
There are at least six Silicon Valley startups working on AGI. Not a single one of them has published an architecture strategy that might work. None of the “almost AGI” products that have ever come out have a path to AGI.
Meh is the most likely outcome. I say this as someone who uses it a lot for things it is good at.
> AI in medicine is challenging because AI is bad at systems thinking, citation of fact and data privacy.
The main question is whether humans are better than that. I have had experiences with doctors: one gave a prescription of X mg; I asked why; he said because some study said so; I went home, pulled the study, and it said XX mg. Doctors can make things up all the time without much consequence, and likely do. For AI, corps and the community can do all kinds of benchmarking and evaluation at industrial scale.
This is incorrect actually. Largest spending is usually welfare and health, education is pretty small.
If you include local governments, then the education spending percentage gets higher, but still nothing close to healthcare.
I think if there's a rational reasoning behind Trump unleashing ICE and the national guard on the domestic population, this must be it: "the economy is doing really bad, and we need a smokescreen so people won't talk about it."
Hmmm, kinda ties into the whole problem that well-off/happy people aren't particularly eager to chant "foreigners out", but when they're desperate they'll take any explanation for their misery they can get their hands on that sounds workable (because no, you can't go up to a billionaire and just "take all their stuff", but you CAN beat up a foreigner or other disadvantaged person who is worse off than you).
I think another reason for the recent global rise of anti-immigration parties is also that the relative economic value of immigrants (as unskilled labor) has gone down, and the "costs" (cultural/language friction) have become more visible.
2 replies →
This is hard-paywalled.
https://archive.ph/fOHhx
This is still paywalled.
Huh, got in fine on my phone, which has the weaker paywall workaround.
Read to the point where it says subscribe to see the rest.
1 reply →
That's in any case not good. If so much depends on AI in people's minds, what about the other sectors?
Reminder: If you're going to feel doomer about how tech capex represents like nn% of US GDP growth, you should do some research into what percentage of US GDP growth, especially since 2020, has been the result of government printing. Arguably, our GDP growth right now is more "real" than the historical GDP growth numbers between 2020-2023, but all of it is so warped by policy that its hard to tell what's going on.
We're in extremely unprecedented times. Sometimes maybe good, sometimes maybe shit. The old rules don't apply. Etc.
separate from this article, I don't have a very high opinion of the author. he has an astonishing record of being uninformed and/or just plain wrong on everything I've ever seen him write about.
but as far as this article goes, "tech capex as a percentage of GDP growth" is an incredible cherry-picking of statistics to create a narrative. when tech became a bloodbath starting in 2022, the rest of the economy continued on strong. all the way until 2025, the rest of the economy was booming while tech layoffs and budget cuts after covid were exploding. so starting that chart in early 2023, when tech had bottomed out (compared to the rest of the economy), is misleading. tech capex as a percentage of overall GDP has been consistently rising since 2010 - https://gqg.com/highchartal-paper-large-tech-capex-spend-as-... - and this is obviously related to the advent of public cloud computing more than anything. the reason this chart appears to clash with the author's chart is that the author's chart specifically calls out the percentage of GDP growth, not overall GDP. so the natural conclusion is that while tech has been in borderline recessionary conditions since 2022, it is now becoming stable (if not recovering), while the rest of the economy, which didn't have the post-covid pullback (nor the same boom during covid, of course), is now having issues largely due to geopolitics and global trade.
is there an AI bubble? who cares. it's not as meaningful to the broader economy as these cherry-picked stats imply. if it's a bubble, it represents maybe 0.3% of GDP. no one would be screaming from the mountaintops about a shaky economy and a bubble if that same 0.3% was represented by a bubble in the restaurant industry or manufacturing. in fact, in recent years, those industries DID have inflationary bubbles, and it was treated as a positive thing for the most part.
I think a lot of this overanalysis and prodding for flaws in tech is generally an attempt at schadenfreude hoping that tech becomes just another industry like carpentry or plumbing. in particular, hoping for a scenario where tech is not as culturally impactful as it is today. because people are worried and frustrated about the economy, don't understand the value of tech, and hope it stops sucking up so much funding and focus by society in general.
they're not 100% wrong to be untrusting or skeptical of tech. the tech industry hasn't always been the best steward of the wealth and power it possesses. but they are generally wrong about the valuations or impact of tech on the economy, talking as if the people spending all this money are clueless. the stock market fell 900 points on friday, wiping out over $1 trillion in value over the course of a couple hours, yet the hundreds of billions invested in datacenters is supposedly the sign of impending economic doom.
is the economy good? I don't think it's doing great, but it has little to do with AI one way or another. "AI" is just another trend of making technology more accessible to the masses. no scarier, more complicated, or more impactful than microcomputers, DSL, cellular phones, or youtube. and while the economy crashed in 2008, youtube and facebook did well. yet there was none of this dooming about tech specifically, simply because the tech industry wasn't as controversial at the time.
He is a partisan hack. During the election last year he consistently posted that gdp growth was real, the economy was booming, and it was all thanks to Biden/Harris. I called him out on it on Twitter, and he was unabashed about being a partisan propagandist. Not surprisingly, now that the politics have changed, the history changes.
Anything he says on any topic should be treated as suspect, and probably best ignored.
The person you're replying to also acknowledged GDP was growing despite the tech layoffs. Is your assertion that GDP wasn't growing in 2024? If so, I'd love to see any evidence.
3 replies →
There's a lot of people who can only process their own failures by assuming that everyone and everything must also, eventually fail; that anything successful is temporary and "not real". And there's a lot of down people in the tech industry right now; we're in a recession, after all.
There's also a significant number of people (e.g. Doctorow) who have made their entire brand on doomerism; and whether they actually believe what they say or have just become addicted to the views is an irrelevant implementation detail.
The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting. Its just an excuse for people to not read the article, go straight to the comments, and parrot the same thing they've parroted twenty times.
> The anti-AI slop that dominates HackerNews doesn't serve anything productive or interesting.
To you. I find the debate quite valuable, as there is a wide open future and we're in the midst of figuring out where "here" is.
2 replies →
You are way too nice to the author. If I were you, I'd omit the fake empathy, which dilutes your substantial points. The author is hallucinating worse than AI.
So what if other people downvote you for being too critical.
ha, I honestly don't have that strong an opinion of the author, because the few tidbits I've seen I didn't even read all the way through; the info on the surface was so flawed. this was the first article of theirs I've actually read, so I can't say they're malicious or hallucinating, because I haven't looked into why they hold the opinions they do. but I'm definitely not inclined to trust them, which is why I had to say that I've recognized the pattern of "Noah Smith" (I don't know who they are, where they work, nothing) seeming to just ship their own copy/paste of whatever trendy (and flawed) opinion is hot at the moment
You cannot eat output tokens, house or clothe yourself inside them, nor burn them to keep warm or generate locomotion.
Present language model AI is utterly incapable of being a prime economic driver.
Just repeating all the same links that are already being discussed around here for weeks.
How the AI Bubble Will Pop
https://news.ycombinator.com/item?id=45464429
etc
etc
I don't know about you, but it seems to me "AI" is already slightly disappointing to put it mildly.
[dead]
[flagged]
[flagged]
[flagged]
> which cuts down on the risk of a trump 2028 run
Ah, so I see we've entered the "normalizing the end of presidential term limits" part of the downward spiral. Maybe I need to accelerate my plans to get the fuck out of here.
We're already past the point where there is no meaningful notion of "normal" that actually impacts what happens in government. Normalizing things doesn't matter that much if people care so little that they elect someone who's done what Trump did his first time.
I mean, he's selling the hats, and I've seen some talking heads on the news say they'll look at ways for him to do it. The two-term limit is a kinda recent precedent, all things considered, so...
7 replies →
[flagged]
In any other decade, I'd scoff at the idea that widespread economic problems would be a net-benefit in averting something worse for the country.
... I miss those years.
Boomers have already agreed multiple times this century that businesses are not allowed to go bankrupt, for fear that their retirement portfolios might not stay juiced to the gills. So instead we bail everyone out on the taxpayer's dime and leave the debt for some poor schmuck in the future to figure out.
Trump cannot run again.
It (was) also settled precedent that he can't stop spending money required to be spent by Congress (settled during Nixon's term), but the supremes decided it's different now. The same goes for firing the heads of supposedly independent federal departments, which was meant to prevent presidential manipulation.
And the Supreme Court created presidential immunity out of nothing. For now the president has unchecked power, the conservative dream of a unitary executive.
This will all end when a Democrat is in power again. That is not a sarcastic exaggeration; one way they teed this up was with shadow-docket decisions like the Kavanaugh rule (ICE can arrest/kidnap you based on appearance). It's not a precedent, since it's shadow docket, so they can reverse it any time.
Yet.
Trump can do whatever nobody will stop him from doing. Who's going to stop him from running again?
He can if SCOTUS says he can
Why not? Because the constitution says so?
Has that previously been much of an impediment to Trump and other fascists throughout history?
In the normative sense of "another atrocity like this cannot occur", then yes.
However, your comment instead sounds like you are dismissing it as a non-concern... in which case I suggest you wake the heck up. We've had months now of seeing the President and his cabinet actively and willfully breaking federal and constitutional law, with the entire Republican legislature complicit.
It wouldn't even be the first time states tried to remove him from their ballots. [0]
[0] https://www.scotusblog.com/2024/03/supreme-court-rules-state...
Isn't Trump a much bigger problem?
I can’t help but think a lot of these comments are actually written by AI — and that, in itself, showcases the value of AI. The fact that all of these comments could realistically have been written by AI with what’s available today is mind-blowing.
I use AI on a day-to-day basis, and by my best estimates, I’m doing the work of three to four people as a result of AI — not because I necessarily write code faster, but because I cover more breadth (front end, back end, DevOps, security) and make better engineering decisions with a smaller team. I think the true value of AI, at least in the immediate future, lies in helping us solve common problems faster. Though it’s not yet independently doing much, the most relevant expression I can think of is: “Those who cannot do, teach.” And AI is definitely good at relaying existing knowledge.
What exactly is the utility of AI writing comments that seem indistinguishable from people? What is the economic value of a comment or an article?
At present rate, there is a good argument to be made that the economic value is teetering towards negative
A comment on a post or an article on the internet has value ONLY if there are real people at the other end of the screen reading it and getting influenced by it
But if you flood the internet with AI slop comments and articles, can you be 100% sure that all the current users of your app will stick around?
If there are no people to read your articles, your article has zero economic value
Perhaps economic value can come from a more educated and skilled workforce, if they're using AI for private tuition (if it can write as well as us, it can provide a bespoke syllabus, feedback, etc.).
Automation over teaching sounds terrible in the long run, but I can see why learning languages and skills could improve productivity. The "issue" here might be that there's more to gain in developing nations with poor education standards, so while capital concentrates further in the US because they own the tech, geographical differences in labour productivity shrink.
What is the economic value of a wheel? If we flood the market with wheels, we’re going to need far fewer sleds and horses. Pretty soon, no one might need horses at all — can you imagine that?
4 replies →
hell, it has negative economic value because of the opportunity costs of the electricity and water used to produce it.
That first sentence is a tautology. The second to last sentence is one of those things it’s ok to think until you learn better, but don’t say that in polite company.
Did AI write all these comments? Is AI turning me into a conspiracy theorist? I keep seeing "AI is like having a team of 3-4 people" or "doing the work of 3-4 people" type posts everywhere lately, like it's some kind of meme. I don't even know what it means. I don't think you're saying you have 4x'd your productivity? But maybe you are?
Best I can tell, it’s resulting in less churn, which isn’t the same as work getting done faster. Maybe it’s a phenomenon unique to engineering, but what I’m observing isn’t necessarily work getting done faster — it’s that a smaller number of people are able to manage a much larger footprint because AI tools have gotten really good at relaying existing knowledge.
Little things that historically would get me stuck as I switch between database work, front-end, and infrastructure are no longer impeding me, because the AI tools are so good at conveying the existing knowledge of each discipline. So now, with a flat org, things just get done — there’s no need for sprint masters, knowledge-sharing sessions, or waiting on PR reviews. More people means more coordination, which ultimately takes time. In some situations that’s unavoidable, but in software engineering, most of the patterns, tools, and practices are well established; it’s just a matter of using them effectively without making your head explode.
I think this relay of knowledge is especially evident when I can’t tell an AI comment from a human one in a technical discussion — a kind of modern Turing Test, or Imitation Game.
3 replies →