Comment by addisonj
10 hours ago
I will repeat my comment from 70 days ago:
> I was discussing with a friend that my biggest concern with AI right now is not that it isn't capable of doing things... but that we switched from research/academic mode to full value extraction so fast that we are way out over our skis in terms of what is being promised, which, in the realm of an exciting new field of academic research, is pretty low-stakes all things considered... to being terrifying when we bet policy and economics on it.
That isn't overly prescient or anything... it feels like the alarm bells started a while ago... but wow, the absolute "all in" nature of the bet is really starting to feel like there is no backup. With the cessation of EV tax credits, the slowdown in infrastructure spending, healthcare subsidies, etc., the portfolio of investment feels much less diverse...
Especially compared to China, which has bets in so many verticals: battery tech, EVs, solar, and of course all the AI/chips/fabs. That isn't to say I don't think there are huge risks for China... but geez does it feel like the setup for a big shift in economic power, especially with the change in US foreign policy.
I'll offer two counter-points, weak but worth mentioning. Regarding China: there's no value to extract by on-shoring manufacturing -- many verticals are simply uninvestable in the US because of labor costs, and the gap in manufacturing costs is so large it's not even worth considering. I think there's a level of introspection the US needs to contend with, but that ship has sailed. We should be forward looking in what we can do outside of manufacturing.
For AI, the pivot to profitability was indeed quick, but I don't think it's as bad as you may think. We're building the software infrastructure to accommodate LLMs into our workstreams, which makes everyone more efficient and productive. As foundational models progress, the infrastructure will reap the benefits à la Moore's law.
I acknowledge that this is a bullish thesis, but I'll tell you why I'm bullish: I'm basically a high-tech Luddite -- the last piece of technology I adopted was Google in 1996. I converted from vim to VS Code + Copilot (and now Cursor) because of LLMs -- that's how transformative this technology is.
I think an interesting way to measure the value is to ask "what would we do without it?"
If we removed "modern search" (Google) and had to go back to say 1995-era AltaVista search performance, we'd probably see major productivity drops across huge parts of the economy, and significant business failures.
If we removed the LLMs, developers would go back to Less Spicy Autocomplete and it might take a few hours longer to deliver some projects. Trolls might have to hand-photoshop Joe Biden's face onto an opossum's body like their forefathers did. But the world would keep spinning.
It's not just that we've had 20 years more to grow accustomed to Google than to LLMs; it's that a low-confidence answer or an excessively florid summary of a document is not really that useful.
Chatting with Claude about a topic is in another universe compared to Google search.
I default to Claude for almost everything where I want to know something. I don't trust Google's results because of how heavily they're weighted toward SEO. Being good at SEO is a separate skill set from producing good content.
The answers are not low-confidence, they cite sources, and Claude can do things that Google cannot. For example, I used Claude to design a syllabus for learning a technical domain, along with quizzes and test suites for verification. It linked to video series, books, and articles grouped by increasing complexity.
Is this really true re: "modern search"? Genuine question, because this is probably outside of my domain. I'm trying to think of industries that would be critically affected if we went from modern search to e.g. AltaVista/Yahoo/DogPile, and kind of coming up empty, except that it might be more difficult for companies that have perfected modern SEO/advertising to maintain the same level of reach -- but I don't think that's what you're alluding to?
You could say something similar about Google search about 5 years after its release too.
I think there's a bubble around AI, but I don't think I agree with this argument. Google search launched in 1998, and ChatGPT launched in 2022.
In 2001, if Google had gone under like a lot of .com bubble companies, I think the economic impact visible to people at the time would have been marginal. There was no Google News, Gmail, or Android, and the alternatives (AltaVista, Ask Jeeves, MSN Search) would have been enough. Google was a forcing function for the others to compete with the new paradigm or die trying. It wasn't itself an economic behemoth the way it is today.
I think if OpenAI folded today, you'd still have several companies in the generative AI space. To me, OpenAI's reminiscent of Google in the late 90s in its impact, although culturally it's very different. It's a general purpose website anyone with an internet connection can visit, deep industry competitors are having to adapt to its model to stay alive, and we're seeing signs of a frothy tech bubble a few years after its founding. People across industry verticals, government, law, and NGOs are using it, and students are learning with it.
One counterpoint to this would be that companies like Google reacted to the rise of social media with stuff like Google+, but to me the level to which "AI" is baked into every product at Google exceeds that play by a great margin. At most I remember a "post to plus" link at the top of GMail and a few hooks within the contact/email management views. In contrast, they are injecting AI results into almost every search I make and across almost every product of theirs I use today.
If you fast forward 20 years, I would be surprised if companies specializing in LLMs were not major players the way today's tech giants are. Some of the companies might have the same names, but they'll have changed.
> We should be forward looking in what we can do outside of manufacturing.
For example?
Another thing to note about China: while people love pointing to its public transit as an example of a country that's done so much right, the (over)investment in this domain has led to a concerning explosion of local government debt obligations, which isn't usually well represented in the overall debt-to-GDP ratios many people quote. I only mention that to say that things in China are not all the propaganda suggests they might be. The big question everyone is asking is: what happens after Xi? Even the most educated experts on the matter do not have an answer.
I, too, don't understand the OP's point about quickly pivoting to value extraction. Every technology we've ever invented was immediately followed by capitalists asking "how can I use this to make more money?" LLMs are an extremely valuable technology. I'm not going to sit here and pretend that anyone can correctly guess exactly how much we should be investing right now in order to properly price how much value they'll be generating in five years.

It's also critical to point out that the "data center capex" numbers everyone keeps quoting are, in a very real (and, sure, potentially scary) sense, quadruple-counting the same hundred billion dollars. We're not actually spending $400B on new data centers; Oracle is spending $nnB on Nvidia, who is spending $nnB to invest in OpenAI, who is spending $nnB to invest in AMD, who CoreWeave will also be spending $nnB with, who Nvidia has an $nnB investment in... and so forth. There's a ton of duplicate accounting going on when people report these numbers.
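To make the duplicate-counting point concrete, here is a toy sketch with entirely made-up deal sizes and a simplified chain (none of these figures or pairings are the real ones): summing announced bilateral deals in a circular investment loop produces a headline total far larger than the net new money actually entering the system.

    # Hypothetical illustration (made-up numbers): naively summing announced
    # bilateral deals in a circular investment chain overcounts the net new
    # capital actually entering the system.

    # Each tuple is (investor, recipient, announced_deal_size_in_billions).
    announced_deals = [
        ("Oracle", "Nvidia",    100),  # chip purchases for data centers
        ("Nvidia", "OpenAI",    100),  # equity investment
        ("OpenAI", "AMD",       100),  # equity plus chip commitments
        ("OpenAI", "CoreWeave", 100),  # compute contracts
        ("Nvidia", "CoreWeave", 100),  # equity investment
    ]

    # Headline number: every announced deal counted at face value.
    headline_total = sum(size for _, _, size in announced_deals)

    # Net outside money: spending by parties that are not themselves
    # recipients elsewhere in the chain; here only Oracle's spend is new.
    recipients = {recipient for _, recipient, _ in announced_deals}
    net_new = sum(size for investor, _, size in announced_deals
                  if investor not in recipients)

    print(f"Headline 'capex' total: ${headline_total}B")       # $500B
    print(f"Net new money entering the loop: ${net_new}B")     # $100B

In this toy version the headline sum is $500B while only $100B of outside money entered the loop; the real deals are far more tangled, but the arithmetic effect is the same.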
It doesn't grab the same headlines, but I'm very strongly of the opinion that there will be more market corrections in the next 24 months, overall stock market growth will be pretty flat, and by the end of 2027 people will still be opining on whether OpenAI's $400B annual revenue justifies a trillion dollars in capex on new graphics cards. There's no catastrophic bubble burst. AGI is still only a few years away. But AI eats the world nonetheless.
My point is not that value extraction wouldn't happen; my point is simply that, in addition to the value extraction, we also made other huge shifts in economic policy that, taken together, really seem to put us on a path towards an "AGI or bust" situation in the future.
Is that a bit hyperbolic? Isn't this just the same as the dotcom and housing bubbles before, where we pivoted a bit too hard into a specific industry? Maybe... but I'm also not sure it would be wise to assume past results will indicate future returns with this one.
> but geez does it feel like the setup for a big shift in economic power
It happened ten years ago, it's just that perceptions haven't changed yet.