Comment by khafra
5 months ago
Yudkowsky just mentioned that even if LLM progress stopped right here, right now, there are enough fundamental economic changes to provide us a really weird decade. Even with no moat, if the labs are in any way placed to capture a little of the value they've created, they could make high multiples of their investors' money.
Like what economic changes? You can make a case that people are 10% more productive in very specific fields (programming, perhaps consultancy, etc.). That's not really an earthquake; the internet/web was probably way more significant.
LLMs are fundamentally a new paradigm, it just isn't distributed yet.
It's not like the web was suddenly just there; it came slowly at first, then everywhere at once, and the money came even later.
The LLMs are quite widely distributed already, they're just not that impactful. My wife is an accountant at a Big 4 firm and they're all using them (everyone on Microsoft Office is probably using them, which is a lot of people). It's just not the earth-shattering tech change CEOs make it out to be, at least not yet. We need order-of-magnitude improvements in things like reliability, factuality and memory for the real economic efficiencies to come, and it's unclear to me when that's gonna happen.
9 replies →
Government and healthcare workers have been using AI for notes for over a year in Louisiana; an additional anecdote to sibling.
It's a force multiplier.
Think of having a secretary, or ten. These secretaries are not as good as an average human at most tasks, but they're good enough for tasks that are easy to double check. You can give them an immense amount of drudgery that would burn out a human.
What drudgery, though? Secretaries don't do a lot of drudgery. And a good one will see tasks that need doing that you didn't specify.
If you're generating immense amounts of really basic make work, that seems like you're managing your time poorly.
1 reply →
Very limited thinking. AI is a tool.
It's an echo chamber.
It is - what? - the fifth anniversary of "the world will be a completely different place in 6 months due to AI advancement"?
"Sam Altman believes AI will change the world" - of course he does, what else is he supposed to say?
It is a different place. You just haven't noticed yet.
At some point fairly recently, we passed the point at which things that took longer than anyone thought they would take are happening faster than anyone thought they would happen.
Yep totally agree. It will also depend who captures the most eyeballs.
ChatGPT is already my default first place to check something, where it was Google for the previous 20+ years.
I use it for all kinds of unique things, but ChatGPT is the last place I look for facts.
Eyeballs aren’t enough, though. Unlike Google, ChatGPT is very expensive to run. It’s unlikely they can just slap ads on it like Google did.
Inference costs will keep dropping. The stuff the average consumer does will be trivially cheap. More stuff will move on device. The edge capabilities of these models are already far beyond what the average person can use or comprehend.
The point I wonder about is the sustainability of every query fanning out into 30+ requests. Site owners aren't ready to have 98% of their traffic be non-monetizable bot requests. However, sites that have something to sell are...
With no moat, they aren't placed to capture much value; moats are what stops market competition from driving prices to the zero economic profit level, and that's even without further competition from free products that are being produced by people who aren’t even trying to support themselves in the market you are selling into, which can make even the zero economic profit price untenable.
Market competition doesn't work in an instant; even without a moat, there's plenty of money they can capture before it evaporates.
Think pouring water from the faucet into a sink with open drain - if you have high enough flow rate, you can fill the sink faster than it drains. Then, when you turn the faucet off, as the sink is draining, you can still collect plenty of water from it with a cup or a bucket, before the sink fully drains.
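The faucet-and-drain analogy can be made concrete with a toy simulation. This is just an illustrative sketch; all rates and step counts are made-up numbers, not a model of any real market:

```python
# Toy model of the faucet-and-open-drain analogy: value ("water")
# accumulates while inflow exceeds the drain rate, and remains
# collectible for a while even after the faucet is turned off.
# All numbers here are arbitrary, chosen only for illustration.

def simulate(inflow, drain, steps):
    """Return the water level after each step (level never goes below 0)."""
    level, levels = 0.0, []
    for t in range(steps):
        level = max(0.0, level + inflow(t) - drain)
        levels.append(level)
    return levels

# Faucet runs at 5 units/step for 10 steps, then shuts off;
# the drain removes 2 units/step throughout.
faucet = lambda t: 5.0 if t < 10 else 0.0
levels = simulate(faucet, drain=2.0, steps=20)

peak = max(levels)  # highest level reached while the faucet was on
# Steps after shutoff during which the sink still holds water:
steps_nonempty_after_off = sum(1 for lvl in levels[10:] if lvl > 0)
print(peak, steps_nonempty_after_off)  # → 30.0 10
```

With these toy rates the sink peaks at 30 units and stays non-empty for all 10 steps after the faucet shuts off, which is the analogy's point: even once competition (the drain) is the only force acting, there is still accumulated value to collect before it runs out.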
The startups that are using API credits seem like the most likely to be able to achieve a good return on capital. There is a pretty clear cost structure and it's much more straightforward whether you are making money or not.
The infrastructure side of things, tens of billions and probably hundreds of billions going in, may not be fantastic for investors. The return on capital should approach cost of capital if someone does their job correctly. Add in government investment and subsidies (in China, the EU, the United States) and it become extremely difficult to make those calculations. In the long term, I don't think the AI infrastructure will be overbuilt (datacenters, fabs), but like the telecom bubble, it is easy to end up in a position where there is a lot of excess capacity and the way you made your bet means getting wiped out.
Of course if you aren't the investor and it isn't your capital, then there is a tremendous amount of money to be made because you have nothing to lose. I've been around a long time, and this is the closest thing I've felt to that inflection point where the web took off.
> Market competition doesn't work in an instant; even without a moat, there's plenty of money they can capture before it evaporates.
Sure, in a hypothetical market where most participants aren't already losing money on below-profitable prices to keep mindshare before they ever try to extract profits. But you’d need a breakthrough around which a participant had some kind of moat to get there, even temporarily, in the LLM market.
Oh really? What are these changes supposed to look like? Who, essentially, will pay up? I don't really see it, aside from the M$ business case of offering AI as a guise for violating privacy much more harshly to better sell ads.