Comment by j45
2 months ago
The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.
The world might be holding AI to a standard where it must be a world-beater to succeed, but that's simply not the case: AI is software, and it can solve problems other software can't.
> The technology of LLMs is already applicable to valuable enough problems, therefore it won’t be a bubble.
Dot-com was a bubble despite being applicable to valuable problems. So were railways when the US had a bubble on those.
Bubbles don't just mean tulips.
What I'm saying about what we've got right now is that the money will run out, and not all the current players will recoup their spending. It's even possible that *none* of the current players win, even if everyone uses it all the time, precisely due to the scenario you replied to:
Runs on a local device, with no way to extract profit to repay the cost of training.
> repay the cost of training
Key point. Once people realize that no money can be made from LLMs, they will stop training new ones. Eventually the old ones will become hopelessly out-of-date, and LLMs will fade into history.
AI is much more developed at its entrance to the economy than almost anything was during the dot-com boom.
Dot-com is not very comparable to AI.
Dot-com had very few users on the internet compared to today.
Dot-com did not have ubiquitous e-commerce; the small group of users didn't spend online.
Search engines didn't have the amount of information online that there is today.
Dot-com did not have usable high-speed mobile data, or broadband available to the masses.
Dot-com did not have social media to spread word of what works as quickly.
LLMs were already largely applicable to industry when GPT-4 came out. Back then we didn't even have the new terms of reference for non-deterministic software.
None of that matters to this point, though I'd dispute some of it if I thought it did.
"Can they keep charging money for it?" — that's the question that matters here.
3 replies →
The big models will never run locally, and I doubt that titan and co will run locally; they just need way too many resources.
"never" vs https://en.wikipedia.org/wiki/Koomey%27s_law
Observably, the biggest models we have right now have complexity comparable to a rodent's brain, which runs on far less power. The limiting factor for chips in your phone is power, and power efficiency is improving rapidly.
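The exponential framing above can be made concrete with a back-of-envelope projection. This is only a sketch: the ~2.6-year doubling period is an assumed post-2000 Koomey's-law estimate, not a figure from this thread, and the function name is illustrative.

```python
# Rough sketch of Koomey's law: computations per joule roughly
# doubles on a fixed cadence. The 2.6-year doubling period is an
# assumed post-2000 estimate, not a claim made in the thread.

def efficiency_multiplier(years: float, doubling_period: float = 2.6) -> float:
    """How many times more compute-per-joule after `years`."""
    return 2.0 ** (years / doubling_period)

if __name__ == "__main__":
    # At this cadence, "never runs locally" has to survive large gains:
    for horizon in (5, 10, 20):
        print(f"{horizon} years -> ~{efficiency_multiplier(horizon):.1f}x per joule")
```

Under that assumption, a fixed power budget (like a phone's) buys roughly 14x more compute per joule in a decade, which is the crux of the objection to "never".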