Comment by BirAdam
2 hours ago
I've been conflicted on AI/ML efforts for years. On one hand, the output of locally run inference is astounding. There are plenty of models on HuggingFace that I can run on my Mac Studio and provide real value to me every single work day. On the other hand, while I have the experience to evaluate the output, some of my younger colleagues do not. They are learning, and when I have time to help them, I certainly do, but I wish they just didn't have access to LLMs. LLMs are miracle tools in the right hands. They are dangerous conveniences in the wrong hands.
Wasted money is a totally different topic. If we view LLMs as a business opportunity, they haven't yet paid off. To imply, however, that a massive investment in GPUs is a waste seems flawed. GPUs are massively parallel compute. Were the AI market to collapse, we could imagine these GPUs being sold at severe discounts, which would then likely spur some other technological innovation, just as the crypto market laid the groundwork for ML/AI. When a resource gets cheap, more people gain access to it and innovation occurs. Things that were previously cost prohibitive become affordable.
So, whether or not we humans achieve AGI or make tons of money off of LLMs is somewhat irrelevant. The investment is creating goods of actual value even if those goods are currently overpriced, and should the currently intended use prove to be poor, a better and more lucrative use will be found in the event of an AI market crash.
Personally, I hope that the AGI effort is successful, and that we can all have a robot housekeeper for $30k. I'd gladly trade one of the cars in my household to never do dishes, laundry, lawn mowing, or household repairs again, just as I paid a few hundred to never have to vacuum my floors (though I actually still do once a month, when I move furniture to vacuum places the Roomba can't go; a humanoid robot could do that for me).
What's the lifecycle length of GPUs? 2-4 years? By the time the OpenAIs and Anthropics pivot, many GPUs will be beyond their half-life. I doubt there would be many takers for that infrastructure.
Especially given the humungous scale of infrastructure that the current approach requires. Is there another line of technology that would require remotely as much?
Note, I'm not saying there can't be. It's just that I don't think there are obvious shots at that target.
> On one hand, the output of locally run inference is astounding. There are plenty of models on HuggingFace that I can run on my Mac Studio and provide real value to me every single work day. On the other hand, while I have the experience to evaluate the output, some of my younger colleagues do not. They are learning, and when I have time to help them, I certainly do, but I wish they just didn't have access to LLMs. LLMs are miracle tools in the right hands. They are dangerous conveniences in the wrong hands.
This is weird to me. Surely you recognise that, just as they don't know what they don't know (which is presumably the problem when it hallucinates), you must have the same issue; there's just no old greybeard around to wish you didn't have access.
Well, I'm the graybeard (literally and metaphorically). I know enough not to blindly trust the LLM, and I know enough to test everything whether written by human or machine. This is not always true of younger professionals.
There's a big difference between:
"creating goods of actual value"
and
"creating goods of actual value for any price"
I don't think it's controversial that these things are valuable; rather, the cost to produce these things is what's up for discussion, and that's the real problem here. If the price is too high now, then people will experience real losses down the line, and real losses have real consequences.
I don't think so about the GPUs. It's a sunk cost that won't be repurposed easily--just look at what happened to Nortel. Did all those PBXs get repurposed? Nope--trash. Those data centers are going to eat it hard, that's my prediction.

It's not a terrible thing, per se--"we" printed trillions the past few years and those events need a sink to get rid of all the excess liquidity. It's usually a big war, but not always. Last time it was a housing bubble. Everyone was going to get rich on real estate, but not really. It was just an exercise in finding bag holders. That's what this AI/data center situation amounts to as well--companies had billions in cash sitting around doing nothing, might as well spend it. Berkshire has the same problem--hundreds of billions with nowhere to be productively invested. It doesn't sound like a problem, but it is.
My humble take on AGI is that we don't understand consciousness, so how could we build something conscious except by accident? It seems like an extremely risky and foolish thing to attempt. Luckily, humans will fail at it.