Comment by arexxbifs
5 hours ago
It's not that I disagree with the basic premise and concern of the text, but I'm not convinced about the "RAM shortage will lead to thin clients" argument, because the thin client is going to be a browser.
Everything today is a web app. If it doesn't exist and you want to vibe-code it? It's probably going to become a web app, vibed using a web app.
The problem is, web apps are stupendous memory hogs. We're even seeing Chromebooks with 8 gigs of RAM now. LLMs are all trained for, and implemented in, apps that assume the user can have $infinity browsers running, whether on their PC or their phone. It's going to be very hard to change that in a way that's beneficial to what passes for business models at AI companies.
Ah, the paradoxes of modern software.
Yeah, my work laptop is essentially a thin client, as everything is done via browser.
Even remote VDI instances are accessed through a web page now.
On top of that, add all the corporate bloatware and security-slopware, and suddenly my "thin" client is using 60% of 10 available cores and 85% of 16GB of RAM.
I don't think I need to explain how insane that resource usage is.