
Comment by credit_guy

16 hours ago

AI does not have that potential. Not now, maybe in some distant future, as in decades from now.

Right now, AI has the potential to increase productivity. Work that's done by 10 people without AI can be done by 5 people with AI, so 5 people can be laid off. But soon, companies will realize that rather than laying off half of their workforce, they can simply produce twice as much as before.

In reality, the AI productivity boost is nothing close to 100%. Maybe 10% per year. That could translate into 5% layoffs and 5% increased output. Nothing extraordinary compared to other times in history.

Even following your model, in twenty years we end up with 0.95^20 ≈ 0.36, i.e. a whopping 64% of the population not needed. People will do their best to adapt, but the article rightly points out that even when blue-collar jobs were off-shored and there were plenty of white-collar jobs available, most people couldn't adjust within a decade or two, and the shift still caused significant social strain.
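
Spelling out the compounding (the 5% annual displacement rate is just the rough assumption from above, not a measured figure):

```python
# Back-of-the-envelope: if roughly 5% of jobs are displaced each year,
# the employed share shrinks multiplicatively, not linearly.
years = 20
annual_displacement = 0.05  # assumed rate, per the comment above

remaining_share = (1 - annual_displacement) ** years
print(f"Share of jobs remaining after {years} years: {remaining_share:.2f}")  # ~0.36
print(f"Share no longer needed: {1 - remaining_share:.2f}")                   # ~0.64
```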

Now imagine most white-collar jobs gone too, our cracked democratic institutions finally crumbling, and surveillance tech and repressive police everywhere to quash protesters and dissidents... I've coined a term for a way out of that dystopia, if we manage it: The Grim Revolts[^1]. I hope it remains the feverish fiction it's meant to be, and that we find good ways to prevent things from spiraling.

[^1]: https://w.ouzu.im/

> Right now, AI has the potential to increase productivity.

The detail many seem to forget is that "AI" is Artificially Inexpensive. It must eventually turn a profit, or collapse. Once providers start charging what they must to remain solvent, and/or the output is burdened with advertising, the gains, real or perceived, will likely evaporate.

  • I disagree. AI is very cheap. People speculate that OpenAI, Anthropic, and Google heavily subsidize the AI they provide, but the evidence points towards this not being true. Look at the independent providers, like Cerebras, Groq, TogetherAI, and dozens of others. Some may be swimming in venture capital money and can afford to subsidize, but I doubt all of them can. And if they can't, how do you explain that the cost per million tokens is so low?

    And this is just now. Inference costs are plummeting because models are becoming more efficient. I can get 6 tokens per second on my local Ollama from GPT-OSS-20B using only the CPU, and 11 tokens per second from Qwen3-30B. This was unthinkable 6 months ago, and I am quite certain I'll get faster speeds 6 months from now and faster still 6 months after that. Model architectures keep improving, and models with the same number of parameters keep getting smarter. (A rough way to measure these speeds is sketched below.)
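
    For anyone who wants to reproduce those numbers, here is a rough sketch against Ollama's local HTTP API (the model tag, prompt, and default port are only examples; the server must already be running and the model pulled, and the eval fields come from the generate response):

    ```python
    # Crude tokens-per-second measurement against a local Ollama server.
    # Assumes the default port (11434) and that the model tag below has
    # already been pulled, e.g. `ollama pull gpt-oss:20b`.
    import requests

    MODEL = "gpt-oss:20b"  # example tag; substitute whatever you run locally

    resp = requests.post(
        "http://localhost:11434/api/generate",
        json={
            "model": MODEL,
            "prompt": "Explain compound interest in two sentences.",
            "stream": False,
        },
        timeout=600,
    )
    data = resp.json()

    # eval_count  = number of generated tokens
    # eval_duration = generation time in nanoseconds
    tps = data["eval_count"] / (data["eval_duration"] / 1e9)
    print(f"{MODEL}: {tps:.1f} tokens/second")
    ```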