Comment by pickleRick243
4 hours ago
I always find these "anti-AI" AI believer takes fascinating. If true AGI (which you are describing) comes to pass, there will certainly be massive societal consequences, and I'm not saying there won't be any dangers. But the economics in the resulting post-scarcity regime will be so far removed from our current world that I doubt any of this economic analysis will be even close to the mark.
I think the disconnect is that you are imagining a world where somehow LLMs are able to one-shot web businesses, but robotics and real-world tech are left untouched. Once LLMs can publish in top math/physics journals with little human assistance, it's a small step to dominating NeurIPS and getting us out of our mini-winter in robotics/RL. We're going to have Skynet or Star Trek, not the current weird situation where poor people can't afford healthy food, but can afford a smartphone.
> We're going to have Skynet or Star Trek
Star Trek only got a good society after an awful war, so neither of these options is good.
Star Trek only got a good society after discovering FTL and the existence of all manner of alien societies. And even then, Star Trek's explanations for why humanity turned good sound quite implausible given what we know about human nature and history. No effing way it will ever happen, even if we discover aliens. It's just a wishful fever dream.
I'm definitely not a Star Trek connoisseur, but I thought a big part of the lore is the "never again"-ish response to the wars culminating in WW3?
But anyway, I share your lack of optimism.
It isn't even just the aliens (although my headcanon is that the human belief that they "evolved beyond their base instincts" is partly a trauma response to first contact and World War 3, and partly Vulcan propaganda/psyop). Star Trek's post-scarcity society depends on replicators and transporters and free energy, all of which defy the laws of physics in our universe (on top of FTL).
We'll never have Star Trek. We'll also never have Skynet, because Skynet was too rational. It seems obvious that any AGI that emerges from LLMs - assuming that's possible - will not behave according to the old "cold and logical machine" template of AI common in sci-fi media. Whatever the future holds will be more stupid and ridiculous than we can imagine, because the present already is.