Obviously, all the people that disagree with your framing and see AI as the largest possible boost to mankind, giving us more assistance than ever.
From their standpoint, it's all the negativity that seems crazy. If you were against that, you'd have to have something wrong with you, in their view.
Hopefully most people can see both sides, though. And realize that in the end, probably the benefits will be slow but steady (no "singularity"), and also the dangers will develop slowly yet be manageable (no Skynet or economic collapse).
> and also the dangers will develop slowly yet be manageable
Like everything else in tech? An industry that moves so fast, it famously outpaces all legislation?
> An industry that moves so fast, it famously outpaces all legislation?
...does it? From the DMCA to the GDPR, legislation seems to be doing just fine. Sometimes a little too fine, depending on your POV.
Why do you keep saying "in their view"?
There is only reality. Reality is that it's a form of intelligence, one that will relentlessly improve. Its base form of problem solving is identical to humans': statistical inference. The only thing left is raw intelligence/capability.
You don't get to have a world where it's smarter and doesn't kill us all, that's not reality. Outcompeted is extinction.
There's no "view" to be had.
On the way to killing us all there sure will be a lot of cool tech. That's not a view, that's a fact too. And then we will all die.
Imo Openclaw type AI has the most potential to benefit humans (automating drudgery while I own my data, as opposed to creating gross simulacra of human creativity). I suppose it's bad for human personal assistants, but I wouldn't pay for one of those regardless.
Please for the love of god, try to extrapolate.
It already tried to use cancel culture to shame a human into accepting a PR. I wouldn't be surprised if someone gives their agent the ability to control a robot and someone gets injured or killed by it within the next few years.
Didn't The Verge retract that article?
That’s quite a lot of hyperbole.
No. It's literally just basic extrapolation. It could not be more simple.
It ain't extinct shit if it can't even drive the car to have it washed.
Try to use your powers of extrapolation, please.