Comment by motoxpro
9 days ago
It's incredible how much it broadly aligns with what has happened, especially because it was written before ChatGPT.
Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
This forum has been so behind for too long.
Sama has been saying this for a decade now: “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity” (2015) https://blog.samaltman.com/machine-intelligence-part-1
Hinton, Ilya, Dario Amodei, the inventor of RLHF, the DeepMind founders. They all get it, which is why they're the smart cookies in those positions.
The first stage is denial. I get it; it's not easy to swallow the gravity of what's coming.
>This forum has been so behind for too long.
There is a strong financial incentive for a lot of people on this site to deny they are at risk from it, or to deny that what they are building carries risk and that they should bear culpability for it.
People have been predicting the singularity to occur sometime between 2030 and 2045 since waaaay before 2015, and not just enthusiasts. I dimly remember an interview with Richard Dawkins from back in the day...
Though that doesn't mean the current version of language models will ever achieve AGI, and I sincerely doubt they will. They'll likely be a component of the AI, but probably not the thing that "drives" it.
Vernor Vinge, as much as anyone, can be credited with the concept of the singularity. In his 1993 essay on it, he said he'd be surprised if it happened before 2005 or after 2030.
https://edoras.sdsu.edu/~vinge/misc/singularity.html
> “Development of Superhuman machine intelligence is probably the greatest threat to the continued existence of humanity”
If that's really true, why is there such a big push to rapidly improve AI? I'm guessing OpenAI, Google, Anthropic, Apple, Meta, Boston Dynamics don't really believe this. They believe AI will make them billions. What is OpenAI's definition of AGI? A model that makes $100 billion?
Because they also believe the development of superhuman machine intelligence will probably be the greatest invention for humanity. The possible upsides and downsides are both staggeringly huge and uncertain.
You can also have a prisoner's dilemma where no single actor is capable of stopping AI's advance.
And why are Altman's words worth anything? Is he some sort of great thinker? Or a leading AI researcher, perhaps?
No. Altman is in his current position because he's highly effective at consolidating power and has friends in high places. That's it. Everything he says can be seen as marketing for the next power grab.
Altman did play some part in bringing ChatGPT about. I think the point is that the people making AI, or running the companies making current AI, are saying to be wary.
In general, it's worth weighting the opinions of leaders in a field, about that field, over those of people who know little about it.
well, he did also have an early (failed) YC startup - does that add cred?
> Will people finally wake up that the AGI X-Risk people have been right and we’re rapidly approaching a really fucking big deal?
OK, say I totally believe this. What, pray tell, are we supposed to do about it?
Don't you see the irony of quoting Sama's dire warnings about the development of AI without at least mentioning that he is at the absolute forefront of the push to build the very technology that can destroy all of humanity? It's like he's saying "This potion can destroy all of humanity if we make it" as he works faster and faster to figure out how to make it.
I mean, I get it, "if we don't build it, someone else will", but all of the discussion around "alignment" seems just blatantly laughable to me. If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?
While I'm skeptical on the timeline, if we do ever end up building super intelligence, the idea that we can control it is a pipe dream. We may not be toast (I mean, we're smarter than dogs, and we keep them around), but we won't be in control.
So if you truly believe super intelligent AI is coming, you may as well enjoy the view now, because there ain't nothing you or anyone else will be able to do to "save humanity" if or when it arrives.
I love this pattern, the oldest pattern.
There is nothing happening!
The thing that is happening is not important!
The thing that is happening is important, but it's too late to do anything about it!
Well, maybe if you had done something when we first started warning about this...
See also: Covid/Climate/Bird Flu/the news.
> If on one hand your goal is to build "super intelligence", i.e. way smarter than any human or group of humans, how do you expect to control that super intelligence when you're just acting at the middling level of human intelligence?
That's exactly what the true AGI X-Riskers think! Sama acknowledges the intense risk but thinks the path forward is inevitable anyway, so he's hoping that building intelligence will give us the intelligence needed to solve alignment. The other camp, a la Yudkowsky, believes it's futile to just hope alignment gets solved before AGI becomes more intelligent and powerful than us and starts disregarding our wishes. At that point we've ceded any control of our future to an uncaring system that treats us as a means to its original goals, the way an ant is simply in the way of a Google datacenter. I don't see how anyone who already understands that "make the stock number go up" is not the best sole goal for making people happy can miss this.
Political organization to force a stop to ongoing research? Protests outside OAI HQ? There are lots of things we could, and many of us would, do if more people were actually convinced their lives were in danger.
It's not something you need to worry about.
If we get the Singularity, it's overwhelmingly likely Jesus will return concurrently.
Though possibly only in AI form.
There's a pretty good summary here of how well it has held up, graded by the significance of each claim:
https://www.lesswrong.com/posts/u9Kr97di29CkMvjaj/evaluating...