Comment by mrshadowgoose
3 years ago
At this point, I wouldn't give much credibility to anything OpenAI claims about their research plans.
The game theory behind AGI research is identical to that of nuclear weapons development. There exists a development gap (the size of which is unknowable ahead of time) where an actor that achieves AGI first, and plays their cards right, can permanently suppress all other AGI research.
Even if one's intentions are completely good, failure to be first could result in never being able to reach the finish line. It's absolutely in OpenAI's interest to conceal critical information, and mislead competing actors into thinking they don't have to move as quickly as they can.
> The game theory behind AGI research is identical to that of nuclear weapons development...
Nuclear powers have not been able to reliably suppress others from creating nuclear weapons. Why would we think the first AGI will suppress all others perfectly?
The first nuclear power (the United States) chose not to. Had they decided to be completely evil, they certainly could have used the threat of nuclear annihilation (and carried it out against non-compliers) to achieve that goal.
They didn't have strategic ICBMs from the get-go. At the start they only had tactical nukes, enough to scare the Japanese into submission but not enough to completely nuke someone like Russia. Their nukes also had to be delivered by bomber, because they didn't have rockets. Dropping nukes on a country they were already heavily firebombing was a walk in the park compared to delivering them somewhere another power had air superiority.
Even if they had wanted to, they would definitely have failed at that mission. The Soviets had nukes just four years after the Americans, and in those four years the US simply couldn't have produced enough fissile material to annihilate the USSR, let alone the UK and France.
The people (and even military personnel) of the United States wouldn't have tolerated capriciously dropping atomic bombs on Moscow three years after we helped them defeat the Nazis. But perhaps something in the nature of AGI will allow its "discoverers" to act more unilaterally evil and with fewer fetters. An army of amoral robots would certainly remove a lot of checks on certain kinds of behavior.
The first true AGI will likely foom immediately.
I thought the same thing. They've not disclosed anything else, so why would they be even slightly honest about this?
When I see comments like this, I wonder about the personal morality of the poster and how they arrived at their worldview. It may be hard to believe, but there are some advantages to truthfulness in this world.
There are lots of advantages, but that doesn't mean all actors will automatically stick to truthfulness. There are also lots of advantages to being untruthful. Evaluating both possibilities is rational behavior, and definitely not a reason to question someone's personal morality.
So you think OpenAI has been responsible and transparent so far? You think cruising around Africa and other developing countries, basically bribing people to sign up for "Worldcoin", is trustworthy behaviour? Sorry, I don't buy this message at all. I'll stop here, but they're not my kind of people, that's for sure.
The only "game theory" here is trying to convince people your software is good and important, so you can raise money and sell products.
Why would that be the case? If anything, you would expect the first iteration of AGI to be either kept completely secret or leaked, directly or indirectly, negating any benefits. Also, AGI without weapons is not a military threat.
AGI that can engage in cyberwarfare, propaganda campaigns, and social engineering can achieve some military goals nevertheless.
Maybe it will just want to turn on the TV and watch South Park or something? Read the Qur'an? Who knows.