Comment by EMM_386

2 years ago

I don't see any way of stopping this. If the risks are as great as some claim, that is not a great situation.

So now we have an executive order with a very limited scope. Then tomorrow, suddenly the world's most powerful AI is announced, and not in the United States.

Ok, so now we want to make sure that one is safe. An executive order from the White House has no effect on it. This can continue until it's decided the stakes are getting too high. Then I suppose you could have the United Nations start trying to figure out how to maintain safety. Of course, there will be countries that will simply ignore anything that is decided, hiding increasingly advanced systems with unknown purposes. It will probably take longer for nations to determine what defines "human values," so that AI respects them, than it takes to create another leap in AI capabilities.

Then there would simply be more concerns coming into play. Countries will go to war to try to stop other countries' nuclear ambitions; is it possible that AI poses enough of a threat that similar conflicts arise?

Basically, if AI is as large a potential threat as we are envisioning, there are so many different potential dangers that trying to solve them all while staying ahead of the pace of advancement seems unrealistic. While someone is trying to ensure we don't end up with systems going rogue, someone else needs to handle the fact that we can't have AI creating certain things. AI systems can't be allowed to tinker with viruses, for example, where unexpected creations can lead to extremely bad situations.

The initial stages of this have already begun, and time is ticking. I guess we'll see.