Comment by octoberfranklin

3 days ago

> Academia has been marginalized from meaningfully participating in AI progress and industry labs have stopped publishing

Exactly like semiconductor wafer processing.

If anyone believes they're close to a generalization end-game with respect to AI capabilities, it makes no sense to do anything that could erode their advantage by enabling others to compete. Collaboration makes sense on timeframes that don't imply zero-sum games.

Board games like Settlers of Catan are a good example of this behavior: at the start of the game everyone trades, but near the end, if you suspect someone is about to win, it makes little sense to trade unless you think it will help you win first.

  • > Collaboration makes sense on timeframes that don't imply zero-sum games.

    People are fooling themselves if they think AGI will be zero-sum. Even if only one group somehow miraculously develops it, there will immediately be fast followers. And the more likely scenario is that more than one group would independently pull it off - if it's even possible.

    • Maybe, but at least OpenAI, xAI, and any Bostrom believer think this is the case.

      Ilya Sutskever (Sep 20, 2017)

      > The goal of OpenAI is to make the future good and to avoid an AGI dictatorship. You are concerned that Demis could create an AGI dictatorship. So do we. So it is a bad idea to create a structure where you could become a dictator if you chose to, especially given that we can create some other structure that avoids this possibility.

      Nick Bostrom - Decisive Strategic Advantage https://www.lesswrong.com/posts/vkjWGJrFWBnzHtxrw/superintel...

    • > if it's even possible.

      Why do people keep repeating this? The only way artificial intelligence is impossible is if intelligence is impossible, and we're here, so that pretty much removes that impediment.