Comment by nsoonhui

3 days ago

One thing I’m curious about is this: Ilya Sutskever wants to build Safe Superintelligence, but he keeps his company and research very secretive.

Given that building Safe Superintelligence is extraordinarily difficult — and no single person’s ideas or talents could ever be enough — how does secrecy serve that goal?

If he or his employees are actually exploring genuinely new, promising approaches to AGI, keeping those secret helps avoid a breakneck arms race like the one LLM vendors are currently engaged in.

Arms races like that do nothing to increase participants' caution.

It doesn't sound like you listened to the interview. He addresses this directly: he says he may make releases that would otherwise be held back, because he believes it's important for these developments to be seen by the public.

  • No reasonable person would do that! That is, if you had the key to AI, you wouldn't share it, and you would do everything possible to prevent its dissemination. Meanwhile you would use it to conquer the world! Bwahahahaaaah!