Comment by eigenvalue

2 years ago

But wouldn't the focus on "safety first" sort of preclude them from giving their researchers the unfettered right to publish their work however and whenever they see fit? Isn't the idea to basically try to solve the problems in secret and only release things when they have high confidence in the safety properties?

If I were a researcher, I think I'd care more about ensuring that I get credit for any important theoretical discoveries I make. This is something that LeCun is constantly stressing, and I think people underestimate this drive. Of course, there might be enough researchers today who are sufficiently scared of bad AI safety outcomes that they're willing to subordinate their own egos and professional drive to the "greater good" of society (at least in their own minds).

If you're working on superintelligence, I don't think you'd be worried about not getting credit due to a lack of publications, of all things. If it works, it's the sort of thing that gets you in the history books.

  • Not sure about that. It might get Ilya in the history books, and maybe some of the other high-profile people he recruits early on, but a junior researcher/developer who makes a high-impact contribution could easily get overlooked. Whereas if that person can have their name as lead author on a published paper, it makes it much easier to measure individual contributions.

    • There is a human cognitive limit to the detail in which we can analyze and understand history.

      This limit, just like our population count, will not outlast the singularity. I did the math a while back, and at the limit of available energy, the universe has comfortable room for something like 10^42 humans. Every single one of those humans will owe their existence to our civilization in general and the Superintelligence team in particular. There'll be enough fame to go around.