Comment by ben_w

7 months ago

Really?

The part saying "experiment: while we are not sure" doesn't strike you as "we don't know if this is dangerous or not, so we're playing it safe while we figure it out"?

To me this is them figuring out what "general purpose AI testing" even looks like in the first place.

And there are quite a lot of people who look at public LLMs today and think their ability to "generate deceptive, biased, or abusive language at scale" means they should not have been released, i.e. that those saying it was too dangerous were correct (even if it was the press, rather than the researchers looking at how the models were used in practice, saying so). It's not all one-sided arguments from people who want uncensored models and think the risks are overblown.

Yeah, that's fair. I think I was reacting to the strength of your initial statement. Reading that press release and writing a piece stating that OpenAI thinks GPT-2 is too dangerous to release feels reasonable to me. But it is less accurate than saying that OpenAI thinks GPT-2 _might_ be too dangerous to release.

And I agree with your basic premise. The dangers imo are significantly more nuanced than most people make them out to be.