
Comment by temp123789246

1 day ago

OpenAI initially claimed that GPT-2 was too dangerous to release in 2019.

How many times will labs repeat the same absurd propaganda?

The claim I remember was that releasing it would start an arms race for AGI, which I think it clearly did.

GPT-2 was definitely a risk, just not of the same magnitude. It would have made (and did make!) social media bot farms far more convincing and widespread. There was specific worry about them being used to sway elections, which is why they held back the model.

Anthropic and OpenAI have very different cultures and ethos. Point to other times when Anthropic has resorted to cheap marketing tricks. Now look at OpenAI. Not even close.

  • Anthropic has pulled plenty of cheap marketing tricks of late; see their recent non-functional C compiler, which relied on a harness using gcc's entire test suite

Alternative view: GPT-2 was indeed a risk to society, but we just keep raising the bar and "accepting" the risks.

OpenAI did not make the strong, specific claims about GPT-2's abilities that Anthropic is making about Claude Mythos.