Comment by temp123789246
1 day ago
OpenAI initially claimed that GPT-2 was too dangerous to release in 2019.
How many times will labs repeat the same absurd propaganda?
The claim I remember was that releasing it would start an arms race for AGI, which I think it clearly did
GPT-2 was definitely a risk, just not of the same magnitude. It would have made (and did!) social media bot farms way more convincing and widespread. There was specific worry about that being used to sway elections, which is why they held back the model.
Anthropic and OpenAI have very different cultures and ethos. Point to other times where Anthropic has gone the way of cheap marketing tricks. Now look at OpenAI. Not even close.
Anthropic has done plenty of cheap marketing tricks as of late, see their recent non-functional C compiler that relied on a harness using gcc's entire test suite
It is functional. You can try it yourself or find third-party tests of it, even. Why do you think that it's a "cheap marketing trick" to test it on the GCC test suites?
Not surprising, given that they don't even know why claude-code works or doesn't work [1], i.e., there is no known theory of operation. That explains why they are afraid of it.
[1] https://news.ycombinator.com/item?id=47660925
Alternative view: GPT-2 was indeed a risk to society, but we just keep raising the bar and "accepting" the risks.
OpenAI did not make the strong, specific claims about GPT-2's abilities that Anthropic is making about Claude Mythos.