Comment by jsheard

7 months ago

> It’s really hard to read this as anything other than a desperate attempt to pretend the ai is more capable than it really is.

Tale as old as time, they've been doing this since GPT-2 which they said was "too dangerous to release".

For thousands of years, people believed that men and women had a different number of ribs. Never bothered to count them.

"""Release strategy

Due to concerns about large language models being used to generate deceptive, biased, or abusive language at scale, we are only releasing a much smaller version of GPT-2 along with sampling code.

This decision, as well as our discussion of it, is an experiment: while we are not sure that it is the right decision today … """ - https://openai.com/index/better-language-models/

It was the news reporting that it was "too dangerous".

If anyone at OpenAI used that description publicly, it's not anywhere I've been able to find it.

  • That quote says to me very clearly “we think it’s too dangerous to release” and specifies the reasons why. Then goes on to say “we actually think it’s so dangerous to release we’re just giving you a sample”. I don’t know how else you could read that quote.

    • Really?

      The part saying "experiment: while we are not sure" doesn't strike you as this being "we don't know if this is dangerous or not, so we're playing it safe while we figure this out"?

      To me this is them figuring out what "general purpose AI testing" even looks like in the first place.

      And there are quite a lot of people who look at public LLMs today and think their ability to "generate deceptive, biased, or abusive language at scale" means they should not have been released, i.e. that those saying it was too dangerous (even if it was the press rather than the researchers looking at how the models were used in practice) were correct. It's not all one-sided arguments from people who want uncensored models and think the risks are overblown.

      1 reply →

I talked to a Palantir guy at a conference once, and he literally told me: "I'm happy when the media hypes us up like a James Bond villain, because every time that happens the stock price goes up. In reality we mostly just aggregate and clean up data."

This is the psychology of every tech hype cycle.

  • Tech is by no means alone in this trick. Every press release is free advertisement and should be treated as such.

"Please please please make AI safety legislation so we won't have real competitors."