
Comment by skippyboxedhero

1 day ago

GPT-2, o1, Opus... been here so many times. The reason they do this is that they know it works (and they seem to specifically employ credulous people who are prone to believe AGI is right around the corner). There haven't been significant innovations, and the code generated is still not good, but the hype cycle has to retrigger.

I remember when OpenAI created the first thinking model with o1 and there were all these breathless posts on here hyperventilating about how the model had to be kept secret, how dangerous it was, etc.

Fell-for-it-again award. All thinking does is burn output tokens for accuracy; it's the AI getting high on its own supply. This isn't innovation, but it was supposed to be super-AGI. Not serious.

> All thinking does is burn output tokens for accuracy

“All that phenomenon X does is make a tradeoff of Y for Z”

It sounds like you're indignant about it being called thinking; that's fine, but surely you can recognize that the mechanism you're criticizing actually works really well?

>I remember when OpenAI created the first thinking model with o1 and there were all these breathless posts on here hyperventilating about how the model had to be kept secret, how dangerous it was, etc.

I've read that about Llama and Stable Diffusion. AI doomers are, and always have been, retarded.

Lol, it sounds like you haven't used a model since GPT-2.

  • Just checked my Anthropic subscription start date: September 2023, I believe before they announced the public launch.

    Sorry kid.

    • Genuine question - if you don't think the models are improved or that the code is any good, why do you still have a subscription?

      You must see some value, or are you in a situation where you're required to test or use it, e.g. to report on it, or because your employer requires it?

      (I would disagree about the code, the benefits seem obvious to me. But I'm still curious why others would disagree, especially after actively using them for years.)
