
Comment by 2ndorderthought

7 hours ago

"my model is the most dangerous"

"No mine is the most dangerous"

"Nuh uh mine is"

"Mine could kill everyone!"

"Mine could do it faster!"

"Prove it!!!"

This is where we are

Yeah, I guess two companies that would otherwise be considered headed for bankruptcy have models too expensive to run. Since they don't see themselves making money any time soon, they have to turn every future model into a weird fascination.

  • There's a story to tell in that: 1) Google has a transformer-based AI that hallucinates too much to release 2) OpenAI replicates the tech then YOLOs it 3) Everyone says: look how Google is getting left behind! Google thinks: the second mouse gets the cheese. 4) Google gets the cheese, OpenAI is absorbed by Microsoft or just disappears (or both).

    • Certainly could turn out that way.

TPUs were their real moat. All that capacity used throughout their suite of products on non-chatbot features, ready to rip for consumers as soon as somebody else opened the floodgates to the public.

      Now all their competitors lose money on every token paying their cloud providers (of course it's funny money, maybe they're just giving the cloud providers equity) while Google is sitting calmly over there, actually owning everything they need for any eventuality, and beholden to nobody.

  • China’s DeepSeek prices new V4 AI model at 97% below OpenAI’s GPT-5.5

Did somebody say that Elon is stealthily funding: Seven lawsuits filed against OpenAI by families of Canada mass-shooting victims

As always, when the going gets tough, the tough ultimately resort to lawsuits.

    • If the difference is that large, it seems plausible to me that the Chinese models are subsidized in order to gain market share, this is not exactly the first time the Chinese government has done so (or at least been rumoured to have done so).

      You should assume that everyone has a hidden agenda when money is involved.

      7 replies →

• It’s their promo price until the end of May. It’s also not nearly as good as 5.5. I’ve had 3 different tasks just this week that DeepSeek failed at but 5.5 does perfectly.

Remember that they have been saying that since gpt2.

I didn't think crying could be such a successful business model.

  • People keep on mentioning gpt2, but it's worth recalling that back in 2019 it was basically the first model that was capable of zero-shot generation of coherent multi-paragraph text. Having it write security exploits like Mythos wasn't even on the radar. Rather, the concerns were about misuse and societal implications, which in retrospect were pretty prescient: https://openai.com/index/gpt-2-6-month-follow-up/

  • It's just "thinking past the sale" which they've been doing forever.

    i.e. "I'm so worried that our capped for-profit structure will limit your returns when we make over 1 Trillion in profit".

Marketing stunts. The equivalent of holding a line outside a popular bar.

  • Given the USG has asked Anthropic not to release Mythos I'd wager it's more than a marketing stunt.

    • It can be both and I don't know how much I would trust the USG as the canary in the coal mine given their technical readiness typically seems low across most institutions in that they are probably more exposed because they haven't shored up their systems.

Can't wait for the Chinese models to completely wipe the floor with them in 6 months.

• I doubt it. If they don't release it, Chinese companies will be unable to break the TOS and use it to acquire high-quality training data...which, I suspect, is how they've kept pace.

    • Z.AI, Moonshot, DeepSeek all have a pipeline of data of their own now due to capturing a slice of the market through cheap tokens. It's not impossible to imagine that they might share the data too if the CCP thinks that will help their AI strategy.

      2 replies →

It's like that phone call in The Big Short where Goldman suddenly changes their mind once they hold a position.

Yup, we are somewhere between "my model can beat up your model" and "you wouldn't know my model, it lives in Canada".

This is the world we live in.

I am convinced the models are not as good as they say, but everyone benefits from the continued AI hype, so nobody says so.

These models demonstrably have good vulnerability research capabilities.

I'm sure their marketing department is ecstatic, but you guys are far more hype-based than what you're calling out.