Comment by nickandbro

18 hours ago

Wow, if the benchmarks check out with the vibes, this could almost be like a DeepSeek moment, with Chinese AI now being neck and neck with SOTA models from US labs

[flagged]

  • > It's not anywhere close

    Close to what, and how are you measuring?

    > nobody in the USA would be spending 7 figures on infrastructure for it

    Au contraire, if AI had a moat it would pay for itself. They're funneling capital into infrastructure because they know it can't.

    • You need the infrastructure to train and run it regardless, though. Kimi is great, but I'm not getting the same performance running it on my MacBook or a 3090 as I would running it on an H100 or a Grace Hopper supercomputer. Pretend you did have said moat. Why wouldn't you also build infrastructure to run it on?


With the previous generation? Yes. With 10T mythos-level models? Not even close.