Comment by zozbot234

17 hours ago

They compete quite well with the "mini" or "nano" model classes given the price of inference. You'd need to "model hop" anyway; using Opus for everything is quite wasteful.

Now, those aren't really "frontier models", are they?

  • They are on the frontier of local models, where the game is often to get the best bang for the buck. You can always scale model size and compute (Mythos, GPT Pro, Gemini DeepThink) to reach better outcomes, but that's not a very interesting strategy.

    • > They are on the frontier of local models

      That's not what anyone means when they say "frontier models"; don't change the definition. It's almost as bad as "open weight" being conflated with "open source" when it comes to local models.