Comment by Maxion

2 years ago

Yep, the announcement is quite cheeky.

Ultra is out sometime next year, with GPT-4 level capability.

Pro is out now (?) with ??? level capability.

Pro benchmarks are here: https://storage.googleapis.com/deepmind-media/gemini/gemini_...

Sadly, it's 3.5 quality :(

  • Lol that's why it's hidden in a PDF.

    They basically announced GPT 3.5, then. Big whoop; by the time Ultra is out, GPT-5 is probably also out.

    • Isn't having GPT 3.5 still a pretty big deal? Obviously they're behind, but does anyone else offer that?

      3.5 is still highly capable, and Google's investment in making it multimodal, combined with potential integration with their other products, makes it quite valuable. Not everyone likes having to switch to ChatGPT for queries.


  • Table 2 indicates Pro is generally closer to 4 than 3.5 and Ultra is on par with 4.

    • If you think eval numbers mean a model is close to 4, then you clearly haven't been scarred by the legions of open-source models that claim 4-level evals but clearly struggle to perform challenging work as soon as you start testing them.

      Perhaps Gemini is different and Google has tapped into its own OpenAI-like secret sauce, but I'm not holding my breath.

    • Ehhh, not really; it even loses to 3.5 on 2/8 tests. To me it feels pretty lackluster: I use GPT-4 probably 100 times or more a day, and this would be a huge downgrade.
