Comment by leetharris

17 days ago

These names are unbelievably bad. Flash, Flash-Lite? How do these AI companies keep doing this?

Sonnet 3.5 v2

o3-mini-high

Gemini Flash-Lite

It's like a competition to see who can make the goofiest naming conventions.

Regarding model quality: we experiment with Google models constantly at Rev, and they are consistently the worst of all the major players. They always benchmark well but fail in real tasks. If this is just a small update to the gemini-exp-1206 model, then I think they will still be in last place.

Haiku/Sonnet/Opus are easily the best-named models imo.

  • As a person who thought they were arbitrary names when I first discovered them and spent an hour trying to figure out the difference, I disagree. It gets even more confusing when you realize that Opus, which according to their silly naming scheme is supposed to be the biggest and best model they offer, is seemingly abandoned, and that title has been given to Sonnet, which is supposed to be the middle-of-the-road model.

  • I used to agree, before one of the "Sonnet" models overtook the best "Opus" model.

> It's like a competition to see who can make the goofiest naming conventions.

I'm still waiting for one of them to overflow from version 360 down to One.

I completely agree! I'm currently using Gemini's "2.0 Flash Thinking Experimental with apps" model.

DeepSeek Reasoner is a pretty good name for a pretty good model, I think... pity the performance is so terrible via the API.

What do you use LLMs for at Rev? And a separate question: how does your diarization compare to Deepgram or AssemblyAI?