Comment by mohsen1
18 days ago
> available via the Gemini API in Google AI Studio and Vertex AI.
> Gemini 2.0, 2.0 Pro and 2.0 Pro Experimental, Gemini 2.0 Flash, Gemini 2.0 Flash Lite
3 different ways of accessing the API, more than 5 different but extremely similarly named models. Benchmarks only comparing to their own models.
Can't be more "Googley"!
They actually have two "studios":
Google AI Studio and Google Cloud Vertex AI Studio.
And each has its own documentation and its own way of "tuning" the model.
Talk about shipping the org chart.
> Talk about shipping the org chart.
Good expression. I’ve been thinking about a way to say exactly this.
A pithy reworking of Conway's Law https://en.wikipedia.org/wiki/Conway%27s_law
I love this phrase so much.
Google still has some unsettled demons.
> Talk about shipping the org chart.
To be fair, Microsoft has shipped like five AI portals in the last two years. Maybe four — I don’t even know any more. I’ve lost track of the renames and product (re)launches.
They made a new one to unite them all: Microsoft Fabric.
https://xkcd.com/927/
not to mention all the Copilots....
I wonder what the changelogs of the two studio products tell us about internal org fights (strife)?
I don't know why you're finding it confusing. There's Duff, Duff Lite and now there's also all-new Duff Dry.
I tend to prefer Duff Original Dry and Lite, but that’s just me
And all three are filled from the same tube, which the widescreen version doesn't show because the top is cut off.
I think this is a good summary: https://storage.googleapis.com/gweb-developer-goog-blog-asse...
- Experimental™
- Preview™
- Coming soon™
Don't forget the OG, "Beta".
https://en.wikipedia.org/wiki/History_of_Gmail#Extended_beta...
- Generally Available? Available?? Simply showing a checkmark???
- In Experimental? Coming soon??
Make it make sense.
Working with Google APIs is often an exercise in frustration. I actually like their base cloud offering the best, but their additional APIs can be all over the place. These AI-related ones are the worst.
Heuristic for Google Gemini API usage (rough sketch below):
    if the model name contains '-exp' or '-preview', then the API version is 'v1alpha'
    otherwise, use 'v1beta'
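In code, that might look roughly like this (just a sketch of the rule of thumb above, not an official mapping; the helper name and model strings are only illustrative):

    def gemini_api_version(model_name: str) -> str:
        # Rule of thumb only: experimental/preview models tend to live on
        # v1alpha, everything else on v1beta. Not an official mapping.
        if "-exp" in model_name or "-preview" in model_name:
            return "v1alpha"
        return "v1beta"

    print(gemini_api_version("gemini-2.0-pro-exp"))  # v1alpha
    print(gemini_api_version("gemini-2.0-flash"))    # v1beta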
Honestly, naming conventions in the AI world have been appalling regardless of the company.
Google isn't even the worst, in my opinion. Off the top of my head:
Anthropic:
Claude 1, Claude Instant 1, Claude 2, Claude Haiku 3, Claude Sonnet 3, Claude Opus 3, Claude Haiku 3.5, Claude Sonnet 3.5, Claude Sonnet 3.5v2
OpenAI:
GPT-3.5, GPT-4, GPT-4o-2024-08-06, GPT-4o, GPT-4o-mini, o1, o3-mini, o1-mini
Fun times when you try to set up throughput provisioning.
It's:
GPT-3.5, GPT-4, GPT-4-turbo, GPT-4o-2024-08-06, GPT-4o, GPT-4o-mini, o1-preview, o1 (low), o1 (medium), o1 (high), o1-mini, o3-mini (low), o3-mini (medium), o3-mini (high)
At this pace we are going to get to USB 3.2 Gen 1 real fast.
I don't understand why, if they're going to use shorthands to make the tech seem cooler, they can't at least use mnemonic shorthands.
Imagine if it went like this:
> GPT-4o-2024-08-06 GPT-4o
From what I understand, GPT-4o maps to the most recent snapshot, so it will change over time. The models with the date appended will not change over time.
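If that's right, the practical upshot is: pin the dated snapshot when you need stable behaviour, and use the bare alias when you want whatever is current. Rough sketch with the openai Python SDK (the prompt is just for illustration):

    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    # Pinned snapshot: stays on this dated release until you change it yourself.
    pinned = client.chat.completions.create(
        model="gpt-4o-2024-08-06",
        messages=[{"role": "user", "content": "Say hi"}],
    )

    # Floating alias: follows whatever snapshot "gpt-4o" currently points at,
    # so behaviour can shift without any code change on your side.
    floating = client.chat.completions.create(
        model="gpt-4o",
        messages=[{"role": "user", "content": "Say hi"}],
    )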
Google is the least confusing to me: old-school version numbers, and Pro is better than Flash, which is fast and meant for "simple" stuff (which can be effortless intermediate-level coding at this point).
OpenAI is crazy. One day we might have an o5 that is a reasoning model and a 5o that is not, with the two belonging to different generations, and where "o" meant "Omni" despite o1-o3 no longer being audiovisual like 4o.
Anthropic is crazy too. Sonnets and Haikus, just why... and a 3.5 Sonnet released in October that was better than 3.5 Sonnet (not a typo). And no one knows why there never was a 3.5 Opus.
> And no one knows why there never was a 3.5 Opus.
If you read between the lines, it's been pretty clear: the top labs are keeping their top models in-house and using them to train the next generation (either SotA or faster/cheaper, etc.).
4o is a more advanced model than o1 or o3, right!?
Mistral vs mistral.rs, Llama and llama.cpp and ollama, groq and grok. It's all terrible.
Claude Sonnet 3.5...no, not that 3.5, the new 3.5. o3-mini, no not o2. yes there was o1, yes it's better than gpt-4o.
Clearly, the next step is to rename one to "Google Chat".
And kill it for the 10th time.
You missed the first sentence of the release:
> In December, we kicked off the agentic era by releasing an experimental version of Gemini 2.0 Flash
I guess I wasn't building AI agents in February last year.
Yeah, some of us have been working predominantly on agents for years now, but at least people are finally paying attention. Can't wait to be told how I'm following a hype cycle again.