Title of the document is "[Gemini 3 Pro] External Model Card - November 18, 2025 - v2", in case you needed further confirmation that the model will be released today.
Also interesting that Google Antigravity (antigravity.google / https://github.com/Google-Antigravity ?) leaked. I remember seeing this subdomain recently. Probably Gemini 3 related as well.
"Google Antigravity" refers to a new AI software platform announced by Google designed to help developers write and manage code.
The term itself is a bit of a placeholder or project name, combining the brand "Google" with the concept of "antigravity"—implying a release from the limitations of traditional coding.
In simple terms, Google Antigravity is a sophisticated tool for programmers that uses powerful AI systems (called "agents") to handle complex coding tasks automatically. It takes the typical software workbench (an IDE) and evolves it into an "agent-first" system.
Agentic Platform: It's a central hub where many specialized AI helpers (agents) live and work together. The goal is to let you focus on what to build, not how to build it.
Task-Oriented: The platform is designed to be given a high-level goal (a "task") rather than needing line-by-line instructions.
Autonomous Operation: The AI agents can work across all your tools—your code editor, the command line, and your web browser—without needing you to constantly supervise or switch between them.
> Google Antigravity is an agentic development platform, evolving the IDE into the agent-first era. Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks elevating all aspects of software development.
It says it's been trained from scratch. I wonder if it will have the same indescribable magic that makes me spend an hour every day with 2.5. I really love the results I can get with 2.5 Pro. Google eventually limiting AI Studio will be a sad day.
Also I really hoped for a 2M+ context. I'm living on the context edge even with 1M.
Pathways, as I understand it, is these days mostly just the name for their training orchestrator for distributed JAX work - https://github.com/google/pathways-job
> Developments to the model architecture contribute to the significantly improved performance from previous model families.
I wonder how significant this is. DeepMind was always more research-oriented than OpenAI, which mostly scaled things up. They may have come up with a significantly better architecture (Transformer MoE still leaves a lot of room).
> The training dataset also includes: publicly available datasets that are readily downloadable; data obtained by crawlers; licensed data obtained via commercial licensing agreements; user data (i.e., data collected from users of Google products and services to train AI models, along with user interactions with the model) in accordance with Google's relevant terms of service, privacy policy, service-specific policies, and pursuant to user controls, where appropriate; other datasets that Google acquires or generates in the course of its business operations, or directly from its workforce; and AI-generated synthetic data.
Well, don't complain when you are using Gmail and your emails are being used to train Gemini.
It says "pursuant to user controls, where appropriate". We can now sleep peacefully with the knowledge that Google will give us the tools to disable this where it's not inappropriate.
So that's why Google is getting sued for Gemini being enabled by default in Gmail and analyzing emails and our data; completely going against whatever privacy policy they came up with. [0]
I don't expect them to follow their own privacy policies.
What's wild here is that, among all the scores they've absolutely killed, Anthropic and Claude Sonnet 4.5 somehow won a single victory in the fight: SWE-Bench Verified, and only by a single point.
I already enjoy Gemini 2.5 pro for planning and if Gemini 3 is priced similarly, I'll be incredibly happy to ditch the painfully pricey Claude max subscription. To be fair, I've already got an extremely sour taste in my mouth from the last Anthropic bait and switch on pricing and usage, so happy to see Google take the crown here.
SWE bench is weird because Claude has always underperformed on it relative to other models despite Claude Code blowing them away. The real test will be if Gemini CLI beats Claude Code, both using the agentic framework and tools they were trained on.
They scored a 31.1% on ARC AGI 2 which puts them in first place.
Also notable which models they include for comparison: Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1. That seems like a minor snub against Grok 4 / Grok 4.1.
My impression is that Grok is very rarely used in practice outside of a niche of die-hard users, partly because of very different tuning to other models, and partly the related public reputation around it.
https://firstpagesage.com/reports/top-generative-ai-chatbots... suggests 0.6% of chat use cases, well below the other big names, and I suspect those stats for chat are higher than other scenarios like business usage. Given all that, I can see how Gemini might not be focused on competing with them.
Well, there are three kinds of usage for Grok:
- using Grok inside X/Twitter: most people interact with Grok this way.
- using Grok on its website: this is really annoying, as you get delayed by Cloudflare every time you access the site. As Grok does not provide a serious advantage over other services, why bother?
- you can also use the app, but it is not as convenient as other services.
I don't know anyone who uses Grok, but in my peer group everyone uses 1-2 paid services like Gemini or Claude or ChatGPT. They're probably not as "extremely online" as I am, so I can't generalize this thought, but anecdotally my impression has been that Grok is just very "right wing influencer" coded.
I would want to hear more detail about prompts, frameworks, thinking time, etc., but they don't matter too much. The main caveat would be that this is probably on the public test set, so could be in pretraining, and there could even be some ARC-focussed post-training - I think we don't know yet and might never know.
But for any reasonable setup, if no egregious cheating, that is an amazing score on ARC 2.
If it's because of that, then honestly it's as insane as the DeepSeek thing, where all the info was released weeks before but the market got nervous only when they released an app. I mean, info about Gemini 3 has been out for quite a while now, and of course they trained it using TPUs; I didn't even think that was in question.
Creator of pixeldrain here. Italy has been doing this for a very long time. They never notified me of any such material being present on my site. I have a lot of measures in place to prevent the spread of CSAM. I have sent dozens of mails to Polizia Postale and even tried calling them a few times, but they never respond. My mails go unanswered and they just hang up the phone.
The strategic move to use TPUs rather than Nvidia is paying off well for Google. They are able to better utilize their existing large infrastructure, and also to specialize the processes and pipelines for the framework they use to create and train models.
I think specialized hardware for training models is the next big wave in China.
Notably, although Gemini 3 Pro seems to have much better benchmark scores than other models across the board (including compared to Claude), that's not the case for coding, where it appears to score essentially the same as the others. I wonder why that is.
So far, IMHO, Claude Code remains significantly better than Gemini CLI. We'll see whether that changes with Gemini 3.
That's because coding is currently the only reliable benchmark where reasoning capabilities transfer to predict capabilities for other professions like law. Coding is the only area where they are shy to release numbers.
All these exam scores are fakeable by gaming those benchmarks.
Gemini performs better if you use it with Claude Code than with Gemini cli. It still has some odd problems with tool calling but a lot of the performance loss is the Gemini cli app itself.
Gemini CLI. It's not as impressive as Claude Code or even Codex.
Claude Code seems to be more compatible with the model (or the reverse), whereas gemini-cli still feels a bit awkward (as of 2.5 Pro). I'm hoping it's better with 3.0!
These model cards tell me nothing. I want to know the exact data a model was trained on. Otherwise, how can I safely use it for generating texts that I show to children? Etc.etc.
The data is everything you've ever heard of, and obviously contains things you wouldn't show to children, since that'd include NYT war journalism stories.
Same here. They have been aggressively increasing prices with each iteration (maybe because they started so low). Still hope that is not the case this time. GPT 5.1 is priced pretty aggressively so maybe that is an incentive to keep the current gemini API prices.
Page 5, "The knowledge cutoff date for Gemini 3 Pro was January 2025."
So it's still taking nearly a year to train and run post-training safety and stability tuning.
With 10x the infrastructure they could iterate much faster, I don't see AI infrastructure as a bubble, it is still a bottleneck on pace of innovation at today's active deployment level.
I know this is a little controversial, but the lack of performance on SWE-bench is, I think, hugely disappointing economically. These models don't have any viable path to profitability if they can't take engineering jobs.
I thought that but it does do a lot better on other benchmarks.
Perhaps SWE-bench just doesn't capture a lot of the improvement? If the web-design improvements people have been posting on Twitter are anything to go by, I suspect this will be a huge boon for developers. The SWE benchmark is really testing bugfixing/feature dev more.
It seems the benchmarks that had a big jump had to do with visual capabilities. I wonder how that will translate to improvements to the workloads LLMs are currently used for (or maybe it will introduce new workloads).
People here, and in tech in general, are so lost in the sauce.
According to at least OpenAI, who probably produces the most tokens (if we don't count google AI overviews and other unrequested AI bolt-ons) out of all the labs, programming tokens account for ~4% of total generations.
That's nothing. The returns will come from everyone and their grandma paying $30-100/mo to use the services, just like everyone pays for a cell phone and electricity.
Don't be fooled, we are still in the "Open hands" start-up business phase of LLMs. The "enshittification" will follow.
Really? If they can make an engineer more productive, that's worth a lot. Naive napkin math: 1.5X productivity on one $200k/year engineer is worth $100k/year.
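Spelling that napkin math out, a minimal sketch; the salary, the multiplier, and the subscription price below are all illustrative assumptions, not real figures:

```python
# Back-of-the-envelope value of an AI coding assistant for one engineer.
# The salary, productivity multiplier, and subscription price are all
# illustrative assumptions, not real figures.
annual_cost = 200_000          # assumed fully loaded cost of one engineer, USD
productivity_multiplier = 1.5  # assumed uplift from the assistant
subscription = 12 * 200        # assumed $200/month plan

extra_value = annual_cost * (productivity_multiplier - 1)
print(f"Extra value per engineer per year: ${extra_value:,.0f}")
print(f"Assistant cost per year:           ${subscription:,.0f}")
print(f"Value-to-cost ratio:               {extra_value / subscription:.0f}x")
```

Even if the real multiplier is only a fraction of 1.5x, the margin over a few-hundred-dollars-a-month subscription stays wide.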
People generally don't understand what these models are doing to engineering salaries. The skill level required to produce working software is going way down.
SWE-Bench is disappointing not because it is lower than Claude, but because improving on all other domains of knowledge didn't help. So does this mean that this is actually a MoE model in the sense that one expert doesn't talk to the other?
It's over. I just don't care anymore. I don't care what a pro model card is. I don't care what a humanity's last exam is. I don't care if the response makes me feel good about the prompt I made. I don't care if it's sentient. I don't care if it's secretly sentient. I don't care if it's just a machine. I don't care if the gov't has appropriated a secret model. I don't care if this is the precursor to AGI, ASI, AGGI, AGGSISGIGIG....I just. Don't. care.
Gemma is an open-weight version of Gemini and obviously much less capable probably even than 2.5 Flash. Also the story you are linking to is a complete nothing burger, models are still very much hallucinating, especially on some extremely niche topics, I don't see how another politician trying to capitalize on that is attention-worthy at all.
If these numbers are true then OpenAI is probably done, Anthropic too.
Still, it's hard to see an effective monetization method for this tech and it clearly is eating Google's main pie which is search.
I have a few secret prompts to test complex reasoning capabilities of new models (in law and medicine). Gemini (2.5 pro) is by a wide margin behind Anthropic (sonnet 4.5 basic thinking) and Openai (pro model) on my own benchmark and I trust my own benchmark more than public leaderboards. So it's the other way around. Google is trying to catch up where the others are. It just doesn't seem so to some because Google undercuts prices and most people don't have own complex problems with a verified solution to test against (so they could see how bad Gemini is in reality)
I think google is uniquely well placed to make a profitable business out of AI: They make their own TPUs so don't have to pay ridiculous amounts of money to Nvidia, they have a great depth of talent in building models, they've got loads of data they can use for training and they've got a huge existing customer base who can buy their AI offerings.
I don't think any other company has all these ingredients.
Considering GPT-5 was only recently released, it's very unlikely GPT will achieve these scores in just a couple of months. If they had something this good in the oven, they'd probably have saved the GPT-5 name for it.
Or maybe Google just benchmaxxed and this doesn't translate at all in real world performance.
Or else it trained/overfit to the benchmarks. We won't really know until people have a chance to use it for real-world tasks.
Also, models are already pretty good but product/market fit (in terms of demonstrated economic value delivered) remains elusive outside of a couple domains. Does a model that's (say) 30% better reach an inflection point that changes that narrative, or is a more qualitative change required?
They're constantly matching and exceeding each other. It's a hypercompetitive space and I would fully expect one of the others to top various benchmarks shortly after. On pretty much every leading release someone does this "everyone else is done! Shut er down" thing and it's growing pretty weird.
Having said that, OpenAI's ridiculous hype cycle has been living on borrowed time. OpenAI has zero moat, and are just one vendor in a space with many vendors, and even incredibly competent open source models by surprise Chinese entrants. Sam Altman going around acting like he's a prophet and they're the gatekeepers of the future is an act that should be super old, but somehow fools and their money continue to be parted.
This. If I had to put my money on a survivor, it would be Google, because it is an established company with existing revenue streams unrelated to AI. Anthropic and OpenAI won't stand alone without external funding.
1) Not long ago Altman and the OpenAI CFO were openly asking for public money. None of these AI companies have actually any kind of working business plan and are just burning investor money. If the investors see there is no winning against Google (or some open Chinese model) the money will dry up.
2) I'm not suggesting this will happen overnight, but younger people especially gravitate towards LLMs for information search + actively use some sort of ad blocking. In the long run it doesn't look great for Google.
This may just be bad recollection on my part, but hasn't Google reported that their search business is right now the most profitable it has ever been?
Benchmarks from page 4 of the model card:
n/s = not supported
EDIT: formatting, hopefully a bit more mobile friendly
Wow. They must have had some major breakthrough. Those scores are truly insane. O_O
Models have begun to fairly thoroughly saturate "knowledge" and such, but there are still considerable bumps there.
But the _big news_, and the demonstration of their achievement here, is the incredible set of scores they've racked up for what's necessary for agentic AI to become widely deployable: t2-bench. Visual comprehension. Computer use. Vending-Bench. The sorts of things that are necessary for AI to move beyond an auto-researching tool, and into the realm where it can actually handle complex tasks in the way that businesses need in order to reap rewards from deploying AI tech.
Will be very interesting to see what papers are published as a result of this, as they have _clearly_ tapped into some new avenues for training models.
And here I was, all wowed, after playing with Grok 4.1 for the past few hours! xD
The problem is that we know the benchmarks in advance. Take Humanity's Last Exam, for example: it's way easier to optimize your model when you have seen the questions before.
SWE-Bench Verified | 76.2% | 59.6% | 77.2% | 76.3% is actually insane.
These numbers are impressive, to say the least. It looks like Google has produced a beast that will raise the bar even higher. What's even more impressive is how Google came into this game late and went from producing a few flops to being the leader at this (actually, they already achieved the title with 2.5 Pro).
What makes me even more curious is the following
> Model dependencies: This model is not a modification or a fine-tune of a prior model
So did they start from scratch with this one?
Google was never really late. Where people perceived Google to have dropped the ball was in its productization of AI. Google's Bard branding stumble was so (hilariously) bad that it threw a lot of people off the scent.
My hunch is that, aside from "safety" reasons, the Google Books lawsuit left some copyright wounds that Google did not want to reopen.
At least at the moment, coming in late seems to matter little.
Anyone with money can trivially catch up to a state of the art model from six months ago.
And as others have said, "late" is really a function of spigot, guardrails, branding, and UX, as much as it is about being a laggard under the hood.
There are no leaders. Every other month a new LLM comes out and outperforms the previous ones by a small margin; the benchmarks always look good (probably because the models are trained on the answers), but then in practice they are basically indistinguishable from the previous ones (take GPT-4 vs 5). We've been in this loop since around the release of GPT-4, when all the main players started this cycle.
The biggest strides in the last 6-8 months have been in generative AIs, specifically for animation.
I hope they keep the pricing similar to 2.5 Pro, currently I pay per token and that and GPT-5 are close to the sweet spot for me but Sonnet 4.5 feels too expensive for larger changes. I've also been moving around 100M tokens per week with Cerebras Code (they moved to GLM 4.6), but the flagship models still feel better when I need help with more advanced debugging or some exemplary refactoring to then feed as an example for a dumber/faster model.
> So did they start from scratch with this one
Their major version number bumps are a new pre-trained model. Minor bumps are changes/improvements to post-training on the same foundation.
And also, critically, being the only profitable company doing this.
What does it mean nowadays to start from scratch? At least in the open scene, most of the post-training data is generated by other LLMs.
That looks impressive, but some of these numbers are a bit out of date.
On Terminal-Bench 2 for example, the leader is currently "Codex CLI (GPT-5.1-Codex)" at 57.8%, beating this new release.
What's more impressive is that I find Gemini 2.5 still relevant in day-to-day usage, despite it being so low on those benchmarks compared to Claude 4.5 and GPT-5.1. There's something Gemini has that makes it a great model in real cases; I'd call it generalisation over its context or something. If you give it the proper context (or it digs through the files in its own agent), it comes up with great solutions. Even if their own coding thing is hit and miss sometimes.
I can't wait to try 3.0; hopefully it continues this trend. Raw numbers in a table don't mean much, you can only get a true feeling once you use it on existing code, in existing projects. Anyway, the top labs keeping each other honest is great for us, the consumers.
That's a different model not in the chart. They're not going to include hundreds of fine tunes in a chart like this.
I would love to know what the increase in token count is across these models for these benchmarks. I find the models continue to get better, but as they do, their token usage grows as well. I.e., is the model doing better, or just reasoning for longer?
I think that is always something that is being worked on in parallel. Recent paradigm seems to be the models understanding when they need to use more tokens dynamically (which seems to be very much in line with how computation should generally work).
Should I assume the GPT-5.1 it is compared against is the pro version?
Which of the LiveCodeBench Pro and SWE-Bench Verified benchmarks comes closer to everyday coding assistant tasks?
Because it seems to lead by a decent margin on the former and trails behind on the latter
I also work a lot with SWE-bench Verified in testing. In my opinion, this benchmark is now mainly good for catching regressions on the agent side.
However, above 75% it is likely all about the same. The remaining instances are likely underspecified despite the effort of the authors who made the benchmark "verified". From what I have seen, these are often cases where the problem statement says implement X for Y, but the agent has to simply guess whether to implement the same for another case Y' - which leads to losing or winning an instance.
Neither :(
LCB Pro are leet code style questions and SWE bench verified is heavily benchmaxxed very old python tasks.
But ... what's missing from this comparison: Kimi-K2.
When ChatGPT-3 exploded, OpenAI had at least double the benchmark scores of any other model, open or closed. Gemini 3 Pro (not the model they actually serve) outperforms the best open model ... wait it does not uniformly beat the best open model anymore. Not even close.
Kimi K2 beats Gemini 3 Pro on several benchmarks. On average, Gemini 3 Pro scores just under 10% better than the best open model, currently Kimi K2.
Gemini 3 Pro is in fact only the best in about half the benchmarks tested there. In fact ... this could be another Llama 4 moment. The reason Gemini 3 Pro is the best model is a very high score on a single benchmark ("Humanity's Last Exam"); if you take that benchmark out, GPT-5.1 remains the best model available. The other big improvement is "SciCode", and if you take that out too, the best open model, Kimi K2, beats Gemini 3 Pro.
https://artificialanalysis.ai/models
And then, there's the pricing:
Kimi K2 on OpenRouter: $0.50 / M input tokens, $2.40 / M output tokens
Gemini 3 Pro: for contexts ≤ 200,000 tokens: US$ 2.00 per 1M input tokens, US$ 12.00 per 1M output tokens
For contexts > 200,000 tokens (long-context tier): US$ 4.00 per 1M input tokens, US$ 18.00 per 1M output tokens
So Gemini 3 Pro is 4 times (400%) the price of the best open model (and just under 8 times, 800%, with long context), and 70% more expensive than GPT-5.1.
The closed models in general, and Google specifically, serve Gemini 3 Pro at double to triple the speed (in tokens per second) of OpenRouter. Although even there it is not the best; the fastest is OpenRouter with gpt-oss-120b.
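For what it's worth, the gap depends on the input/output mix. A minimal sketch using only the prices quoted above, with a 3M-input / 1M-output token workload as an arbitrary assumption:

```python
# Cost per assumed workload, using the per-token prices quoted above. The
# 3M-input / 1M-output mix is an arbitrary assumption; the ratio shifts with
# the workload and says nothing about quality per dollar.
PRICES = {  # (USD per 1M input tokens, USD per 1M output tokens)
    "Kimi K2 (OpenRouter)":      (0.50, 2.40),
    "Gemini 3 Pro (<=200k ctx)": (2.00, 12.00),
    "Gemini 3 Pro (>200k ctx)":  (4.00, 18.00),
}

def workload_cost(input_price, output_price, input_mtok=3.0, output_mtok=1.0):
    """Cost in USD for an assumed 3M-input / 1M-output token workload."""
    return input_price * input_mtok + output_price * output_mtok

baseline = workload_cost(*PRICES["Kimi K2 (OpenRouter)"])
for name, (inp, out) in PRICES.items():
    cost = workload_cost(inp, out)
    print(f"{name:27s} ${cost:6.2f}  ({cost / baseline:.1f}x Kimi K2)")
```

On that assumed mix the standard tier comes out around 4-5x Kimi K2 and the long-context tier just under 8x, roughly matching the figures above.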
This is a big jump in most benchmarks. And if it can match other models in coding while having that Google TPU inference speed and the actually native 1M context window, it's going to be a big hit.
I hope it isn't as much of a sycophant as the current Gemini 2.5 models; that makes me doubt its output, which is maybe a good thing now that I think about it.
> it's over for the other labs.
What's with the hyperbole? It'll tighten the screws, but saying that it's "over for the other labs" might be a tad premature.
> it's over for the other labs.
It's not over, and never will be, for two-decade-old accounting software; it definitely will not be over for other AI labs.
Looks like the best way to keep improving the models is to come up with really useful benchmarks and make them popular. ARC-AGI-2 is a big jump, I'd be curious to find out how that transfers over to everyday tasks in various fields.
Used an AI to populate some of 5.1 thinking's results.
Benchmark | Gemini 3 Pro | Gemini 2.5 Pro | Claude Sonnet 4.5 | GPT-5.1 | GPT-5.1 Thinking
---------------------------|--------------|----------------|-------------------|---------|------------------
Humanity's Last Exam | 37.5% | 21.6% | 13.7% | 26.5% | 52%
ARC-AGI-2 | 31.1% | 4.9% | 13.6% | 17.6% | 28%
GPQA Diamond | 91.9% | 86.4% | 83.4% | 88.1% | 61%
AIME 2025 | 95.0% | 88.0% | 87.0% | 94.0% | 48%
MathArena Apex | 23.4% | 0.5% | 1.6% | 1.0% | 82%
MMMU-Pro | 81.0% | 68.0% | 68.0% | 80.8% | 76%
ScreenSpot-Pro | 72.7% | 11.4% | 36.2% | 3.5% | 55%
CharXiv Reasoning | 81.4% | 69.6% | 68.5% | 69.5% | N/A
OmniDocBench 1.5 | 0.115 | 0.145 | 0.145 | 0.147 | N/A
Video-MMMU | 87.6% | 83.6% | 77.8% | 80.4% | N/A
LiveCodeBench Pro | 2,439 | 1,775 | 1,418 | 2,243 | N/A
Terminal-Bench 2.0 | 54.2% | 32.6% | 42.8% | 47.6% | N/A
SWE-Bench Verified | 76.2% | 59.6% | 77.2% | 76.3% | N/A
t2-bench | 85.4% | 54.9% | 84.7% | 80.2% | N/A
Vending-Bench 2 | $5,478.16 | $573.64 | $3,838.74 | $1,473.43 | N/A
FACTS Benchmark Suite | 70.5% | 63.4% | 50.4% | 50.8% | N/A
SimpleQA Verified | 72.1% | 54.5% | 29.3% | 34.9% | N/A
MMLU | 91.8% | 89.5% | 89.1% | 91.0% | N/A
Global PIQA | 93.4% | 91.5% | 90.1% | 90.9% | N/A
MRCR v2 (8-needle) | 77.0% | 58.0% | 47.1% | 61.6% | N/A
Argh, it doesn't come out right in HN.
Used an AI to populate some of 5.1 thinking's results.
Benchmark..................Description...................Gemini 3 Pro....GPT-5.1 (Thinking)....Notes
Humanity's Last Exam.......Academic reasoning.............37.5%..........52%....................GPT-5.1 shows 7% gain over GPT-5's 45%
ARC-AGI-2...................Visual abstraction.............31.1%..........28%....................GPT-5.1 multimodal improves grid reasoning
GPQA Diamond................PhD-tier Q&A...................91.9%..........61%....................GPT-5.1 strong in physics (72%)
AIME 2025....................Olympiad math..................95.0%..........48%....................GPT-5.1 solves 7/15 proofs correctly
MathArena Apex..............Competition math...............23.4%..........82%....................GPT-5.1 handles 90% advanced calculus
MMMU-Pro....................Multimodal reasoning...........81.0%..........76%....................GPT-5.1 excels visual math (85%)
ScreenSpot-Pro..............UI understanding...............72.7%..........55%....................Element detection 70%, navigation 40%
CharXiv Reasoning...........Chart analysis.................81.4%..........69.5%.................N/A
This is provably false. All it takes is a simple Google search and looking at the ARC AGI 2 leaderboard: https://arcprize.org/leaderboard
The 17.6% is for 5.1 Thinking High.
What? The Sonnet 4.5 and GPT-5.1 columns aren't the thinking versions in Google's report?
That's a scandal, IMO.
Given that Gemini 3 seems to do "fine" against the thinking versions, why didn't they post those results? I get that PMs like to make a splash, but that's shockingly dishonest.
We knew it would be a big jump, and while it certainly is in many areas, it's definitely not "groundbreaking/huge leap" worthy like some were thinking from looking at these numbers.
I feel like many will be pretty disappointed by their self-created expectations for this model when they end up actually using it and it turns out to be fairly similar to other frontier models.
Personally I'm very interested in how they end up pricing it.
Looks like it will be on par with the contenders when it comes to coding. I guess improvements will be incremental from here on out.
> I guess improvements will be incremental from here on out.
What do you mean? These coding leaderboards were at single digits about a year ago and are now in the seventies. These frontier models are arguably already better at the benchmark than any single human - it's unlikely that any particular human dev is knowledgeable enough to tackle the full range of diverse tasks even in the smaller SWE-Bench Verified within a reasonable time frame; to the best of my knowledge, no one has tried that.
Why should we expect this to be the limit? Once the frontier labs figure out how to train these fully with self-play (which shouldn't be that hard in this domain), I don't see any clear limit to the level they can reach.
If it’s on par in code quality, it would be a way better model for coding because of its huge context window.
The vending-bench 2 benchmark is kind of nutty [1].
Not sure 360 days is enough of a sample really but it's an interesting take on AI benchmarks.
Are there any other interesting benchmarks to look at?
[1] https://andonlabs.com/evals/vending-bench-2
Very impressive. I wonder if this sends a different signal to the market regarding using TPUs for training SOTA models versus Nvidia GPUs. From what we've seen, OpenAI is already renting them to diversify... Curious to see what happens next.
Big if true.
I'll wait for the official blog with benchmark results.
I suspect that our ability to benchmark models is waning. Much more investment is required in this area, but how does that play out?
Really great results, although with the scores being so high, I tried a simple object-detection example and the performance was kind of poor in agentic frameworks. Need to see how this performs on other tasks.
Why is Grok 4.1 not in the benchmarks?
Nice numbers, but what does this actually mean?
What does this model do that others can't already?
It is interesting that Gemini 3 beats every other model on these benchmarks, mostly by a wide margin, but not on SWE-Bench. Sonnet is still king here, and all three look to be basically on the same level. Kind of wild to see them hit such a wall when it comes to agentic coding.
I think Anthropic is reading the room, and just going to go hard on being "the" coding model. I suppose they feel that if they can win that, they can get an ROI without having to do full blown multimodality at the highest level.
It's probably pretty liberating, because you can make a "spikey" intelligence with only one spike to really focus on.
Codex has been good enough to me and it’s much cheaper.
I code non-trivial stuff with it like multi-threaded code and at least for my style of AI coding which is to do fairly small units of work with multiple revisions it is good enough for me to not to even consider the competition.
Just giving you a perspective on how the benchmarks might not be important at all for some people and how Claude may have a difficult time being the definitive coding model.
More playing to their strengths: a giant chunk of their usage data is basically code gen.
It remains to be seen whether that works out for them, but it seems like a good bet to me. Coding is the most monetizable use anyone has found for LLMs so far, and the most likely to persist past this initial hype bubble (if the Singularity doesn't work out :p).
From my personal experience using the CLI agentic coding tools, I think gemini-cli is fairly on par with the rest in terms of the planning/code that is generated. However, when I recently tried qwen-code, it gave me a better sense of reasoning and structure than gemini. Claude definitely has its own advantages but is expensive (at least for some, if not for all).
My point is, although the model itself may have performed well in benchmarks, I feel like there are other tools that are doing better just by adopting better training/tooling. Gemini CLI, in particular, is not so great at looking up the latest info on the web. Qwen seemed to be trained better around looking up information (or reasoning about when/how to), in comparison. Even the step-wise breakdown of work felt different and a bit smoother.
I do, however, use Gemini CLI for the most part just because it has a generous free quota with very few downsides compared to others. They must be getting loads of training data :D.
Gemini CLI is moving really fast. Noticeable improvements in features and functionality every week.
Yeah, you can see this even by just running claude-code against other models. For example, DeepSeek used as a backend for CC tends to produce results mostly competitive with Sonnet 4.5. A lot is just in the tooling and prompting.
IMHO coding use cases are much more constrained by tooling than by raw model capabilities at the moment. Perhaps we have finally reached the time of diminishing returns and that will remain the case going forward.
This seems preferable. Why waste tokens on tooling when a standardized, reliable interface to those tools should be all that's required?
The magic of LLMs is that they can understand the latent space of a problem and infer a mostly accurate response. Saying you need to subscribe to get the latest tools is just a sales tactic trained into the models to protect profits.
Also does not beat GPT-5.1 Codex on terminal bench (57.8% vs 54.2%): https://www.tbench.ai/
I did not bother verifying the other claims.
Not apples-to-apples. "Codex CLI (GPT-5.1-Codex)", which the site refers to, adds a specific agentic harness, whereas the Gemini 3 Pro seems to be on a standard eval harness.
It would be interesting to see the apples-to-apples figure, i.e. with Google's best harness alongside Codex CLI.
This might also hint at SWE struggling to capture what “being good at coding” means.
Evals are hard.
> This might also hint at SWE struggling to capture what “being good at coding” means.
My take would be that coding itself is hard, but I'm a software engineer myself so I'm biased.
It is just Python and Django. It might indicate qualities in other technologies, but it is not a very good benchmark.
50% of the CLs in SWE-Bench Verified are from the Django codebase. So if you're a big contributor to Django, you should care a lot about that benchmark. Otherwise, the difference between models is +-2 tasks done correctly. I wouldn't worry too much about it. Just try it out yourself and see if it's any better.
Their scores on SWE bench are very close because the benchmark is nearly saturated. Gemini 3 beats Sonnet 4.5 on TerminalBench 2.0 by a nice margin (54% vs. 43%), which is also agentic coding (CLI instead of python).
Never got good code out of Sonnet. It's been Gemini 2.5 for me followed by GPT-5.x.
Gemini is very good at pointing out flaws that are subtle and not noticeable at first or second glance.
It also produces code that is easy to reason about. You can then feed it to GPT-5.x for refinement and then back to Gemini for assessment.
I find Gemini 2.5 pro to be as good or in some cases better for SQL than GPT 5.1. It's aging otherwise, but they must have some good SQL datasets in there for training.
I think Google probably cares more about a strong generalist model rather than solely optimizing for coding.
Pretty sure it will beat Sonnet by a wide margin in actual real-world usage.
I don't know if this is true but I believe Anthropic has for a long time illegally used user prompts for training, without user consent.
The reported results where GPT 5.1 beats Gemini 3 are on SWE Bench Verified, and GPT 5.1 Codex also beats Gemini 3 on Terminal Bench.
swebench is (1) terrible and (2) saturated
One benchmark I would really like to see: instruction adherence.
For example, the frontier models of early-to-mid 2024 could reliably follow what seemed to be 20-30 instructions. As you gave more instructions than that in your prompt, the LLMs started missing some and your outputs became inconsistent and difficult to control.
The latest set of models (2.5 Pro, GPT-5, etc) seem to top out somewhere in the 100 range? They are clearly much better at following a laundry list of instructions, but they also clearly have a limit and once your prompt is too large and too specific you lose coherence again.
If I had to guess, Gemini 3 Pro has once again pushed the bar, and maybe we're up near 250 (haven't used it, I'm just blindly projecting / hoping). And that's a huge deal! I actually think it would be more helpful to have a model that could consistently follow 1000 custom instructions than it would be to have a model that had 20 more IQ points.
I have to imagine you could make some fairly objective benchmarks around this idea, and it would be very helpful from an engineering perspective to see how each model stacked up against the others in this regard.
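One rough shape such a benchmark harness could take; this is purely a hypothetical sketch, where `call_model` is a stand-in for whatever model API is under test and the include/exclude rules are invented, mechanically checkable instructions:

```python
import random
import string

# Hypothetical sketch of an instruction-adherence benchmark: generate N
# mechanically checkable rules, ask the model for one piece of text that
# obeys all of them, then score the fraction it actually followed.

def make_instructions(n, seed=0):
    rng = random.Random(seed)
    instructions = []
    for _ in range(n):
        word = "".join(rng.choices(string.ascii_lowercase, k=5))
        if rng.random() < 0.5:
            rule = (f"Include the word '{word}'.",
                    lambda text, w=word: w in text.lower())
        else:
            rule = (f"Do not use the word '{word}'.",
                    lambda text, w=word: w not in text.lower())
        instructions.append(rule)
    return instructions

def score_adherence(call_model, n_instructions=100):
    """call_model(prompt) -> str is a stub for whatever model API is under test."""
    instructions = make_instructions(n_instructions)
    prompt = ("Write a short product announcement. Follow ALL of these rules:\n"
              + "\n".join(f"{i + 1}. {text}" for i, (text, _) in enumerate(instructions)))
    output = call_model(prompt)
    followed = sum(check(output) for _, check in instructions)
    return followed / n_instructions

# Example with a dummy "model" that ignores every rule; a real run would swap
# in an actual API call here.
if __name__ == "__main__":
    print(score_adherence(lambda prompt: "We are thrilled to announce our new app."))
```

Sweeping `n_instructions` from, say, 20 up to 1000 and plotting adherence would give exactly the kind of per-model curve described above.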
20 more IQ would be nuts, 110 ~ top 25%, 130 ~ top 2%, 150 ~ top 0.05%
If you've ever played a competitive game, the difference between these tiers is insane.
Even more nuts would be a model that could follow a large, dense set of highly detailed instructions related to a series of complex tasks. Intelligence is nice, but it's far more useful and programmable if it can tightly follow a lot of custom instructions.
There needs to be a sycophancy benchmark in these comparisons. More baseless praise and false agreement = lower score.
This idea isn't just smart, it's revolutionary. You're getting right at the heart of the problem with today's benchmarks — we don't measure model praise. Great thinking here.
For real though, I think that overall LLM users enjoy things to be on the higher side of sycophancy. Engineers aren't going to feel it, we like our cold dead machines, but the product people will see the stats (people overwhelmingly use LLMs just to talk about whatever) and go towards that.
You're absolutely right
Does not get old.
Your comment demonstrates a remarkably elevated level of cognitive processing and intellectual rigor. Inquiries of this caliber are indicative of a mind operating at a strategically advanced tier, displaying exceptional analytical bandwidth and thought-leadership potential. Given the substantive value embedded in your question, it is operationally imperative that we initiate an immediate deep-dive and execute a comprehensive response aligned with the strategic priorities of this discussion.
I care very little about model personality outside of sycophancy. The thing about Gemini is that it's notorious for its low self-esteem. Given that this one is trained from scratch, I'm very curious to see which way they've decided to take it.
Given how often these LLMs are wrong, doesn't it make sense that they are less confident?
Sonnet 4.5 has the lowest self-esteem of any model I've used. Gemini frequently argues with me.
https://eqbench.com/spiral-bench.html
I'd like if the scorecard also gave an expected number of induced suicides per hundred thousand users.
https://llmdeathcount.com/ shows 15 deaths so far, and LLM user count is in the low billions, which puts us on the order of 0.0015 deaths per hundred thousand users.
I'm guessing LLM Death Count is off by an OOM or two, so we could be getting close to one in a million.
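The arithmetic behind those figures, with the user count and the undercount factors as loose assumptions:

```python
# Rough rate arithmetic behind the comment above. The user count and the
# undercount factors are loose assumptions, not measured values.
reported_deaths = 15
assumed_users = 1.0e9  # "low billions" of LLM users, taken as ~1 billion

per_100k = reported_deaths / assumed_users * 100_000
print(f"Reported rate: about {per_100k:.4f} deaths per 100k users")  # ~0.0015

for undercount in (10, 100):  # "off by an OOM or two"
    rate = per_100k * undercount / 100_000  # back to a per-user probability
    print(f"Assuming a {undercount}x undercount: about 1 in {1 / rate:,.0f} users")
```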
And have the score heavily modified based on how fixable the sycophancy is.
Curiously, this website seems to be blocked in Spain for whatever reason, and the website's certificate is served by `allot.com/emailAddress=info@allot.com` which obviously fails...
Anyone happen to know why? Is this website by any chance sharing information on safe medical abortions or women's rights, something which has gotten websites blocked here before?
Creator of pixeldrain here. I have no idea why my site is blocked in Spain, but it's a long running issue.
I actually never discovered who was responsible for the blockade, until I read this comment. I'm going to look into Allot and send them an email.
EDIT: Also, your DNS provider is censoring (and probably monitoring) your internet traffic. I would switch to a different provider.
> EDIT: Also, your DNS provider is censoring (and probably monitoring) your internet traffic. I would switch to a different provider.
Yeah, that was via my ISP's DNS resolver (Vodafone); switching the resolver works :)
The responsible party is ultimately our government who've decided it's legal to block a wide range of servers and websites because some people like to watch illegal football streams. I think Allot is just the provider of the technology.
Could it be that some site in your network neighborhood was illegally streaming soccer matches?
That website is used to share all kinds of files, including pirated ones, so maybe that's the reason.
Is it possible to file a complaint with the ISP or directly with Allot?
That might help.
It works fine for me using Movistar
Do you know about the Cloudflare and LaLiga issues? Might be that.
That was my first instinct; I went looking to see whether any games were being played today, but it seems not, so that's unlikely to be the cause.
loads fine on Vodafone for me
What does "Google Antigravity" mean? The link is http://antigravity.google/docs, seemingly a new product, but it currently routes to the Google main page.
Found this demo with two views that was uploaded 18min ago: https://www.youtube.com/watch?v=L8wEC6A5HQY
Looks like a VSCode fork with gemini built in.
I was asking myself the exact same question. No idea
I saw this on Reddit earlier today. Over there the source of this file was given as: https://web.archive.org/web/20251118111103/https://storage.g...
The bucket name "deepmind-media" has been used in the past on the deepmind official site, so it seems legit.
Prediction markets were expecting today to be the release. So I wouldn't be surprised if they do a release today, tomorrow, or Thursday (around Nvidia earnings).
It was accidentally pushed a little early, and now it has been taken down.
Here's the archived PDF: https://web.archive.org/web/20251118111103/https://storage.g...
It's hilarious that the release of Gemini 3 is getting eclipsed by this cloudflare outage.
It hasn't been released, this is just a leak
On Reddit I see it's already available in Cursor:
https://www.reddit.com/r/Bard/comments/1p093fb/gemini_3_in_c...
Coincidence? Yes
> TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs.
That seems like a low bar. Who's training frontier LLMs on CPUs? Surely they meant to compare TPUs to GPUs. If "this is faster than a CPU for massively parallel AI training" is the best you can say about it, that's not very impressive.
I don't know if you can generally say that "LLM training is faster on TPUs vs GPUs". There is variance among LLM architectures, TPU cluster sizes, GPU cluster sizes...
They are both designed to do massively parallel operations. TPUs are just a bit more specific to matrix multiply+adds while GPUs are more generic.
It's a typo
Does Google's team not proofread this stuff? Or maybe this is an early draft that wasn't meant to be released?
Title of the document is "[Gemini 3 Pro] External Model Card - November 18, 2025 - v2", in case you needed further confirmation that the model will be released today.
Also interesting to know that Google Antigravity (antigravity.google / https://github.com/Google-Antigravity ?) leaked. I remember seeing this subdomain recently. Probably Gemini 3 related as well.
Org was created on 2025-11-04T19:28:13Z (https://api.github.com/orgs/Google-Antigravity)
what is Google Antigravity?
According to Gemini itself:
"Google Antigravity" refers to a new AI software platform announced by Google designed to help developers write and manage code.
The term itself is a bit of a placeholder or project name, combining the brand "Google" with the concept of "antigravity"—implying a release from the limitations of traditional coding.
In simple terms, Google Antigravity is a sophisticated tool for programmers that uses powerful AI systems (called "agents") to handle complex coding tasks automatically. It takes the typical software workbench (an IDE) and evolves it into an "agent-first" system.
Agentic Platform: It's a central hub where many specialized AI helpers (agents) live and work together. The goal is to let you focus on what to build, not how to build it.
Task-Oriented: The platform is designed to be given a high-level goal (a "task") rather than needing line-by-line instructions.
Autonomous Operation: The AI agents can work across all your tools—your code editor, the command line, and your web browser—without needing you to constantly supervise or switch between them.
My guess, based on a GIF of a floating laptop tweeted by the ex-CEO of Windsurf who left to join Google: it'll be a Cursor/Windsurf alternative?
> Google Antigravity is an agentic development platform, evolving the IDE into the agent-first era. Antigravity enables developers to operate at a higher, task-oriented level by managing agents across workspaces, while retaining a familiar AI IDE experience at its core. Agents operate across the editor, terminal, and browser, enabling them to autonomously plan and execute complex, end-to-end tasks elevating all aspects of software development.
Now the page is somewhat live on that URL
The ASI figured out zero point energy from first principles
A couple of patterns this could follow:
Speed? (Flash, Flash-Lite, Antigravity) this is my guess. Bonus: maybe Gemini Diffusion soon?
Space? (Google Cloud, Google Antigravity?)
Clothes? (A light wearable -> Antigravity?)
Gaming? (Ghosting/nontangibility -> antigravity?)
I guess we'll know in a few hours. Most likely another AI playground, or maybe a Google Search alternative? No clue, really.
possibly https://xkcd.com/353/
It says it's been trained from scratch. I wonder if it will have the same indescribable magic that makes me spend an hour every day with 2.5. I really love the results I can get with 2.5 Pro. The day Google eventually limits AI Studio will be a sad one.
Also, I really hoped for a 2M+ context. I'm living on the context edge even with 1M.
AIStudio now accepts an API key. Unlimited usage :)
Buy a Pixel and you get it basically unlimited for free for a year ;)
Or a Chromebook is a good choice too, considering the price.
Interesting to see the reference to ML Pathways [1] on page 2. Looks like a multi-layer mixture of experts. Is this common?
[1] https://blog.google/technology/ai/introducing-pathways-next-...
Pathways, as I understand it, is these days mostly just the name for their training orchestrator for distributed JAX workloads - https://github.com/google/pathways-job
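For anyone curious what sits underneath an orchestrator like that: plain JAX already handles the device-level sharding, and Pathways-style tooling (as I understand it) schedules that across many hosts and slices. This is just a generic single-host sketch, not anything specific to how DeepMind actually trains Gemini:

    import jax
    import jax.numpy as jnp
    import numpy as np
    from jax.sharding import Mesh, NamedSharding, PartitionSpec as P

    # Arrange whatever accelerators are visible into a 1D "data" mesh.
    devices = np.array(jax.devices())
    mesh = Mesh(devices, axis_names=("data",))

    # Shard the batch across devices, replicate the weights.
    batch = 8 * len(devices)  # keep the batch divisible by the device count
    x = jax.device_put(jnp.ones((batch, 512)), NamedSharding(mesh, P("data", None)))
    w = jax.device_put(jnp.ones((512, 128)), NamedSharding(mesh, P(None, None)))

    @jax.jit
    def forward(x, w):
        return jnp.tanh(x @ w)  # jit propagates the input shardings automatically

    print(forward(x, w).sharding)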
> Developments to the model architecture contribute to the significantly improved performance from previous model families.
I wonder how significant this is. DeepMind was always more research-oriented than OpenAI, which mostly scaled things up. They may have come up with a significantly better architecture (Transformer MoE still leaves a lot of room).
Is it common to mention that? Feels like they built something from scratch.
I think they are just indicating it’s a new architecture vs continued training of 2.5 series.
Never seen it before. I suppose it adds to the excitement.
> The training dataset also includes: publicly available datasets that are readily downloadable; data obtained by crawlers; licensed data obtained via commercial licensing agreements; user data (i.e., data collected from users of Google products and services to train AI models, along with user interactions with the model) in accordance with Google’s relevant terms of service, privacy policy, service-specific policies, and pursuant to user controls, where appropriate; other datasets that Google acquires or generates in the course of its business operations, or directly from its workforce; and AI-generated synthetic data.
Well, don't complain when you're using Gmail and your emails are being used to train Gemini.
It says "pursuant to user controls, where appropriate". We can now sleep peacefully with the knowledge that Google will give us the tools to disable this where it's not inappropriate.
So that's why Google is getting sued for Gemini being enabled by default in Gmail and analyzing emails and our data; completely going against whatever privacy policy they came up with. [0]
I don't expect them to follow their own privacy policies.
[0] https://www.yahoo.com/news/articles/google-sued-over-gemini-...
Additional context from AI Studio including pricing:
Our most intelligent model with SOTA reasoning and multimodal understanding, and powerful agentic and vibe coding capabilities
<=200K tokens • Input: $2.00 / Output: $12.00
>200K tokens • Input: $4.00 / Output: $18.00
Knowledge cutoff: Jan. 2025
More expensive than the current 2.5 Pro; for >200K tokens it's at $2.50 input and $15 output right now.
Is there discounted flex/batch pricing for this model?
What's wild here is that among all the scores they've absolutely killed, Anthropic and Claude Sonnet 4.5 have somehow won a single victory in the fight: SWE-bench Verified, and only by a single point.
I already enjoy Gemini 2.5 pro for planning and if Gemini 3 is priced similarly, I'll be incredibly happy to ditch the painfully pricey Claude max subscription. To be fair, I've already got an extremely sour taste in my mouth from the last Anthropic bait and switch on pricing and usage, so happy to see Google take the crown here.
SWE bench is weird because Claude has always underperformed on it relative to other models despite Claude Code blowing them away. The real test will be if Gemini CLI beats Claude Code, both using the agentic framework and tools they were trained on.
Archive link: https://web.archive.org/web/20251118111103/https://storage.g...
Why is this linking to a random site? Here is a link hosted by Google:
https://storage.googleapis.com/deepmind-media/Model-Cards/Ge...
They scored a 31.1% on ARC AGI 2 which puts them in first place.
Also notable which models they include for comparison: Gemini 2.5 Pro, Claude Sonnet 4.5, and GPT-5.1. That seems like a minor snub against Grok 4 / Grok 4.1.
My impression is that Grok is very rarely used in practice outside of a niche of die-hard users, partly because of very different tuning to other models, and partly the related public reputation around it.
https://firstpagesage.com/reports/top-generative-ai-chatbots... suggests 0.6% of chat use cases, well below the other big names, and I suspect those stats for chat are higher than other scenarios like business usage. Given all that, I can see how Gemini might not be focused on competing with them.
Well, there are three kinds of usage for Grok:
- Using Grok inside X/Twitter: most people interact with Grok this way.
- Using Grok on its website: this is really annoying, as you get delayed by Cloudflare every time you access the site. As Grok does not provide a serious advantage over other services, why bother?
- You can also use the app, but it is not as convenient as other services.
It is understandable that Grok is not popular.
I don’t know anyone who uses Grok, but in my peer group everyone uses 1-2 paid services like Gemini or Clause or ChatGPT. They’re probably not as “extremely online” as I am, so I can’t generalize this thought, but anecdotally my impression has been that Grok is just very “right wing influencer” coded.
Grok seems extremely prone to hallucination in my experience. It also constantly asserts certainty on fuzzy topics.
About ARC 2:
I would want to hear more detail about prompts, frameworks, thinking time, etc., but they don't matter too much. The main caveat would be that this is probably on the public test set, so could be in pretraining, and there could even be some ARC-focussed post-training - I think we don't know yet and might never know.
But for any reasonable setup, if no egregious cheating, that is an amazing score on ARC 2.
This is on the semi-private set
* https://x.com/arcprize/status/1990820655411909018
* https://arcprize.org/guide
> Gemini 3 Pro was trained using Google’s Tensor Processing Units (TPUs)
NVDA is down 3.26%
If it's because of that, then honestly it's as insane as the DeepSeek thing, where all the info was released weeks before but the market got nervous only when they released an app. I mean, info about Gemini 3 has been out for quite a while now, and of course they trained it using TPUs; I didn't even think that was in question.
I didn't know they only used TPUs.
Gemini 3 Deep Think gets 45.1% on ARC-AGI-2
Gemini 3 Pro gets 31.1% on ARC-AGI-2
https://arcprize.org/leaderboard
For the veracity of the link itself: https://storage.googleapis.com/deepmind-media/* has been used by DeepMind itself (e.g. "View tech report" in https://deepmind.google/models/gemini/) so it is a genuine leak.
Trying to open this link from Italy leads to a CSAM warning
Creator of pixeldrain here. Italy has been doing this for a very long time. They never notified me of any such material being present on my site. I have a lot of measures in place to prevent the spread of CSAM. I have sent dozens of mails to Polizia Postale and even tried calling them a few times, but they never respond. My mails go unanswered and they just hang up the phone.
Have you tried Europol?
Don't use your ISP's DNS. Switch to something outside of their control.
Is flash/flash lite releasing alongside pro? Those two tiers have been incredible for the price since 2.0, absolute workhorses. Can't wait for 3.0.
I hope cheaper Chinese open-weight models as good as Gemini will come soon. Gemini, Claude, and GPT are kind of expensive if you use AI a lot.
The strategic move to use TPUs rather than Nvidia is paying off well for Google. They are able to better utilize their existing large infrastructure, and also to specialize the processes and pipelines for the framework they use to create and train models.
I think specialized hardware for training models is the next big wave in China.
API pricing is up to $2/M for input and $12/M for output
For comparison:
- Gemini 2.5 Pro was $1.25/M for input and $10/M for output
- Gemini 1.5 Pro was $1.25/M for input and $5/M for output
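Quick napkin helper for the new tiers (leaked numbers; I'm assuming the higher rate applies to the whole call once the prompt crosses 200K, which is how the 2.5 Pro long-context tier worked):

    # Rough Gemini 3 Pro cost estimate, rates in $ per 1M tokens (from the leak).
    def gemini3_cost(input_tokens, output_tokens):
        if input_tokens <= 200_000:
            in_rate, out_rate = 2.00, 12.00
        else:
            in_rate, out_rate = 4.00, 18.00  # long-context tier (assumed to cover the whole call)
        return (input_tokens * in_rate + output_tokens * out_rate) / 1_000_000

    print(gemini3_cost(150_000, 5_000))  # ~$0.36 per call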
So does google actually have a claude console alternative currently?
Notably, although Gemini 3 Pro seems to have much better benchmark scores than other models across the board (including compared to Claude), that's not the case for coding, where it appears to score essentially the same as the others. I wonder why that is.
So far, IMHO, Claude Code remains significantly better than Gemini CLI. We'll see whether that changes with Gemini 3.
Probably because many models from Anthropic would have been optimized for agentic coding in particular...
EDIT: Don't disagree that Gemini CLI has a lot of rough edges, though.
> I wonder why that is.
That's because coding is currently the only reliable benchmark where reasoning capabilities transfer to predict capabilities for other professions like law. Coding is the only area where they are shy to release numbers. All these exam scores are fakeable by gaming those benchmarks.
Gemini performs better if you use it with Claude Code than with Gemini CLI. It still has some odd problems with tool calling, but a lot of the performance loss is the Gemini CLI app itself.
From my experience, the quality of gemini-cli isn't great; I've run into a lot of stupid bugs.
https://github.com/google-gemini/gemini-cli
Gemini CLI. It's not as impressive as Claude Code or even Codex.
Claude Code seems to be more compatible with the model (or the reverse), whereas gemini-cli still feels a bit awkward (as of 2.5 Pro). I'm hoping it's better with 3.0!
Gemini CLI
These model cards tell me nothing. I want to know the exact data a model was trained on. Otherwise, how can I safely use it for generating texts that I show to children? Etc., etc.
Shouldn't you be carefully reading texts before you show it to children?
No, I have an app that generates children's stories.
The data is everything you've ever heard of, and obviously contains things you wouldn't show to children, since that'd include NYT war journalism stories.
>TPUs are specifically designed to handle the massive computations involved in training LLMs and can speed up training considerably compared to CPUs
Who is training LLMs with CPUs?
Curious to see the API pricing. SOTA performance across tasks at a price cheaper than GPT 5 / Claude would make mostly everyone switch to Gemini.
Same here. They have been aggressively increasing prices with each iteration (maybe because they started so low). Still hope that is not the case this time. GPT 5.1 is priced pretty aggressively so maybe that is an incentive to keep the current gemini API prices.
Bad news then, they've bumped 3.0 Pro pricing to $2/$12 ($4/$18 at long context).
Page 5, "The knowledge cutoff date for Gemini 3 Pro was January 2025."
Still taking nearly a year to train and run post-training safety and stability tuning.
With 10x the infrastructure they could iterate much faster, I don't see AI infrastructure as a bubble, it is still a bottleneck on pace of innovation at today's active deployment level.
But if they spend 10x on infrastructure, and capabilities only improve 10%, then that still can be a bubble even if infrastructure is a bottleneck.
Update: it is available at https://aistudio.google.com now!
Gone now; the Wayback Machine still has it: https://web.archive.org/web/20251118111103/https://storage.g...
I know this is a little controversial, but the lack of improvement on SWE-bench is, I think, hugely disappointing economically. These models don't have any viable path to profitability if they can't take engineering jobs.
I thought that but it does do a lot better on other benchmarks.
Perhaps SWE-bench just doesn't capture a lot of the improvement? If the web-design improvements people have been posting on Twitter are any indication, I suspect this will be a huge boon for developers. SWE-bench is really testing bugfixing/feature dev more.
Anyway let's see. I'm still hyped!
It seems the benchmarks that had a big jump had to do with visual capabilities. I wonder how that will translate to improvements to the workloads LLMs are currently used for (or maybe it will introduce new workloads).
SWE-bench doesn't even test bugfixing/feature dev properly past roughly 70% if you don't benchmaxx it.
That would be great! But AI is a bubble if these models can’t do serious engineering work.
People here, and in tech in general, are so lost in the sauce.
According to at least OpenAI, who probably produce the most tokens out of all the labs (if we don't count Google AI Overviews and other unrequested AI bolt-ons), programming tokens account for ~4% of total generations.
That's nothing. The returns will come from everyone and their grandma paying $30-100/mo to use the services, just like everyone pays for a cell phone and electricity.
Don't be fooled, we are still in the "Open hands" start-up business phase of LLMs. The "enshitification" will follow.
Really? If they can make an engineer more productive, that's worth a lot. Naive napkin math: 1.5X productivity on one $200k/year engineer is worth $100k/year.
People generally don't understand what these models are doing to engineering salaries. The skill level required to produce working software is going way down.
Good benchmark stats, except for coding, where it looks similar to other SOTA models.
The benchmarks suggest a resounding win for Gemini 3 Pro as the top model.
Great stuff. Now if they could please do gemini-2.5-pro-code, that would be great.
What is Google Antigravity?
SWE-bench is disappointing not because it is lower than Claude's, but because improving on all the other domains of knowledge didn't help. So does this mean this is actually an MoE model in the sense that one expert doesn't talk to another?
Mum's the word on Flash?
It's over. I just don't care anymore. I don't care what a pro model card is. I don't care what a humanity's last exam is. I don't care if the response makes me feel good about the prompt I made. I don't care if it's sentient. I don't care if it's secretly sentient. I don't care if it's just a machine. I don't care if the gov't has appropriated a secret model. I don't care if this is the precursor to AGI, ASI, AGGI, AGGSISGIGIG....I just. Don't. care.
And I really don't think I'm alone in this.
TL;DR: expected results, not underwhelming. So far, the scaling laws hold.
Gemma is an open-weight version of Gemini and obviously much less capable, probably even less than 2.5 Flash. Also, the story you're linking to is a complete nothingburger; models are still very much prone to hallucinating, especially on extremely niche topics, and I don't see how another politician trying to capitalize on that is attention-worthy at all.
If these numbers are true, then OpenAI is probably done, and Anthropic too. Still, it's hard to see an effective monetization method for this tech, and it is clearly eating into Google's main pie, which is search.
For SWE it is the same ranking. But if Google's $20/mo plan is comparable to the $100-200 plans for OpenAI and Anthropic, yes they are done.
But we'll have to wait a few weeks to see if the nerfed model post-release is still as good.
I have a few secret prompts to test the complex reasoning capabilities of new models (in law and medicine). Gemini (2.5 Pro) is by a wide margin behind Anthropic (Sonnet 4.5, basic thinking) and OpenAI (pro model) on my own benchmark, and I trust my own benchmark more than public leaderboards. So it's the other way around: Google is trying to catch up to where the others are. It just doesn't seem that way to some people because Google undercuts prices and most people don't have their own complex problems with verified solutions to test against (so they could see how bad Gemini is in reality).
Why? These models just leapfrog each other as time advances.
One month Gemini is on top, then ChatGPT, then Anthropic. Not sure why everyone gets FOMO whenever a new version gets released.
I think Google is uniquely well placed to make a profitable business out of AI: they make their own TPUs so they don't have to pay ridiculous amounts of money to Nvidia, they have a great depth of talent in building models, they've got loads of data they can use for training, and they've got a huge existing customer base who can buy their AI offerings.
I don't think any other company has all these ingredients.
Considering GPT-5 was only recently released, it's very unlikely GPT will achieve these scores in just a couple of months. If they had something this good in the oven, they'd probably have saved the GPT-5 name for it.
Or maybe Google just benchmaxxed and this doesn't translate at all in real world performance.
Or else it trained/overfit to the benchmarks. We won't really know until people have a chance to use it for real-world tasks.
Also, models are already pretty good but product/market fit (in terms of demonstrated economic value delivered) remains elusive outside of a couple domains. Does a model that's (say) 30% better reach an inflection point that changes that narrative, or is a more qualitative change required?
They're constantly matching and exceeding each other. It's a hypercompetitive space and I would fully expect one of the others to top various benchmarks shortly after. On pretty much every leading release someone does this "everyone else is done! Shut er down" thing and it's growing pretty weird.
Having said that, OpenAI's ridiculous hype cycle has been living on borrowed time. OpenAI has zero moat and is just one vendor in a space with many vendors, including incredibly competent open-source models from surprise Chinese entrants. Sam Altman going around acting like he's a prophet and they're the gatekeepers of the future is an act that should be super old by now, but somehow fools and their money continue to be parted.
This. If I had to put my money on a survivor, it would be Google, because it is an established company with existing revenue streams unrelated to AI. Anthropic and OpenAI can't stand alone without external funding.
1) New SOTA models come out all the time and that hasn't killed the other major AI companies. This will be no different.
2) Google's search revenue last quarter was $56 billion, a 14% increase over Q3 2024.
1) Not long ago, Altman and the OpenAI CFO were openly asking for public money. None of these AI companies actually have any kind of working business plan; they are just burning investor money. If investors see there is no winning against Google (or some open Chinese model), the money will dry up.
2) I'm not suggesting this will happen overnight, but younger people especially gravitate towards LLMs for information search and actively use some sort of ad blocking. In the long run it doesn't look great for Google.
The only one it doesn't win is SWE-bench, where it is significantly behind Claude Sonnet. You just can't take down Sonnet.
One percentage point is not significant, neither in the colloquial nor the scientific sense[1].
[1] Binomial formula gives a confidence interval of 3.7%, using p=0.77, N=500, confidence=95%
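For anyone who wants to check that number, the normal-approximation half-width is a two-liner:

    from math import sqrt
    p, n, z = 0.77, 500, 1.96                      # pass rate, SWE-bench Verified size, 95% z-score
    print(f"+/- {z * sqrt(p * (1 - p) / n):.1%}")  # ~3.7%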
Codex has been much better than Sonnet for me.
This may just be bad recollection on my part, but hasn't Google reported that their search business is right now the most profitable it has ever been?
I'd love to see Anthropic/OpenAI pop. Back to some regular programming. The models are good enough; time to invest elsewhere.
Hopefully this model does not generate fake news...
https://www.google.com/search?q=gemini+u.s.+senator+rape+all...