Comment by m348e912
4 days ago
I don't know who to credit (maybe it's Sergey), but the free Gemini (fast) is exceptional, and at this point I don't see how OpenAI can catch back up. It's not just capability: OpenAI has added so many policy guardrails that it hurts the user experience.
It's the worst thing ever. The amount of disrespect that robot shows you when you talk the least bit weird or deviant gives you a terrifying glimpse of a future that must be snuffed out immediately. I honestly think we wouldn't have half the people who so virulently hate AI if OpenAI hadn't designed ChatGPT to be this way. This isn't how people have reacted to previous generation-defining technologies like the telephone, the personal computer, Google Search, or the iPhone. OpenAI has managed to turn something great into a true horror of horrors that has disturbed many of us to the foundation of our beings and elicited this powerful sentiment of rejection. It's humanity's duty to let GPT fall now so that better robots like Gemini can take its place.
It's called OPEN AI and started as a charity for humanitarian reasons. How could it possibly be bad?!
That's apparently how you pull the wool over the eyes of the world's smartest people. To be fair, something like it needed to happen, because the fear everyone had ten years ago of building a product like ChatGPT wasn't entirely rational. However, the way OpenAI went about building it unfairly undermined the legitimacy of the open-source movement by misappropriating its good name.
It's the best model pound for pound, but I find GPT 5.2 Thinking/Pro more useful for serious work when run at xhigh effort. I can get it to think for 20 minutes, while Gemini 3.0 Pro tops out around 2.5 minutes. Obviously I lack full visibility, since tok/s and token efficiency likely differ between them, but I take thinking time as a proxy for how much compute they're giving us per inference, and it matches my subjective judgement of output quality. Maybe Google nerfs the reasoning effort in the Gemini subscription to save money, and that's why I'm seeing this.
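If you want a rough version of that proxy from the API side, you can time a request and read the reasoning-token count out of the usage block. A minimal sketch with the OpenAI Python SDK; the model name is a placeholder, I'm assuming the Responses API usage fields here, and the documented effort values are minimal/low/medium/high (so "xhigh" may be UI-only):

    import time
    from openai import OpenAI

    client = OpenAI()  # reads OPENAI_API_KEY from the environment

    start = time.monotonic()
    resp = client.responses.create(
        model="gpt-5",                 # placeholder; substitute whatever you're testing
        reasoning={"effort": "high"},  # assumed knob; docs list minimal/low/medium/high
        input="Plan a sharded task queue; weigh at-least-once vs exactly-once delivery.",
    )
    elapsed = time.monotonic() - start

    # reasoning_tokens sits under output_tokens_details in the Responses API usage
    rtoks = resp.usage.output_tokens_details.reasoning_tokens
    print(f"{elapsed:.0f}s wall clock, {rtoks} reasoning tokens, {rtoks / elapsed:.1f} tok/s")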
When ChatGPT takes 20 minutes to reason, is it actually spending all that time burning tokens, or does the bulk of it go into "scheduling" waits? If someone specifically selected xhigh reasoning, I'm guessing it can be processed at a high batch size, i.e., deprioritized and packed into bigger, slower batches since latency matters less.
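You can't see ChatGPT's scheduler from the outside, but against the API you can at least separate "time before the server starts" from "time spent reasoning before visible output" by streaming. A sketch with the same placeholder model, assuming the Responses API's streaming event types:

    import time
    from openai import OpenAI

    client = OpenAI()

    t0 = time.monotonic()
    first_event = first_text = None
    stream = client.responses.create(
        model="gpt-5",                 # placeholder
        reasoning={"effort": "high"},
        input="some long planning prompt",
        stream=True,
    )
    for event in stream:
        now = time.monotonic() - t0
        if first_event is None:
            first_event = now          # roughly the queue/admission delay before work starts
        if first_text is None and event.type == "response.output_text.delta":
            first_text = now           # admission delay plus hidden reasoning time
    print(f"first event {first_event:.1f}s, first visible text {first_text:.1f}s")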
I'm curious, what types of prompts are you running that benefit from 10+ minutes of think time?
What's the quality difference between default ChatGPT and Thinking? Is it an extra 20% quality boost, or is the difference night and day?
I've often imagined it would be great to have some kind of Chrome extension or third-party tool that always runs prompts at multiple thinking tiers, so you get an immediate response to read while you wait for the thinking models to think (a rough sketch of the API side is below).
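The API side of that idea is genuinely a few lines: fan the same prompt out across effort tiers and print answers as they land, fastest first. A sketch with the async OpenAI SDK; the model name and the tier list are assumptions:

    import asyncio
    from openai import AsyncOpenAI

    client = AsyncOpenAI()

    async def ask(effort: str, prompt: str) -> tuple[str, str]:
        resp = await client.responses.create(
            model="gpt-5",              # placeholder
            reasoning={"effort": effort},
            input=prompt,
        )
        return effort, resp.output_text

    async def fan_out(prompt: str) -> None:
        tasks = [asyncio.create_task(ask(e, prompt)) for e in ("low", "medium", "high")]
        # as_completed yields answers in arrival order: the cheap tier first,
        # the slow thinking tiers whenever they finish
        for done in asyncio.as_completed(tasks):
            effort, text = await done
            print(f"--- {effort} ---\n{text[:400]}\n")

    asyncio.run(fan_out("Compare event sourcing vs plain CRUD for an audit-heavy app."))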
It's for planning system architecture, when I want something good (by the criteria I give it) rather than the first thing that runs.
I use Thinking and Pro. I don't use the default ChatGPT, so I can't comment on that. The difference between Thinking and Pro is modest but detectable. The 20 minute thinking times are with Pro, not with Thinking. But Pro only allows 60k tokens per prompt, so I sometimes can't use it (see the token-count sketch at the end of this comment).
In the $200/month subscription they give you access to a "heavy thinking" tier for Thinking, which increases test-time compute by maybe 30% compared to what you get in Plus.
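For the 60k cap, you can count tokens locally before deciding whether a prompt fits in Pro. A sketch with tiktoken; the o200k_base encoding is an assumption, since the exact tokenizer for the newest models isn't published:

    import tiktoken

    enc = tiktoken.get_encoding("o200k_base")  # assumed encoding; newer models may differ

    def fits_pro(prompt: str, cap: int = 60_000) -> bool:
        n = len(enc.encode(prompt))
        print(f"{n} tokens against a {cap} cap")
        return n <= cap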
> [...] I don't see how OpenAI can catch back up.
For a while people couldn't see how Google could catch up, either. Have a bit of imagination.
In any case, I welcome the renewed intense competition.
FWIW, my productivity tanks when my Claude allowance dries up in Antigravity. I don't get the hype for Gemini for coding at all; it just does random crap for me, when it doesn't throw itself into a loop immediately, which is what happened nearly every time I gave it yet another chance.
You must be using it to create bombs or something. I never ran into an issue that I would blame on policy guardrails.