Comment by europeanNyan

17 hours ago

After they pushed the limits on the Thinking models to 3000 per week, I haven't touched anything else. I'm really satisfied with their performance, and the 200k context window is quite nice.

I'd been using Gemini exclusively for the 1 million token context window, but went back to ChatGPT after they raised the limits and built a Project system for myself that gives me much better organization: Projects + Thinking-only chats (big context) + project-only memory.

Also, it seems like Gemini is really averse to googling (which is ironic in itself), while ChatGPT, at least in the Thinking modes, loves to look up current and correct info. If I ask something a bit more involved in Extended Thinking mode, it will think for several minutes and consult more than 100 sources. It's really good, practically a Deep Research inside a normal chat.

I REALLY struggle with Gemini 3 Pro refusing to perform web searches and getting combative about the current date. Ironically, their Flash model seems much more willing to opt for a web search to validate info.

Not sure if others have seen this...

I could attribute it to:

1. It's a known quantity with the pro models (I recall that the pro/thinking models from most providers weren't immediately equipped with web search tools when they were originally released)

2. Google wants you to pay extra for grounding via their API offerings rather than including it out of the box

  • When I want it to google stuff, I just use the Deep Research mode. Not as instant, but it googles a lot of stuff then.

  • Sample size of one here, but I get the exact opposite behavior: Flash almost never wants to search and I have to use Pro.

I find Gemini does the most searching (and the quickest; it regularly pulls 70+ search results on a query in a matter of seconds, likely thanks to Googlebot's cache of pretty much every page). ChatGPT now only seems to search if you have it in thinking/research mode.