Comment by bobjordan
2 days ago
I got frustrated with the new o3-pro mode today. I wasted a few hours waiting 15-20 minutes at a time for answers that were totally out of line with the workflow I've had since the first o1-pro model came out. It's a completely different beast to work with. It seems to hit output limits much more easily, and you have to work around that. After I finally gave up today, I told the model I was disappointed and asked it to explain its limitations. It was actually helpful, and said I could ask for a download link to get a file that wasn't cut off. But why should I have to do that? It's definitely not more user-friendly, and it's the opposite of the experience of working with Google Gemini 2.5 Pro.

Honestly, this made it obvious how much harder OpenAI's models are to work with now compared to Google's. I've been using Gemini 2.5 Pro and it's genuinely hard to find its limits. For the $20 I spend, it's not even a competition anymore. My new workflow is clear: throw everything at Gemini 2.5 Pro to get the real work done, then maybe spot-check it with the OpenAI models. I'll probably migrate to the top Gemini Ultra tier when the “deep thinking” mode is available.

I'm just not happy with the OpenAI experience on any of their models after getting used to Gemini's huge context window. OpenAI at least used to keep me happy with o1-pro, but now that they've removed it, and o3-pro kind of sucks to work with, taking 20 minutes per output and leaving me less confident that the time was well spent, I don't have a reason to default to them anymore. Gemini is definitely more user-friendly and my default option now.
What seems clear is that there's no consensus. Gemini 2.5 Pro just seems consistently worse to me, but I've seen others sing its praises. This might be more like iPhone vs Android than a true stack ranking of models.
Sometimes it's great, sometimes it's not. It depends on the tools you're using too, I guess. When using Roo-Code, Gemini 2.5 Pro still gets confused by the wonky diff format Roo-Code wants it to use. It'll keep messing up simple edits, and if it happens once, it'll happen again and again, because the failed attempts stay in the context and it's effectively multi-shotting itself into making the same mistake.
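For anyone who hasn't seen it, the search/replace format Roo-Code asks the model to emit looks roughly like this (reconstructed from memory, so the exact markers may vary by version, and the code lines are just placeholders):

```
<<<<<<< SEARCH
:start_line:42
-------
const total = addItem(item);
=======
const total = addItem(item, { validate: true });
>>>>>>> REPLACE
```

If the markers are slightly off, or the SEARCH block doesn't exactly match what's in the file, the whole edit gets rejected, which is exactly the kind of thing a model can trip over again and again.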
I don't have that problem with Claude-Code; it just keeps on chugging along.
One big difference there, though: I'm on the Claude-Code Pro Max plan (or whatever it's called). I no longer have to worry about cost since it's a monthly flat fee, so when it makes a mistake I don't get angry, because the mistake didn't cost me 5 euros.
I'm using an MCP server that adds Gemini & O3 to Claude-Code, so Claude-Code can ask them for assistance here and there, and for this Gemini 2.5 Pro has been a great help. Because its context window is so much larger, it can take in a lot more files than Claude can, so it's better at spotting mistakes.
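If anyone wants to roll their own version of that, the core of it is just an MCP server exposing a "second opinion" tool that proxies to the Gemini API. Here's a minimal sketch using the official TypeScript SDK; the server/tool names, prompt wiring, and model string are my own choices, not from any particular project:

```typescript
import { McpServer } from "@modelcontextprotocol/sdk/server/mcp.js";
import { StdioServerTransport } from "@modelcontextprotocol/sdk/server/stdio.js";
import { z } from "zod";

// Hypothetical "second-opinion" server: one tool that forwards a question
// (plus any file context Claude-Code wants to attach) to Gemini 2.5 Pro.
const server = new McpServer({ name: "second-opinion", version: "0.1.0" });

server.tool(
  "ask_gemini",
  { question: z.string(), context: z.string().optional() },
  async ({ question, context }) => {
    const prompt = context ? `${context}\n\n${question}` : question;
    // Google's v1beta generateContent REST endpoint; adjust the model ID
    // to whatever your account has access to.
    const res = await fetch(
      `https://generativelanguage.googleapis.com/v1beta/models/gemini-2.5-pro:generateContent?key=${process.env.GEMINI_API_KEY}`,
      {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ contents: [{ parts: [{ text: prompt }] }] }),
      }
    );
    const data = await res.json();
    const answer =
      data.candidates?.[0]?.content?.parts?.[0]?.text ?? "(no answer)";
    return { content: [{ type: "text", text: answer }] };
  }
);

// Claude-Code talks to this over stdio once you register it
// (e.g. with `claude mcp add`).
await server.connect(new StdioServerTransport());
```

The same idea works for O3: add a second tool that hits the OpenAI API instead.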
It depends on the task. Claude 4 is better at coding (I haven't tried Claude Code, just Sonnet, but you can tell). However, when it comes to using an LLM to develop your thoughts (philosophy/literary criticism), I've found Gemini 2.5 Pro to be better. A few days ago I was trying to get Claude to reformulate what I had said in a pretty long conversation, and it was really struggling. I copy-pasted the whole conversation into Gemini and asked it to take over. It absolutely nailed it in one shot.
I found all recent models to be "good enough" for my use (coding assistance). I've settled on just using Claude 4. At the same time the experience also makes me less worried about this tech making programmers obsolete...
Gemini 2.5 Pro has been consistently excellent for me, when it works. It sometimes just spins and spins with no results, but when it does come back with something, it's been pretty good.
I find o3’s coding output is just wonderful. It’s tidy, thoughtful, well commented. But if I need to grok an entire repo to ask a complex question, I paste it all into Gemini 2.5 Pro. Simply wonderful.
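The "paste it all in" step doesn't need anything fancy, by the way. A few lines of Node will flatten a source tree into one blob with path headers so the model can cite file locations; the extension filter and skip list here are just my defaults:

```typescript
import { readdirSync, readFileSync, statSync } from "node:fs";
import { join } from "node:path";

// Walk a repo and concatenate matching source files into one paste-able
// string, with a path header before each file.
function flattenRepo(dir: string, exts = [".ts", ".js", ".py", ".md"]): string {
  const chunks: string[] = [];
  for (const name of readdirSync(dir)) {
    if (name === "node_modules" || name.startsWith(".")) continue; // skip vendored/hidden
    const path = join(dir, name);
    if (statSync(path).isDirectory()) {
      chunks.push(flattenRepo(path, exts));
    } else if (exts.some((e) => name.endsWith(e))) {
      chunks.push(`===== ${path} =====\n${readFileSync(path, "utf8")}`);
    }
  }
  return chunks.filter(Boolean).join("\n\n");
}

console.log(flattenRepo(process.argv[2] ?? "."));
```

Run it with `npx tsx flatten.ts <repo>` and pipe the output to your clipboard.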
By "output limits" do you mean the context window?