Comment by cmiles8
2 days ago
A bit of PR puffery, but it is fair to say that between Gemini and others it’s now been clearly demonstrated that OpenAI doesn’t have any clear moat.
Their moat in the consumer world is the branding and the fact OpenAI has 'memory', which you can't migrate to another provider.
That means responses can be far more tailored - it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (and which ones are going well or badly), not the fruit, etc.
Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump your friend of many years to make a new friend without good reason.
I really think this memory thing is overstated on Hacker News. This is not something that is hard to move at all. It's not a moat. I don't think most users even know memory exists outside of a single conversation.
Every single one of my non-techie friends who uses ChatGPT relies heavily on memory. Whenever they try something else, they get very annoyed that it just doesn't "get them" or "know them".
Perhaps it'll be easy to migrate memories indeed (I mean there are already plugins that sort of claim to do it, and it doesn't seem very hard), but it certainly is a very differentiating feature at the moment.
I also use ChatGPT as my daily "chat LLM" because of memory, and, especially, because of the voice chat, which I still feel is miles better than any competition. People say Gemini voice chat is great, but I find it terrible. Maybe I'm on the wrong side of an A/B test.
I dislike that it has a memory.
It creeps me out when a past session poisons a current one.
It doesn't even change the responses a lot. I used ChatGPT for a year for a lot of personal stuff, and tried a new account with basic prompts and it was pretty much the same. Lots of glazing.
What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!
If you have all your “stuff” saved on ChatGPT, you’re naturally more likely to stay there, everything else being more or less equal: Your applications, translations, market research . . .
But Google has your Gmail inbox, your photos, your maps location history…
I think an OpenAI paper showed 25% of GPT usage is "seeking information". In that case Google also has an advantage from being the default search provider on iOS and Android. I do find myself using the address bar in a browser like a chat box.
https://cdn.openai.com/pdf/a253471f-8260-40c6-a2cc-aa93fe9f1...
> Their moat in the consumer world is the branding and the fact open ai has 'memory' which you can't migrate to another provider
This sounds like first-mover advantage more than a moat.
The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images) and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time, or else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use case.
That may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not obvious at all from the chats.
I'm not saying it's a deep moat, especially for the less frequent users, but it's there.
You can prompt the model to dump all of the memory into a text file and import that.
In the onboarding flow, the new provider can ask you, "Do you use another LLM?" If so, give it this prompt and then hand over the memory file it outputs.
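A minimal sketch of that migration idea: prompt the old assistant to dump what it remembers to plain text, then fold that dump into the system prompt of a fresh session with the new provider. The prompt wording, file handling, and message format here are illustrative assumptions, not any real provider's onboarding flow or API.

```python
# Hypothetical memory-migration helper. EXPORT_PROMPT is what you'd paste
# into the old assistant; build_migration_messages wraps its output as
# context for a new chat session (OpenAI-style role/content dicts assumed).

EXPORT_PROMPT = (
    "Write down everything you remember about me from our past "
    "conversations as a plain-text list of facts, one per line."
)

def build_migration_messages(memory_dump: str, first_user_message: str) -> list[dict]:
    """Wrap an exported memory dump as starting context for a new provider."""
    system = (
        "You are a helpful assistant. The user migrated from another "
        "provider; here is what that assistant knew about them:\n"
        + memory_dump.strip()
    )
    return [
        {"role": "system", "content": system},
        {"role": "user", "content": first_user_message},
    ]

if __name__ == "__main__":
    dump = "- Works on low-resolution computer vision\n- Prefers concise answers"
    msgs = build_migration_messages(dump, "What should I read next?")
    print(msgs[0]["role"])  # system
    print(len(msgs))        # 2
```

The point of the sketch is that the "moat" reduces to a copy-paste: everything the old assistant knows travels as ordinary text.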
> Their moat in the consumer world is the branding and the fact open ai has 'memory' which you can't migrate to another provider.
Branding isn't a moat when, as far as the mass market is concerned, you are 2 years old.
Branding is a moat when you're IBM, Microsoft (and more recently) Google, Meta, etc.
It's certainly valuable but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...
> Their moat in the consumer world is the branding and the fact open ai has 'memory' which you can't migrate to another provider.
Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard it makes them completely useless, especially in exploratory sessions.
I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had beforehand, despite making separate chats for them. It responded as if they were the same chat. Garbage.
It's a recent addition. You can view them in a settings menu. Gemini also has scheduled triggers like "Give me a recap of the daily news every day at 9am based on my interests", and it will start a new chat with you every day at 9am with that content.
I recently discovered that if a sentence starts with "remember", Gemini writes the rest of it down as standing instructions. Maybe go look in there and see if there is something surprising.
Couldn't you just ask it to write down what it knows about you and copy paste into another provider?
The next realization will be that Claude isn't clearly(/any?) better than Google's coding agents.
Claude is cranked to the max for coding, specifically agentic coding, and even more specifically agentic coding using Claude Code. It's like the MacBook of coding LLMs.
Claude Code + Opus 4.5 is an order of magnitude better than Gemini CLI + Gemini 3 Pro (at least, last time I tried it).
I don't know how much secret sauce is in CC vs the underlying model, but I would need a lot of convincing to even bother with Gemini CLI again.
That hasn’t been my experience. I agree Opus has the edge but it’s not by that much and I still sometimes get better results from Gemini, especially when debugging issues.
Claude Code is much better than Gemini CLI though.
I think Gemini 3.0 the model is smarter than Opus 4.5, but Claude Code still gives better results in practice than Gemini CLI. I assume this is because the model is only half the battle, and the rest is how good your harness and integration tooling are. But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.
> But that also doesn't seem like a very deep moat, or something Google can't catch up on with focused attention, and I suspect by this time next year, or maybe even six months from now, they'll be about the same.
The harnessing in Google's agentic IDE (Antigravity) is pretty great - the output quality is indistinguishable between Opus 4.5 and Gemini 3 for my use cases[1]
1. I tend to give detailed requirements for small-to-medium sized tasks (T-shirt sizing). YMMV on larger, less detailed tasks.
If the bubble doesn't burst in the next few days, then this is clearly wrong.
Next few days? Might be a bit longer than that.
Why? They said "clearly demonstrated".
If it is so clear, then investors will want to pull their money out.
Out of curiosity, why that specific timeframe? Is there a significant unveiling supposed to happen? Something CES-related?