
Comment by londons_explore

2 days ago

Their moat in the consumer world is the branding and the fact OpenAI has 'memory' which you can't migrate to another provider.

That means responses can be far more tailored - it knows what your job is, knows where you go with friends, knows that when you ask about 'dates' you mean romantic relationships (not the fruit) and which ones are going well or badly, etc.

Eventually, when they make it work better, OpenAI can be your friend and confidant, and you wouldn't dump a friend of many years to make a new friend without good reason.

I really think this memory thing is overstated on Hacker News. It's not something that is hard to move at all. It's not a moat. I don't think most users even know memory exists outside of a single conversation.

  • Every single one of my non-techie friends who uses ChatGPT relies heavily on memory. Whenever they try something different, they get very annoyed that it just doesn't "get them" or "know them".

    Perhaps it'll be easy to migrate memories indeed (I mean there are already plugins that sort of claim to do it, and it doesn't seem very hard), but it certainly is a very differentiating feature at the moment.

    I also use ChatGPT as my daily "chat LLM" because of memory, and, especially, because of the voice chat, which I still feel is miles better than any competition. People say Gemini voice chat is great, but I find it terrible. Maybe I'm on the wrong side of an A/B test.

    • This feels like an area where Google would have an advantage, though. Look at all the data about you that Google has and could mine across Wallet, Maps, Photos, Calendar, Gmail, and more. Google knows my name, address, driver's license, passport, where I work, when I'm home, what I'm doing tomorrow, when I'm going on vacation and where I'm going, and a whole litany of other information.

      The real challenge for Google is going to be using that information in a privacy-conscious way. If this was 2006 and Google was still a darling child that could do no wrong, they'd have already integrated all of that information and tried to sell it as a "magical experience". Now all it'll take is one public slip-up and the media will pounce. I bet this is why they haven't done that integration yet.

  • I dislike that it has a memory.

    It creeps me out when a past session poisons a current one.

    • Exactly. I went through a phase of playing around with ESP32s and now it tries to steer every prompt about anything technology or electronics related back to how it can be used in conjunction with a microcontroller, regardless of how little sense it makes.

    • I agree. For me it's annoying because everything it generates is too tailored to the first things I started chatting with it about. I have multiple responsibilities and I haven't been able to get it to compartmentalize. When I'm wearing my "radiology research" support hat, it assumes I'm also wearing my "MRI physics" hat and weaves MRI into everything. It's really annoying.

  • It doesn't even change the responses a lot. I used ChatGPT for a year for a lot of personal stuff, and tried a new account with basic prompts and it was pretty much the same. Lots of glazing.

What kind of a moat is that? I think it only works in abusive relationships, not consumer economies. Is OpenAI's model being an abusive, money-grubbing partner? I suppose it could be!

  • If you have all your “stuff” saved on ChatGPT, you’re naturally more likely to stay there, everything else being more or less equal: Your applications, translations, market research . . .

It's certainly valuable but you can ask Digg and MySpace how secure being the first mover is. I can already hear my dad telling me he is using Google's ChatGPT...

> Their moat in the consumer world is the branding and the fact OpenAI has 'memory' which you can't migrate to another provider

This sounds like first-mover advantage more than a moat.

  • The memory is definitely sort of a moat. As an example, I'm working on a relatively niche problem in computer vision (small, low-resolution images) and ChatGPT now "knows" this and tailors its responses accordingly. With other chatbots I need to provide this context every time, or else I get suggestions oriented towards the most common scenarios in the literature, which don't work at all for my use case.

    That may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now. I asked ChatGPT to roast me again at the end of last year, and I was a bit taken aback that it had even figured out the broader problem I'm working on and the high level approach I'm taking, something I had never explicitly mentioned. In fact, it even nailed some aspects of my personality that were not obvious at all from the chats.

    I'm not saying it's a deep moat, especially for the less frequent users, but it's there.

    • > may seem minor, but it compounds over time and it's surprising how much ChatGPT knows about me now

      I’m not saying it’s minor. And one could argue first-mover advantages are a form of moat.

      But the advantage is limited to those who have used ChatGPT. For anyone else, it doesn’t apply. That’s different from a moat, which tends to be more fundamental.

    • Sounds similar to how psychics work. Observing obvious facts and pattern matching, except in this case you made the job super easy for the psychic because you gave it a _ton_ of information, instead of a psychic having to infer from the clothes you wear, your haircut, hygiene, demeanor, facial expression etc.

> Their moat in the consumer world is the branding and the fact OpenAI has 'memory' which you can't migrate to another provider.

Branding isn't a moat when, as far as the mass market is concerned, you are 2 years old.

Branding is a moat when you're IBM, Microsoft (and more recently) Google, Meta, etc.

You can prompt the model to dump all of the memory into a text file and import that.

In the onboarding flow, I can ask you, "Do you use another LLM?" If so, give it this prompt, then paste back the memory file it outputs.
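
A minimal sketch of what the receiving side of that flow could look like, assuming the user has already run an export prompt in their old assistant and saved the reply as memory.txt; the function name, the export prompt, and the idea of prepending the export as a system message are illustrative assumptions, not any vendor's actual memory-import API:

    # Hypothetical onboarding helper: seed a new provider with a memory
    # dump exported as plain text from another assistant.
    from openai import OpenAI

    EXPORT_PROMPT = (
        "Write down everything you remember about me from our past chats "
        "as a plain bullet list I can paste into another assistant."
    )

    def answer_with_imported_memory(memory_text: str, question: str) -> str:
        # OpenAI() reads OPENAI_API_KEY; pass base_url= to target any
        # OpenAI-compatible endpoint, so the "new provider" is interchangeable.
        client = OpenAI()
        resp = client.chat.completions.create(
            model="gpt-4o-mini",
            messages=[
                # Prepend the exported memory as background context.
                {"role": "system",
                 "content": "Background the user chose to share:\n" + memory_text},
                {"role": "user", "content": question},
            ],
        )
        return resp.choices[0].message.content

    # Usage: paste EXPORT_PROMPT into the old assistant, save its reply to
    # memory.txt, then:
    #   print(answer_with_imported_memory(open("memory.txt").read(),
    #                                     "Suggest a weekend project for me."))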

> Their moat in the consumer world is the branding and the fact OpenAI has 'memory' which you can't migrate to another provider.

Their 'memory' is mostly unhelpful and gets in the way. At best it saves you from prompting some context, but more often than not it adds so much irrelevant context that it overfits responses so hard it makes them completely useless, especially in exploratory sessions.

I just learned Gemini has "memory" because it mixed its response to a new query with a completely unrelated query I had beforehand, despite making separate chats for them. It responded as if they were the same chat. Garbage.

  • I recently discovered that if a sentence starts with "remember", Gemini writes the rest of it down as standing instructions. Maybe go look in there and see if there is something surprising.

  • It's a recent addition. You can view them in some settings menu. Gemini also has scheduled triggers like "Give me a recap of the daily news every day at 9am based on my interests" and it will start a new chat with you every day at 9am with that content.

Couldn't you just ask it to write down what it knows about you and copy paste into another provider?