
Comment by raverbashing

7 days ago

LLMs are missing three things (even if they ingest the whole of human knowledge):

- long-term memory

- trust

- (most importantly) the ability to nudge or push the person to change. An LLM that only agrees and sympathizes is not going to make things change

For a while now, ChatGPT has been able to reference your entire chat history. In my opinion it was one of the most substantial improvements in the product's history. I'm sure we'll continue to see this feature improve over time, but your first item here is already partially addressed (maybe fully).

I completely agree on the third item. Carefully tuned pushback is something that even today's most sophisticated models are not very good at. They are simply too sycophantic. A great human professional therapist provides value not just by listening to their client and offering academic insights, but more specifically by knowing exactly when and how to push back -- sometimes quite forcefully, sometimes gently, sometimes not at all. I've never interacted with any LLM that can approach that level of judgment -- not because they lack the fundamental capacity, but because they're all simply trained to be too agreeable right now.

You can easily give them long-term memory, and you can prompt them to nudge the person to change. Trust is something that's built, not something one inherently has.

> trust

Trust is about you, not about another person (or tool, or AI model).

> long-term memory

Well, right now you have to provide context by hand. If you already write about yourself (e.g. in Obsidian or similar), you can copy and paste what matters for a particular problem.
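
A minimal sketch of that manual workflow, assuming the OpenAI Python SDK; the note path, model id, and example question are placeholders, not anything specified in this thread:

```python
# Hand-picked "long-term memory": you choose the relevant note and paste it in.
from openai import OpenAI

client = OpenAI()

# e.g. a note exported from your Obsidian vault (placeholder path)
relevant_note = open("vault/about-me.md").read()

response = client.chat.completions.create(
    model="gpt-4o",  # substitute whatever model you actually use
    messages=[
        {"role": "system",
         "content": "Context the user keeps about themselves:\n" + relevant_note},
        {"role": "user",
         "content": "Given what you know about me, why do I keep stalling on this project?"},
    ],
)
print(response.choices[0].message.content)
```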

> (most importantly) the ability to nudge or push the person to change.

It's already there.

> An LLM that only agrees and sympathizes is not going to make things change

Which LLM do you use? Prompt GPT 4.5 to "nudge and push me to change, in a way that works best for me" and see how it works.
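
For the curious, here is roughly the same suggestion wired up through the API instead of the chat UI. This is a sketch, not a recipe: it assumes the OpenAI Python SDK, "gpt-4.5-preview" is my guess at the API id for GPT 4.5, and the user message is invented:

```python
# The suggested instruction as a standing system prompt.
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4.5-preview",  # assumed id; substitute any model you have access to
    messages=[
        {"role": "system",
         "content": "Nudge and push me to change, in a way that works best for me."},
        {"role": "user",
         "content": "I know I should exercise, but I never seem to have the energy."},
    ],
)
print(response.choices[0].message.content)
```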

  • > If you already write about yourself (e.g. in Obsidian or similar), you can copy and paste what matters for a particular problem.

    Wrong, because identifying what belongs in the context is part of the problem. If you could just pick out what's relevant, the problem would be much easier.

    > Prompt GPT 4.5 to "nudge and push me to change, in a way that works best for me" and see how it works.

    Cool, you try that and see how it goes. And remember that when it fails, you'll only have yourself to blame.

    • > Wrong, because identifying what belongs in the context is part of the problem. If you could just pick out what's relevant, the problem would be much easier.

      Well, that's one reason it depends a lot on the user's knowledge of psychology and their general introspective and retrospective skills. As I mentioned, in unskilled hands it may have limited value, or be actively harmful. The same goes for, say, using the internet for medical advice: a skilled person will dive into the newest research, while an unskilled one is more likely to be captivated by alt-med (or find real medical research but misinterpret it).

      > And remember that when it fails, you'll only have yourself to blame.

      Obviously.

      Assuming you are an adult, it's always your responsibility, whether you're listening to an AI, a therapist, a friend, a coach, an online blogger, holy scriptures, anything. Your life is your responsibility.