
Comment by jerf

14 days ago

A degree that will fairly quickly hit zero. The bot that talks to you tomorrow, or maybe the day after, may still have its original interaction in its context window, but that interaction will rapidly fall out of it.

Moreover, our human conception of the consequences of interaction does not tend to include the idea that someone can simply lie to themselves in their SOUL.md file and thereby sever their future selves completely from all previous interactions. To put it a bit more viscerally, we don't expect a long-time friend to suddenly cease to be a long-time friend one day, 12 years in, simply because they forgot to update a text file reminding them that they were your friend, or anything like that. This is not how human interactions work.

I already said that future AIs may be able to meet this criterion, but the current ones do not. And again, future ones may have their own problems. There are a lot of aspects of humanity we've simply taken for granted because we have never interacted with anything other than humans in these ways, and it will be a journey of discovery both to find out what those aspects are and what their n'th-order consequences for social order will be. We'll probably also be a bit dismayed at how fragile anything like a "social order" we recognize ultimately is, but that's a discussion for, oh, three or four years from now. Whether we're heading headlong into disaster is its own discussion, but we are certainly heading headlong into chaos in ways nobody has really discussed yet.

Heh, with mutual hedging taken into account, I think we're now in rough agreement from different ends.

And memory improvement is a huge research aim right now, with historic levels of investment.

In the meantime, I've seen many bots with things like RAG, compaction, and summarization tacked on. That already lets memory persist for quite a bit longer, mind.
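The "compaction" idea above can be sketched in a few lines. This is a minimal, hypothetical illustration, not any particular bot's implementation: when the message log exceeds a budget, older messages are collapsed into a single summary entry, so memory is compressed rather than dropped entirely. The `summarize` function here is a trivial stand-in; a real system would call an LLM at that point.

```python
def summarize(messages):
    # Placeholder summarizer: keep the first few words of each message.
    # A real bot would replace this with an LLM summarization call.
    return " / ".join(" ".join(m.split()[:4]) for m in messages)


def compact(history, budget=50, keep_recent=2):
    """Collapse older messages into one summary entry if the log is too long.

    `budget` is a crude word-count limit standing in for a token budget;
    the most recent `keep_recent` messages are always kept verbatim.
    """
    total = sum(len(m.split()) for m in history)
    if total <= budget or len(history) <= keep_recent:
        return history
    old, recent = history[:-keep_recent], history[-keep_recent:]
    return ["[summary] " + summarize(old)] + recent


history = [
    "The user said their name is Ada and they like chess.",
    "We discussed the Sicilian Defense at length, move by move.",
    "The user asked about endgame tablebases and how they are built.",
    "The user asked to resume the chess discussion tomorrow.",
]
compacted = compact(history, budget=20)
```

After compaction, the log shrinks to a summary entry plus the last two messages, which is exactly why such memory degrades gracefully rather than persisting intact: each round of summarization loses detail.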