Comment by cs702
1 year ago
As an aside, at one point I experimented a little with transformers that had access to external memory, searchable either via KNN lookups (https://github.com/lucidrains/memorizing-transformers-pytorc..., great work by lucidrains) or via routed queries (https://github.com/glassroom/heinsen_routing; I don't fully understand it, but apparently it's related to attention). Both approaches seemed to work, but I had to put that work on hold for reasons outside my control.
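Roughly, the kNN-memory idea looks like the sketch below (made-up names, plain PyTorch, and a brute-force nearest-neighbor lookup standing in for a real index such as faiss; the actual paper combines local and memory attention with a learned gate and causal masking, which I omit here for brevity):

```python
import torch

# Hypothetical sketch: single-head attention over the local segment plus
# the top-k (key, value) pairs retrieved from an external memory by
# nearest neighbors on each query. A real system would use an approximate
# index (e.g. faiss) instead of the brute-force matmul below.

def knn_memory_attention(q, k, v, mem_k, mem_v, top_k=4):
    # q, k, v: (n, d) queries/keys/values for the current segment
    # mem_k, mem_v: (m, d) keys/values cached from earlier segments
    d = q.shape[-1]

    # Brute-force kNN: score every memory key, keep top_k per query.
    mem_scores = q @ mem_k.T                               # (n, m)
    top_scores, top_idx = mem_scores.topk(top_k, dim=-1)   # (n, top_k)
    top_v = mem_v[top_idx]                                 # (n, top_k, d)

    # Local attention scores over the current segment.
    local_scores = q @ k.T                                 # (n, n)

    # Attend jointly over local keys and the retrieved memory keys.
    scores = torch.cat([local_scores, top_scores], dim=-1) / d ** 0.5
    probs = scores.softmax(dim=-1)                         # (n, n + top_k)

    local_out = probs[:, :k.shape[0]] @ v                  # (n, d)
    mem_out = (probs[:, k.shape[0]:].unsqueeze(-1) * top_v).sum(dim=1)
    return local_out + mem_out

# Toy usage with random tensors.
n, m, d = 8, 128, 16
out = knn_memory_attention(torch.randn(n, d), torch.randn(n, d),
                           torch.randn(n, d), torch.randn(m, d),
                           torch.randn(m, d))
print(out.shape)  # torch.Size([8, 16])
```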
Also as an aside, I'll add that transformers can be seen as a kind of "RNN" that grows its hidden state with each new token in the input context. I wonder if we will end up needing some new kind of "RNN" that can grow or shrink its hidden state and also access some kind of permanent memory as needed at each step.
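To make the "RNN whose state grows" picture concrete, here is a toy decode loop in which the "hidden state" is just a KV cache that gains one (key, value) pair per token (all names and shapes are illustrative):

```python
import torch

# Hypothetical sketch: autoregressive decoding where the "recurrent state"
# is the KV cache. Each step appends one key and one value, so the state
# grows per token instead of staying fixed-size like a classic RNN's.

d = 16
w_q, w_k, w_v = torch.randn(d, d), torch.randn(d, d), torch.randn(d, d)

cache_k = torch.empty(0, d)   # grows with every token
cache_v = torch.empty(0, d)

def step(x_t, cache_k, cache_v):
    q, k, v = x_t @ w_q, x_t @ w_k, x_t @ w_v
    cache_k = torch.cat([cache_k, k.unsqueeze(0)])   # state grows here
    cache_v = torch.cat([cache_v, v.unsqueeze(0)])
    attn = (q @ cache_k.T / d ** 0.5).softmax(dim=-1)
    return attn @ cache_v, cache_k, cache_v

for t in range(5):
    out, cache_k, cache_v = step(torch.randn(d), cache_k, cache_v)
    print(t, cache_k.shape)   # state size after step t: (t + 1, d)
```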
We sure live in interesting times!
> transformers that had access to external memory searchable via KNN lookups
This is a common approach, usually called retrieval-augmented generation, or RAG.
edit: I did not pay attention to the link. It is about Wu et al.'s "Memorizing Transformers", which contains an internal memory.
No. RAG is about finding relevant documents/paragraphs (via KNN lookups of their embeddings) and then inserting them into the input context as sequences of input tokens. What I'm talking about is different: https://arxiv.org/abs/2203.08913
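For contrast, a RAG-style retrieval step looks roughly like this (the corpus and the embed() function are placeholders; a real system would call an embedding model and an ANN index). The model itself is untouched; only its input context grows:

```python
import numpy as np

# Hypothetical RAG sketch: retrieve documents by embedding similarity and
# splice their *text* into the prompt as ordinary input tokens. Contrast
# with Memorizing Transformers, where retrieval returns key/value vectors
# that an attention layer consumes internally.

corpus = ["doc about transformers", "doc about RNNs", "doc about memory"]

def embed(text: str) -> np.ndarray:
    # Placeholder embedding; a real system would call an embedding model.
    rng = np.random.default_rng(abs(hash(text)) % (2**32))
    v = rng.standard_normal(64)
    return v / np.linalg.norm(v)

doc_vecs = np.stack([embed(d) for d in corpus])

def rag_prompt(question: str, top_k: int = 2) -> str:
    q_vec = embed(question)
    scores = doc_vecs @ q_vec                 # cosine similarity (unit vectors)
    best = np.argsort(scores)[::-1][:top_k]   # indices of the top_k documents
    context = "\n".join(corpus[i] for i in best)
    return f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"

print(rag_prompt("How do transformers use memory?"))
```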
I don't think the ability to shrink state is needed. You can always represent removed state with additional state that records the deletion of whatever preceding state was there. If anything, this sounds more useful: the fact that a piece of state is no longer believed to be relevant should prevent looping (where it would otherwise be repeatedly brought in, considered, and rejected).
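As a toy illustration (hypothetical names): instead of shrinking the state, you append a record marking earlier state as retracted, so later steps can see it was already considered and dismissed:

```python
# Toy sketch of "grow-only" state: deletion is represented by appending a
# tombstone rather than removing the entry, so the history of what was
# considered and rejected stays visible to later steps.

state = []

def add(item):
    state.append(("live", item))

def retract(item):
    # Append a marker instead of removing; the state only ever grows.
    state.append(("retracted", item))

def live_view():
    retracted = {x for tag, x in state if tag == "retracted"}
    return [x for tag, x in state if tag == "live" and x not in retracted]

add("fact A"); add("fact B"); retract("fact A")
print(state)        # full, grow-only history, including the retraction
print(live_view())  # ['fact B']
```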
> You can always represent removed state by additional state that represents deletion of whatever preceding state was there.
Good point. Thank you!