Comment by kouteiheika

20 days ago

If you want to prove (i.e. show that it works and/or it's faster in a real-world scenario) a new alternative to attention without breaking the bank, then one of the best ways to do that would probably be to retrain an already existing model, just with swapped attention modules. Then once you have such a model you can do apples-to-apples benchmarks.

This has been done successfully in the past:

https://huggingface.co/featherless-ai/QRWKV-72B

Note that this is a 72B model which would be very expensive to train from scratch, but here they did the conversion for less than $2000.
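As a rough illustration of the "swap the attention modules and retrain" idea: the sketch below assumes a GPT-2-style PyTorch model whose blocks expose an `.attn` submodule. `AlternativeAttention`, `swap_attention`, and `load_pretrained_gpt` are placeholders for illustration, not code from QRWKV or any real conversion pipeline.

```python
import torch
import torch.nn as nn

class AlternativeAttention(nn.Module):
    """Placeholder drop-in replacement with the same (batch, seq, dim) interface."""
    def __init__(self, dim: int):
        super().__init__()
        self.proj = nn.Linear(dim, dim)  # stand-in for whatever new mechanism is tested

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # A real replacement would mix information across the sequence here;
        # this stub only demonstrates the interface contract.
        return self.proj(x)

def swap_attention(model: nn.Module, dim: int) -> nn.Module:
    """Replace the `.attn` submodule of every transformer block."""
    for block in model.transformer.h:  # GPT-2-style block list (assumption)
        block.attn = AlternativeAttention(dim)
    return model

# Usage sketch (hypothetical loader): keep all non-attention weights,
# swap in the new attention, then fine-tune -- the "conversion" idea above.
# model = load_pretrained_gpt()
# model = swap_attention(model, dim=768)
# optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
```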

I'd say try the nanogpt speedrun. It's much easier to train, and gives you a better comparison vs optimized systems.

https://github.com/KellerJordan/modded-nanogpt

Depending on how different the attention mechanism is, that might not work. If it’s just a faster / different way of finding the tokens to attend to, sure. But I get the sense the author is implying this method uses different semantics somehow. Although tbh I didn’t follow it entirely.

This is interesting. Has there been more research into this architecture? I hear about it once every few years but it always seems like a niche / experimental thing. But based on the graph in their blog post you'd expect every company to be using this.

  • This is a novel re-interpretation of the Transformer, based on my previous research made with a library called `arrowspace`.

    It is roughly what is called a "Grassmann-like flow", but without the Plücker embedding; it is also similar to what is done in DavisTensor, but relying on the spectral Laplacian instead of purely geometric distances.

    The problem with a lot of the work done before is that it focuses on dense representations. This architecture focuses on sparse representations and provides a new approximation computation based on energy-informed graphs.
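
To illustrate only the general "energy-informed graph" idea mentioned above (this is not `arrowspace`'s API nor the actual architecture): a minimal sketch of a Laplacian-energy score, i.e. the quadratic form x^T L x computed over a sparse k-NN graph of token features. The function name, graph construction, and all dimensions are assumptions.

```python
import numpy as np
from scipy.sparse.csgraph import laplacian
from sklearn.neighbors import kneighbors_graph

def dirichlet_energy(features: np.ndarray, signal: np.ndarray, k: int = 8) -> float:
    """Smoothness of `signal` over a sparse k-NN graph built from `features`.

    Low energy  -> the signal varies little between neighbouring tokens.
    High energy -> the signal changes sharply across the graph.
    """
    adj = kneighbors_graph(features, n_neighbors=k, mode="connectivity")
    adj = adj.maximum(adj.T)          # symmetrise the k-NN adjacency
    lap = laplacian(adj)              # sparse graph Laplacian L = D - A
    return float(signal @ (lap @ signal))  # x^T L x

# Toy usage: 128 "tokens" with 64-dim features and one scalar signal per token.
rng = np.random.default_rng(0)
feats = rng.normal(size=(128, 64))
sig = rng.normal(size=128)
print(dirichlet_energy(feats, sig))
```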

Thanks for reading. I cannot retrain an existing model, as the self-attention mechanism has been completely redesigned. The Keys and Values in self-attention are stored as scalars, so a latent space with traditional weights does not make sense in the context of a topological transformer. The two latent spaces would eventually be somewhat equivalent, but they would store totally different values.
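To see why the pretrained weights would not transfer, assuming "stored as scalars" means one number per token (per head), which is my reading of the comment and not the actual design: the key/value projections simply have different shapes.

```python
import torch

B, T, D, H = 2, 16, 512, 8                 # batch, tokens, model dim, heads
x = torch.randn(B, T, D)

# Standard self-attention: W_k / W_v project each token to per-head vectors.
W_k = torch.randn(D, D)
k_vec = (x @ W_k).view(B, T, H, D // H)    # (B, T, H, d_head) = (2, 16, 8, 64)

# Scalar keys (one number per token and head, as assumed above):
# the projection would be D -> H, a completely different matrix.
W_k_scalar = torch.randn(D, H)
k_scalar = x @ W_k_scalar                  # (B, T, H) = (2, 16, 8)

print(k_vec.shape, k_scalar.shape)         # the pretrained W_k cannot simply be copied
```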

That doesn’t tell you if the new method continues to perform better at higher parameter counts.

  • It most likely will in terms of performance, as it uses 50% less memory (certainly at inference time, which is the most-used operation on web services), because it can leverage longer T and D, assuming the design is confirmed and the quality of generation is comparable to other models. If this basic assumption is correct, it means a lot of savings in electricity, as the same GPUs can serve more requests (a rough KV-cache back-of-envelope is sketched after this list).

  • Nor that the training from scratch will even work.

    • Exactly, that is the current objective: to prove that generation for a specific domain is on par with causal-attention models.
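
A rough back-of-envelope on what the memory claim above would mean for the KV cache at inference time. The layer/head/dimension numbers are illustrative defaults for a mid-size decoder, not measurements of the proposed design.

```python
def kv_cache_bytes(layers: int, heads: int, d_head: int, seq_len: int,
                   bytes_per_value: int = 2) -> int:
    # 2x for keys and values, stored for every layer, head and token (fp16).
    return 2 * layers * heads * d_head * seq_len * bytes_per_value

baseline = kv_cache_bytes(layers=32, heads=32, d_head=128, seq_len=8192)
print(f"baseline KV cache: {baseline / 2**30:.1f} GiB")        # ~4.0 GiB at fp16

# If the per-token footprint really were halved, the same memory budget would
# fit roughly twice the sequence length T (or more concurrent requests per GPU).
halved = kv_cache_bytes(layers=32, heads=32, d_head=128, seq_len=16384) // 2
print(f"halved footprint at 2x T: {halved / 2**30:.1f} GiB")   # same ~4.0 GiB
```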