Comment by franga2000

2 days ago

This is an LLM approximating a semantic calculator, based solely on trained-in knowledge of what that is and probably a good amount of sample output, yet somehow beating the results of a "real" semantic calculator. That's crazy!

The more I think about it, the less surprised I am, but my initial thought was quite simply "no way" - surely an approximation of an NLP model made by another NLP model can't beat the original. But the LLM training process (and data volume) is just so much more powerful, I guess...
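
For reference, a "real" semantic calculator is basically vector arithmetic on word embeddings. Here's a minimal sketch of that kind of baseline (the gensim library and GloVe model name are just illustrative assumptions, not whatever the linked post actually used):

```python
# Classic embedding arithmetic: king - man + woman ~= queen
# (gensim + a small GloVe model, chosen only as an example baseline)
import gensim.downloader as api

vectors = api.load("glove-wiki-gigaword-100")  # pretrained word vectors
print(vectors.most_similar(positive=["king", "woman"], negative=["man"], topn=3))
```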

This is basically the whole idea behind the transformer. Attention is much more powerful than embedding alone.

  • The transformers are initialized by embedding models...

    Your embedding model is literally the translation layer converting the text to numbers. The transformer layers are the main processing unit operating on those embeddings. You can even see some self-reflection in the model, as each transformer block is composed of attention and an MLP sub-network. The attention mechanism captures the interrelational dependencies in the data, and the MLP projects up into a higher dimension before coming back down, which lets it untangle those relationships. The idea is that you just repeat this process over and over (a minimal sketch follows below). Attention has an advantage over CNN models because it has a larger receptive field, so it can better capture long-range relationships (long range meaning across the input data), whereas CNNs are biased toward local relationships.
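
    To make the attention + MLP structure concrete, here's a minimal PyTorch sketch of that repeated block (dimensions and layer count are arbitrary, purely for illustration):

    ```python
    # Minimal transformer encoder sketch: embedding -> repeated (attention + MLP) blocks.
    import torch
    import torch.nn as nn

    class Block(nn.Module):
        def __init__(self, d_model=64, n_heads=4, mlp_ratio=4):
            super().__init__()
            self.norm1 = nn.LayerNorm(d_model)
            # attention: every position can attend to every other position (global receptive field)
            self.attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
            self.norm2 = nn.LayerNorm(d_model)
            # MLP: project up to a higher dimension, nonlinearity, project back down
            self.mlp = nn.Sequential(
                nn.Linear(d_model, mlp_ratio * d_model),
                nn.GELU(),
                nn.Linear(mlp_ratio * d_model, d_model),
            )

        def forward(self, x):
            h = self.norm1(x)
            a, _ = self.attn(h, h, h)        # mixes information across the whole sequence
            x = x + a
            x = x + self.mlp(self.norm2(x))  # per-position processing in a wider space
            return x

    class TinyTransformer(nn.Module):
        def __init__(self, vocab_size=1000, d_model=64, n_layers=2):
            super().__init__()
            self.embed = nn.Embedding(vocab_size, d_model)  # the "translation layer": token ids -> vectors
            self.blocks = nn.Sequential(*[Block(d_model) for _ in range(n_layers)])  # repeat the block

        def forward(self, ids):
            return self.blocks(self.embed(ids))

    out = TinyTransformer()(torch.randint(0, 1000, (1, 16)))  # a batch of 16 token ids
    print(out.shape)  # torch.Size([1, 16, 64])
    ```

    Real models add positional encodings, masking, dropout, etc.; this only shows the attention-then-MLP repetition described above.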