Comment by pointlessone

2 months ago

OK, I admit that the math flies way over my head and I barely understand the text around the math. Can someone please explain (in basic English) how this is equivalent to the attention mechanism? What frequencies does it talk about? How does it encode positional relations between tokens?

- The Fourier Transform is an invertible operator: it acts on functions, and in the discrete case both the functions and the operator can be written as matrices (see the small sketch after this list). It transforms into what we call frequency space.

- This is most intuitive for signal analysis or images [1].

- Frequency space is inherently "complex", i.e. represented by complex numbers.

- Frequencies have the advantage that they take a "global" view of the problem.

- This mechanism is not equivalent to the attention mechanism. There is definitely a trade-off.

- But it is possible that it captures many of the important relationships that attention captures.

- I do not have good intuition for modReLU right away, but it seems important because it modifies the frequencies but preserves the inverse Fourier transform.

[1]: https://homepages.inf.ed.ac.uk/rbf/HIPR2/fourier.htm
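To make the "invertible operator" point concrete, here is a small numpy sketch (my own illustration, not from the paper): the discrete Fourier transform is literally a matrix, and with unitary normalization its inverse is just the conjugate transpose.

    import numpy as np

    # The DFT as an explicit matrix: F[r, c] = exp(-2*pi*i*r*c/n) / sqrt(n).
    # With this (unitary) normalization the inverse is the conjugate transpose.
    n = 8
    r, c = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    F = np.exp(-2j * np.pi * r * c / n) / np.sqrt(n)

    x = np.random.randn(n)        # a "function" sampled at n points
    X = F @ x                     # forward transform into frequency space
    x_back = F.conj().T @ X       # inverse transform

    print(np.allclose(x, x_back))                        # True: invertible
    print(np.allclose(X, np.fft.fft(x, norm="ortho")))   # matches numpy's FFT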

  • Worth noting that frequency space is often considered one dimensional. Adding the phase is what gives the other dimension.

  • modReLU seems to just shift the magnitude of the input value by a bias, clip negative magnitudes to zero, and keep the original polar angle (rough sketch below).

    Or equivalently, it rotates a (real) bias term to the input's angle and adds that to the original:

      (abs(z) + c)*exp(i*arg(z)) = abs(z)*exp(i*arg(z)) + c*exp(i*arg(z)) = z + c*exp(i*arg(z))
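    In code, that reading of modReLU would look something like this (my own sketch of the formula above, not the paper's implementation):

      import numpy as np

      def modrelu(z, b):
          # Shift the magnitude by a real bias b, clip it at zero,
          # and keep the original phase.
          mag = np.abs(z)
          phase = np.exp(1j * np.angle(z))
          return np.maximum(mag + b, 0.0) * phase

      z = np.array([3 + 4j, 0.1 - 0.1j])
      print(modrelu(z, b=-1.0))   # first entry keeps its angle, second is clipped to 0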

The actual mechanism at least is quite simple. Essentially it takes the FFT of the input embeddings, multiplies it elementwise with weights derived from the input embeddings by an MLP (plus a constant but learnable bias), runs the result through an activation function, and finally takes the inverse FFT.
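Roughly, in numpy-flavoured pseudocode (my own sketch of that description; how the MLP is conditioned on the input, the shapes, and the names here are guesses, not the authors' actual architecture):

    import numpy as np

    def mlp(x, W1, b1, W2, b2):
        # A tiny two-layer MLP producing per-frequency filter weights.
        return np.tanh(x @ W1 + b1) @ W2 + b2

    def fft_mixer(X, params, bias, b_act=-0.1):
        # FFT over the token axis, elementwise filter derived from the input
        # by an MLP plus a learnable bias, modReLU activation, inverse FFT.
        seq_len, d = X.shape
        Xf = np.fft.fft(X, axis=0)                      # (seq_len, d), complex
        context = X.mean(axis=0)                        # crude global summary of the input
        W = mlp(context, *params).reshape(seq_len, d)   # data-dependent filter
        Yf = Xf * (W + bias)                            # elementwise modulation
        mag, phase = np.abs(Yf), np.exp(1j * np.angle(Yf))
        Yf = np.maximum(mag + b_act, 0.0) * phase       # modReLU
        return np.fft.ifft(Yf, axis=0).real             # back to token space

    seq_len, d, h = 16, 32, 64
    rng = np.random.default_rng(0)
    X = rng.standard_normal((seq_len, d))
    params = (rng.standard_normal((d, h)) * 0.1, np.zeros(h),
              rng.standard_normal((h, seq_len * d)) * 0.1, np.zeros(seq_len * d))
    bias = np.ones((seq_len, d))
    print(fft_mixer(X, params, bias).shape)             # (16, 32)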

The "frequencies" are probably something quite abstract. FFT is often used in ways where there aren't really clear frequency interpretation. The use is due to convenient mathematical properties (e.g. the convolution theorem).

Rather amazing if this really works well. Very elegant.

  • I’m still confused. Does it treat the input tokens as a sampled waveform?

    I mean, say I have some text file in ASCII. Do I then just pretend it's a raw wav and do an FFT on it? I guess it can give me some useful information (like whether it looks like any particular natural language or is just random; this is sometimes used in the analysis of simple substitution ciphers). It feels surprising that the reverse FFT can give a coherent output after fiddling with the distribution.

    • Do keep in mind that the FFT is a lossless, equivalent representation of the original data (see the roundtrip sketch after these replies).

    • As I understand it, the token embedding stream would be equivalent to multi-channel sampled waveforms. The model either needs to learn the embeddings by back-propagating through FFT and IFFT, or use some suitable tokenization scheme which the paper doesn't discuss (?).

      It seems unlikely to work for language.

    • It embeds them first into vectors. The input is a real matrix with (context length)x(embedding size) dimensions.

    • No. The FFT is an operation on a discrete domain; it is not the continuous FT. In the same way audio waveforms are processed by an FFT, you bucket frequencies, which is conceptually a vector. Once you have a vector, you do machine learning like you would with any vector (except you do some FT in this case; I haven't read the paper).
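      To illustrate the losslessness point above on the (context length) x (embedding size) matrix mentioned in the sibling reply (a toy example, not from the paper):

        import numpy as np

        # FFT/IFFT roundtrip on a (context length) x (embedding size) real matrix:
        # the transform itself loses no information.
        rng = np.random.default_rng(0)
        E = rng.standard_normal((128, 64))      # toy embedding matrix

        Ef = np.fft.fft(E, axis=0)              # transform along the token axis
        E_back = np.fft.ifft(Ef, axis=0).real   # imaginary part is ~0 for real input

        print(np.allclose(E, E_back))           # True: lossless up to float error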

I am not an expert by _any_ means, but to provide _some_ intuition — self-attention is ultimately just a parameterised token mixer (see https://medium.com/optalysys/attention-fourier-transforms-a-...) i.e. each vector in the output depends upon the corresponding input vector transformed by some function of all the other input vectors.

You can see conceptually how this is similar to a convolution with some simplification (rough sketch below), e.g. https://openreview.net/pdf?id=8l5GjEqGiRG

Convolutions are often used in contexts where you want to account for global state in some way.
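To put the "token mixer" framing in code (a simplified illustration of the idea, not either paper's method; single head, no learned projections): both self-attention and a circular convolution produce each output vector by mixing all the input vectors with some matrix. They differ in how that mixing matrix is built, and the FFT is just a fast way to apply the circulant one.

    import numpy as np

    def softmax(a, axis=-1):
        e = np.exp(a - a.max(axis=axis, keepdims=True))
        return e / e.sum(axis=axis, keepdims=True)

    rng = np.random.default_rng(0)
    n, d = 8, 16
    X = rng.standard_normal((n, d))

    # Attention as a token mixer: the mixing matrix depends on the input.
    A = softmax(X @ X.T / np.sqrt(d))   # simplified: no Q/K/V projections
    attn_out = A @ X

    # Convolution as a token mixer: the mixing matrix is circulant and input-independent.
    kern = rng.standard_normal(n)
    C = np.array([[kern[(i - j) % n] for j in range(n)] for i in range(n)])
    conv_out = C @ X
    conv_fft = np.fft.ifft(np.fft.fft(kern)[:, None] * np.fft.fft(X, axis=0), axis=0).real

    print(np.allclose(conv_out, conv_fft))   # True: the FFT applies the same mixing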