Comment by Y_Y

2 months ago

What's wrong with complex numbers on GPUs? You don't have to do anything special. It's obviously faster if you can make simplifying assumptions like "the input signal is purely real", but otherwise, at worst, you're dealing with pairs of reals (or floats) and don't have to think about the philosophical implications.

https://docs.nvidia.com/cuda/cufft/
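
To make the "pairs of reals" point concrete, here is a minimal CUDA sketch (the kernel name and sizes are illustrative, not from either comment): cuComplex's cuFloatComplex is just a float2, and a complex multiply compiles down to ordinary real multiplies and adds.

```cuda
#include <cuda_runtime.h>
#include <cuComplex.h>  // cuFloatComplex is a float2: {x = re, y = im}
#include <cstdio>

// Illustrative elementwise kernel: each complex multiply is four real
// multiplies and two adds on pairs of floats -- no complex-number ALU.
__global__ void cmul_kernel(const cuFloatComplex* a,
                            const cuFloatComplex* b,
                            cuFloatComplex* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        // cuCmulf expands to (a.x*b.x - a.y*b.y, a.x*b.y + a.y*b.x)
        out[i] = cuCmulf(a[i], b[i]);
    }
}

int main() {
    const int n = 4;
    cuFloatComplex ha[n], hb[n], hc[n];
    for (int i = 0; i < n; ++i) {
        ha[i] = make_cuFloatComplex(float(i), 1.0f);
        hb[i] = make_cuFloatComplex(2.0f, -float(i));
    }
    cuFloatComplex *da, *db, *dc;
    cudaMalloc(&da, n * sizeof(cuFloatComplex));
    cudaMalloc(&db, n * sizeof(cuFloatComplex));
    cudaMalloc(&dc, n * sizeof(cuFloatComplex));
    cudaMemcpy(da, ha, n * sizeof(cuFloatComplex), cudaMemcpyHostToDevice);
    cudaMemcpy(db, hb, n * sizeof(cuFloatComplex), cudaMemcpyHostToDevice);
    cmul_kernel<<<1, 32>>>(da, db, dc, n);
    cudaMemcpy(hc, dc, n * sizeof(cuFloatComplex), cudaMemcpyDeviceToHost);
    for (int i = 0; i < n; ++i)
        printf("(%g, %g)\n", cuCrealf(hc[i]), cuCimagf(hc[i]));
    cudaFree(da); cudaFree(db); cudaFree(dc);
    return 0;
}
```

The cuCmulf helper from cuComplex.h is real arithmetic all the way down, which is what both commenters are gesturing at from opposite directions.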

GPUs don't implement complex floating-point math natively; you have to bolt it on as extra logic over pairs of reals. cuFFT works because the real and imaginary component paths through the butterfly network can be predicted recursively. Between layers you have an FFT -> IFFT round trip (see the sketch below), so the question is whether that cost is worth it memory-locality-wise, or whether it's better to tamp down the n in n^2 self-attention through windowing, batching, gating, or many other solutions.

I'm not saying this work isn't cool. FNOs are really cool, especially for solving PINNs and related continuous problems. But are LLMs continuous problems? Does n have to span the entire context window? I'll probably end up experimenting with this since they've made the code available, but sometimes good theory is just good theory, and not necessarily practical.
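
For reference, a hedged cuFFT sketch of the per-layer round trip being questioned: forward transform, a pointwise op in the frequency domain, inverse transform. The filter kernel and sizes are illustrative, not from any particular FNO codebase.

```cuda
#include <cuda_runtime.h>
#include <cufft.h>
#include <cuComplex.h>

// Illustrative pointwise op in the frequency domain. cuFFT's inverse
// transform is unnormalized, so the 1/n rescale is folded in here.
__global__ void freq_filter(cufftComplex* x, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float s = 1.0f / n;
        x[i] = make_cuFloatComplex(s * x[i].x, s * x[i].y);
    }
}

int main() {
    const int n = 1024;  // illustrative sequence length
    cufftComplex* d;
    cudaMalloc(&d, n * sizeof(cufftComplex));
    cudaMemset(d, 0, n * sizeof(cufftComplex));

    cufftHandle plan;
    cufftPlan1d(&plan, n, CUFFT_C2C, 1);

    // One "layer" of Fourier mixing: fft -> pointwise op -> ifft.
    cufftExecC2C(plan, d, d, CUFFT_FORWARD);
    freq_filter<<<(n + 255) / 256, 256>>>(d, n);
    cufftExecC2C(plan, d, d, CUFFT_INVERSE);

    cudaDeviceSynchronize();
    cufftDestroy(plan);
    cudaFree(d);
    return 0;
}
```

Compile with nvcc and -lcufft. Every layer's round trip streams the whole length-n sequence through global memory twice, plus the butterfly passes in between, which is exactly the memory-locality cost weighed above against shrinking n in the attention itself.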