
Comment by xphos

2 months ago

Yeah, but a comparison of power utilization is needed too. You can build hardware that is better than a GPU at one specific thing, e.g. making MatMul really efficient and fast. However, actual FFT hardware would annihilate a GPU on both power and speed at large enough n, simply because MatMul performs O(n^3) multiplications as opposed to the O(n log n) multiplies an FFT needs (complex versus real multiplies notwithstanding).

FFT is only O(N log N) for a vector of length N. With respect to matrices, for an N by N matrix it would be more like O(N^2 log N), since you would perform an FFT for each row or column.
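
A quick numpy sketch of that point (an editorial illustration, not from the original comment): the 2D transform of an N×N matrix is just N row FFTs followed by N column FFTs, i.e. 2N one-dimensional O(N log N) transforms, which is where the O(N^2 log N) count comes from.

```python
import numpy as np

N = 64
A = np.random.randn(N, N)

# 2D FFT done "by hand": N FFTs over the rows, then N FFTs over the columns.
# Each 1D FFT is O(N log N), and we do 2N of them, hence O(N^2 log N) overall.
rows = np.fft.fft(A, axis=1)        # FFT of every row
by_hand = np.fft.fft(rows, axis=0)  # FFT of every column of the row-transformed matrix

assert np.allclose(by_hand, np.fft.fft2(A))
```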

  • Thank you for that catch.

    I still think we are comparing ASIC matmul hardware to non-ASIC FFT hardware. The TPU hardware in question does a 256x256 matrix multiplication in linear time by using a 256x256 multiplier grid. An FFT ASIC could likely do the same thing, but it would be able to handle a much larger N before memory becomes the bottleneck.

    • Part of the FFT can be accelerated on GPU hardware, which is full of butterfly-like instructions within warps. Using overlap-add/overlap-save and cuFFTDx can also exploit heavy parallelism within shared memory (see the overlap-add sketch after this thread). I had a hard time reproducing the tcFFT paper (for lack of time and tensor-core skills, I guess), but apparently you can also keep your data in Tensor Core registers.

    • The downside of a dedicated ASIC, besides the fixed size of the multipliers (which isn't that big of a deal, because matrix multiplication can be broken down into blocks anyway), is that the precision (16-bit, 8-bit) and the data format (floating point vs. integer/fixed point) are immutable.
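
On the block decomposition mentioned in the last comment, here is a minimal numpy sketch (an editorial illustration; `TILE = 256` is chosen only to mirror the multiplier-grid size discussed above): an arbitrary-size matmul reduces to a sequence of fixed-size tile products, which is exactly the workload a fixed-size MAC grid would be fed.

```python
import numpy as np

TILE = 256  # assumed fixed multiplier-grid size, for illustration only

def blocked_matmul(A, B, tile=TILE):
    """Compute A @ B by accumulating fixed-size tile products."""
    n, k = A.shape
    k2, m = B.shape
    assert k == k2
    C = np.zeros((n, m))
    for i in range(0, n, tile):
        for j in range(0, m, tile):
            for p in range(0, k, tile):
                # each tile-tile product is what a fixed 256x256 grid would compute
                C[i:i+tile, j:j+tile] += A[i:i+tile, p:p+tile] @ B[p:p+tile, j:j+tile]
    return C

A = np.random.randn(512, 384)
B = np.random.randn(384, 640)
assert np.allclose(blocked_matmul(A, B), A @ B)
```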
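
And here is the overlap-add idea from the GPU comment, sketched in plain numpy rather than cuFFTDx (the function name, structure, and block size of 4096 are illustrative assumptions, not taken from any library): a long convolution is split into blocks, each block is convolved with the kernel via FFT, and the overlapping tails are summed.

```python
import numpy as np

def overlap_add_convolve(signal, kernel, block_size=4096):
    """Linear convolution via FFT blocks (overlap-add method)."""
    m = len(kernel)
    n_fft = block_size + m - 1              # length of each block's linear convolution
    kernel_f = np.fft.rfft(kernel, n_fft)   # transform the kernel once
    out = np.zeros(len(signal) + m - 1)
    for start in range(0, len(signal), block_size):
        block_f = np.fft.rfft(signal[start:start + block_size], n_fft)
        seg = np.fft.irfft(block_f * kernel_f, n_fft)   # this block convolved with the kernel
        end = min(start + n_fft, len(out))
        out[start:end] += seg[:end - start]             # overlapping tails add up
    return out

# sanity check against direct convolution
x = np.random.randn(20000)
h = np.random.randn(257)
assert np.allclose(overlap_add_convolve(x, h), np.convolve(x, h))
```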