Comment by yagizdegirmenci
2 months ago
The section "3.3 Implementation" is mostly about hardware-level speedups, which basically says:
On GPUs the FFT is consistently faster, but on TPUs, matrix multiplication was faster for shorter sequences.
Yeah, but a comparison of power utilization is needed too. You can build hardware that is better than a GPU at one thing, i.e. MatMul being really efficient and fast. However, actual FFT hardware would annihilate it on both power and speed at large enough n, simply because the number of multiplications MatMul performs is O(n^3), as opposed to the O(n log n) multiplies the FFT does (complex versus real multiplies notwithstanding).
FFT is only O(N log N) for a vector of length N. With respect to matrices, for an N-by-N matrix it would be more like O(N^2 log N), since you would perform an FFT for each row or column.
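As a back-of-the-envelope check of the counts being argued above (a hedged sketch, not from the original comments; the function names are made up for illustration), the multiply counts for a naive n x n matmul versus a 2D FFT can be compared directly:

```python
import math

def matmul_multiplies(n):
    # Naive n x n matrix multiplication: n multiplies per output entry,
    # n*n output entries -> n^3 multiplies total.
    return n ** 3

def fft2_multiplies(n):
    # 2D FFT of an n x n matrix = n row FFTs followed by n column FFTs.
    # A radix-2 length-n FFT costs on the order of (n/2)*log2(n) complex
    # multiplies, so 2n such FFTs give roughly n^2 * log2(n) overall
    # (constant factors and complex-vs-real multiplies ignored).
    return int(n * n * math.log2(n))

for n in (256, 1024, 4096):
    ratio = matmul_multiplies(n) / fft2_multiplies(n)
    print(f"n={n}: matmul={matmul_multiplies(n)}, "
          f"fft2={fft2_multiplies(n)}, ratio={ratio:.0f}x")
```

Even at n=256 (the TPU grid size mentioned below), the asymptotic gap is already large, which is the point both replies are circling.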
Thank you for that catch.
I still think we are comparing ASIC MatMul hardware to non-ASIC FFT hardware. The given TPU hardware does 256x256 matrix multiplication in linear time by using a 256x256 multiplier grid. An FFT ASIC could do the same kind of thing but handle a much larger N before memory becomes the bottleneck.