Comment by chemmail

3 months ago

All you have to look at is ASIC miners. Once they had them, they were 10x faster than GPUs easily and made GPUs useless for those algos. Something very similar can happen soon.

The fundamentals are different. Bitcoin mining is not intrinsically suited to acceleration on a GPU: it's SHA-256 hashing, a not-very-wide serial integer workload.

AI inference on the other hand is basically just very large floating point tensor matrix multiplication. What does an ASIC for matmul look like? A GPU.
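For concreteness, here is a minimal sketch (hypothetical toy shapes, pure Python) of the core operation the comment is referring to: a single "linear layer" of inference is just a matrix multiply of activations against weights.

```python
# Minimal sketch: the core op of inference is matmul.
# Shapes and values here are hypothetical, just to illustrate.

def matmul(a, b):
    """Multiply an m x k matrix by a k x n matrix (lists of lists)."""
    m, k, n = len(a), len(b), len(b[0])
    return [[sum(a[i][p] * b[p][j] for p in range(k)) for j in range(n)]
            for i in range(m)]

# One "linear layer": a 1x3 activation row times a 3x2 weight matrix.
x = [[1, 2, 3]]
w = [[1, 0],
     [0, 1],
     [1, 1]]
print(matmul(x, w))  # -> [[4, 5]]
```

Real inference does this at enormous scale (thousands-wide matrices, billions of MACs per token), which is why memory bandwidth and MAC throughput dominate the hardware question.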

  • No, GPUs are not a particularly ideal architecture for AI inference, it's just inference needs way more memory bandwidth than a general purpose CPU's memory hierarchy can handle.

    > What does an ASIC for matmul look like?

    A systolic array, which is ultimately quite different from a GPU. This is why TPUs et al. are a thing.

    In general a systolic array gives you quadratically more compute per unit of data movement: an NxN array performs N^2 MACs per step while operands only move in and out along the edges. For example, with a 256x256 array it takes 256 cycles to shift operands in and another 256 to shift results out, but in those 512 cycles you accomplish ~65k MACs, a 128x speedup over a serial unit.
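    The arithmetic behind that speedup claim can be sketched as a simple cycle-count model (assumptions: one MAC per processing element per cycle, and 2N cycles of shift-in/shift-out overhead for an NxN tile):

    ```python
    # Hypothetical cycle-count model for the systolic-array speedup above.
    # Assumes 1 MAC per PE per cycle and 2*N cycles of edge I/O per NxN tile.

    def systolic_speedup(n: int) -> float:
        macs = n * n                  # work in one NxN MAC tile
        serial_cycles = macs          # serial baseline: 1 MAC per cycle
        systolic_cycles = 2 * n       # shift operands in, shift results out
        return serial_cycles / systolic_cycles

    print(systolic_speedup(256))      # 65536 MACs in 512 cycles -> 128.0
    ```

    The speedup grows as N/2, so bigger arrays win, up to the point where feeding the edges becomes the bottleneck.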

  • Sorta? If that was the full story, TPU would not be a thing.

    • I'm not an expert in chip design by any means, but I think it's fair to say "TPU" is largely a marketing term, and that a TPU is not substantially different from a GPU like the H100, whose cores are literally called "Tensor Cores."

      1 reply →

    • TPUs are not fundamentally different or more efficient than NVIDIA hardware. They are just cutting out the middleman.