Comment by walterbell
6 months ago
Additional text from the abstract of Google's 2017 paper:
This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory.
The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency.
The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters.
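For the curious, the quoted 92 TOPS figure follows directly from the MAC count and the TPU's 700 MHz clock (the clock rate is stated in the body of the paper, not in the abstract above). A quick sanity check:

    # Peak-throughput check for the quoted 92 TeraOps/second figure.
    macs = 256 * 256          # 65,536 8-bit MACs in the systolic matrix unit
    ops_per_mac = 2           # each MAC counts as a multiply plus an add
    clock_hz = 700e6          # TPU v1 clock rate (from the paper body)
    peak_tops = macs * ops_per_mac * clock_hz / 1e12
    print(peak_tops)          # ~91.8, i.e. the quoted ~92 TOPS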
What's the memory bandwidth? IIRC that's the limiting factor in LLM hardware today (rough sketch of why at the end of this thread).
Slide 21, https://files.futurememorystorage.com/proceedings/2024/20240...
Hence the out-of-date part of my comment.
Recent (2024) description by Google, https://cloud.google.com/blog/transform/ai-specialized-chips...
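On the memory-bandwidth question above, a rough back-of-the-envelope sketch of why autoregressive decode tends to be bandwidth-bound. The model size and bandwidth figures here are illustrative assumptions, not numbers from the linked slides:

    # Decode reads (roughly) every weight once per generated token, so at
    # batch size 1, tokens/sec is capped by memory bandwidth / weight bytes.
    # Illustrative numbers only, not from the linked slides.
    params = 70e9              # hypothetical 70B-parameter model
    bytes_per_param = 2        # fp16/bf16 weights
    mem_bandwidth = 3.35e12    # ~3.35 TB/s, roughly an H100-class HBM figure
    weight_bytes = params * bytes_per_param
    max_tokens_per_sec = mem_bandwidth / weight_bytes
    print(round(max_tokens_per_sec, 1))  # ~23.9 tokens/sec upper bound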