Google TPU engineers used open-source Chisel for ASIC design (2018), https://news.ycombinator.com/item?id=41148532
The site confuses the inference engine in the Edge TPU with the datacenter TPU. They are two unrelated projects. Based on the paper they're borrowing from, I think they are trying to go for a much older datacenter inference-only TPU, or only implementing the inference capabilities of the datacenter TPU.
Are there recent papers on datacenter TPU?
Yeowzers, that FAQ is filled with watch-outs.
The /forks page lists https://github.com/csirlin/OpenTGPTPU, which had a commit 3 hours ago, though it seems they have not yet updated the FAQ for their version. Anyway, the fact that it has commits more recent than 8 years ago makes it seem like a more reasonable submission.
There is an excellent paper and talk on how Google's TPU cluster is managed: https://www.usenix.org/conference/nsdi24/presentation/zu.
Can [OpenTPU] TPUs be fabricated out of graphene, with nanoimprinting or a more efficient approach?
From "A carbon-nanotube-based tensor processing unit" (2024):
What about QPUs though?
Can QPUs (Quantum Processing Units) built with electrons in superconducting graphene ever be faster than photons in integrated nanophotonics?
There are integrated parametric single-photon emitters and detectors.
Is there a lower cost integrated nanophotonic coherent light source for [quantum] computing than a thin metal wire?
"Electrons turn piece of wire into laser-like light source" (2022) https://news.ycombinator.com/item?id=33493885
[2017] (https://arxiv.org/abs/1704.04760)
[May 2025] (https://github.com/csirlin/OpenTGPTPU/commits/master)
Wow, they have kept working on this! Thanks for pointing this out! Very impressive.
> The TPU is Google's custom ASIC for accelerating the inference phase of neural network computations.
this seems hopelessly out of date/confused
They're not confused at all, this is just a (correct) description of TPU v1. The repository is 8 years old.
Additional text from the abstract of Google's 2017 paper:
This paper evaluates a custom ASIC---called a Tensor Processing Unit (TPU)---deployed in datacenters since 2015 that accelerates the inference phase of neural networks (NN). The heart of the TPU is a 65,536 8-bit MAC matrix multiply unit that offers a peak throughput of 92 TeraOps/second (TOPS) and a large (28 MiB) software-managed on-chip memory.
The TPU's deterministic execution model is a better match to the 99th-percentile response-time requirement of our NN applications than are the time-varying optimizations of CPUs and GPUs (caches, out-of-order execution, multithreading, multiprocessing, prefetching, ...) that help average throughput more than guaranteed latency.
The lack of such features helps explain why, despite having myriad MACs and a big memory, the TPU is relatively small and low power. We compare the TPU to a server-class Intel Haswell CPU and an Nvidia K80 GPU, which are contemporaries deployed in the same datacenters.
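A quick back-of-the-envelope check of the abstract's 92 TOPS figure, as a Python sketch (the 256x256 array layout and ~700 MHz clock are assumptions taken from the body of the 2017 paper, not from the abstract quoted above):

    # Sanity check of the TPU v1 peak-throughput figure.
    macs = 256 * 256            # 65,536 8-bit MAC units (256x256 systolic array, per the paper body)
    ops_per_mac_per_cycle = 2   # one multiply plus one add counted as two ops
    clock_hz = 700e6            # ~700 MHz nominal clock (assumption from the paper body)

    peak_ops = macs * ops_per_mac_per_cycle * clock_hz
    print(f"peak ~= {peak_ops / 1e12:.1f} TOPS")  # ~91.8 TOPS, i.e. the quoted 92 TOPS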
what's the memory bandwidth? IIRC that is the limiting factor in LLM hardware today
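For the v1 chip described above, the limit the paper itself emphasizes is the roughly 34 GB/s of off-chip DDR3 bandwidth feeding weights; a rough roofline-style Python sketch (the 34 GB/s figure is an assumption recalled from the 2017 paper, not from the abstract quoted here):

    # Roofline-style sketch: arithmetic intensity needed before the TPU v1 is
    # compute-bound rather than memory-bound.
    peak_ops_per_s = 92e12        # 92 TOPS peak (from the abstract)
    weight_bw_bytes_per_s = 34e9  # ~34 GB/s off-chip DDR3 weight bandwidth (assumed, from the paper body)

    ridge_point = peak_ops_per_s / weight_bw_bytes_per_s
    print(f"ridge point ~= {ridge_point:.0f} ops per byte")  # ~2700

    # A layer that reads each 8-bit weight once does ~2 ops per weight byte,
    # so without batching or heavy weight reuse the chip runs far below peak;
    # that is why memory bandwidth, not peak TOPS, is the practical limit.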
hence the out of date part of my comment
How would you describe it instead? Curious and learning
Google does everything, both inference and training, on their TPUs.
Inference is easier, since the person deploying a model knows the architecture ahead of time and therefore can write custom code for their particular model.
When training you want to be as flexible as possible. The framework and hardware should not impose any particular architecture. This means lots of kernels and combinations of kernels. Miss one and you're out.
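A toy numpy sketch of that kernel-coverage point (a hypothetical two-layer model, not any real TPU software stack): the forward pass of a fixed model uses only a couple of op types, while training additionally needs a backward kernel for every forward op.

    import numpy as np

    # Inference of a fixed, known model: the whole forward pass reduces to a
    # small, enumerable set of kernels (here just matmul and ReLU) that can be
    # hand-mapped to a matrix unit.
    def mlp_forward(x, w1, w2):
        h = np.maximum(x @ w1, 0.0)   # matmul + ReLU
        return h @ w2                 # matmul

    # Training the same model also needs a backward counterpart for each
    # forward op (plus an optimizer step); every extra op type in a model
    # widens the set of kernels the framework and hardware must support.
    def mlp_backward(x, w1, w2, grad_out):
        h_pre = x @ w1
        h = np.maximum(h_pre, 0.0)
        grad_w2 = h.T @ grad_out            # backward of second matmul, wrt weights
        grad_h = grad_out @ w2.T            # backward of second matmul, wrt inputs
        grad_h_pre = grad_h * (h_pre > 0)   # backward of ReLU
        grad_w1 = x.T @ grad_h_pre          # backward of first matmul, wrt weights
        return grad_w1, grad_w2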