
Comment by fooker

1 day ago

> That's exactly what Nvidia is doing with tensor cores.

Except the native width of Tensor Cores is about 8-32 (depending on scalar type), whereas the width of TPUs is up to 256. The difference in scale is massive.
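The "massive difference in scale" can be made concrete with a back-of-the-envelope count: a W×W multiply-accumulate array contains W² MAC units, so widening from roughly 16 (a tensor-core-sized tile) to 256 (a TPU-sized systolic array) multiplies the compute hardware by about 256x, not 16x. A minimal sketch, with illustrative widths rather than exact specs:

```python
def mac_count(width: int) -> int:
    """Number of multiply-accumulate units in a width x width array."""
    return width * width

# Illustrative widths: ~16 for a tensor core tile, 256 for a TPU-style array.
tensor_core = mac_count(16)    # 256 MACs
tpu_array = mac_count(256)     # 65,536 MACs
print(tpu_array // tensor_core)  # scaling factor between the two
```

Area, wiring, and power budgets scale with that quadratic MAC count, which is one reason "just make it wider" is not a free knob to turn.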

  • If it turns out to be useful, can't Nvidia just tweak a parameter in their Verilog and declare victory?

    If not, what's fundamentally difficult about going from 32 to 256 here?