fooker (2 days ago):
That's exactly what Nvidia is doing with tensor cores.

bjourne (2 days ago):
Except the native width of Tensor Cores is about 8-32 (depending on scalar type), whereas the width of TPUs is up to 256. The difference in scale is massive.
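
(For scale, a minimal sketch, assuming "width" here means the side of a w x w systolic array, which is how TPU matrix units are usually described: per-unit throughput grows with the square of that width.)

    # Throughput of a single w x w systolic array, per cycle.
    # Illustrative arithmetic only, not vendor specifications.
    def macs_per_cycle(width: int) -> int:
        return width * width  # one multiply-accumulate per cell

    for w in (8, 32, 256):
        print(f"width {w:3d}: {macs_per_cycle(w):6d} MACs/cycle")
    # width   8:     64 MACs/cycle
    # width  32:   1024 MACs/cycle
    # width 256:  65536 MACs/cycle  (64x a 32-wide unit)
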
fooker (1 day ago):
If it turns out to be useful, can't Nvidia just tweak a parameter in their Verilog and declare victory? If not, what's fundamentally difficult about going from 32 to 256 here?
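
(One plausible answer, sketched under stated assumptions rather than any knowledge of Nvidia's actual constraints: the Verilog parameter itself may well be tweakable, but a wider array is harder to keep busy. Output tiles that don't fill the array waste lanes, and the waste grows with width.)

    import math

    # Back-of-the-envelope utilization model (illustrative, not vendor
    # data): an m x n output tiled into w x w blocks wastes every lane
    # that falls in the padding of a partial tile.
    def utilization(m: int, n: int, w: int) -> float:
        tiles = math.ceil(m / w) * math.ceil(n / w)
        return (m * n) / (tiles * w * w)

    for w in (32, 256):
        print(f"width {w:3d}: utilization on a 100x100 matmul = "
              f"{utilization(100, 100, w):.0%}")
    # width  32: utilization on a 100x100 matmul = 61%
    # width 256: utilization on a 100x100 matmul = 15%
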
saagarjha (1 day ago):
Nobody cares about width; they care about TFLOPs.
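
(To make that concrete: peak throughput is roughly 2 x MACs-per-cycle x clock x unit count, so many narrow units can land in the same ballpark as a few wide ones. The numbers below are made-up round figures, not either vendor's real clocks or unit counts.)

    # Peak FLOP/s = 2 FLOPs per MAC * w^2 MACs/cycle * units * clock.
    def peak_tflops(width: int, units: int, clock_ghz: float) -> float:
        return 2 * width * width * units * clock_ghz * 1e9 / 1e12

    # Hypothetical configurations, chosen only to illustrate the trade-off:
    print(peak_tflops(width=16, units=512, clock_ghz=1.5))  # ~393 TFLOPs
    print(peak_tflops(width=256, units=2, clock_ghz=1.0))   # ~262 TFLOPs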