
Comment by hoss1474489

14 hours ago

GPUs in x16 slots are still important for LLM work, especially multi-GPU setups, where a lot of data has to move between cards during computation.

An x16 PCIe 6.0 link has more bandwidth than any dual-channel DDR5 memory kit.
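A rough back-of-envelope check of that claim (figures are assumptions: PCIe 6.0 runs 64 GT/s per lane, the FLIT overhead factor is approximate, and DDR5-6000 stands in for a typical enthusiast kit):

```python
# PCIe 6.0 x16 vs dual-channel DDR5: back-of-envelope bandwidth comparison.
PCIE6_GTS_PER_LANE = 64          # GT/s per lane (PAM4 signaling)
LANES = 16
FLIT_EFFICIENCY = 0.94           # rough FLIT-mode protocol overhead (assumption)

# GB/s per direction
pcie6_x16_gbps = PCIE6_GTS_PER_LANE * LANES / 8 * FLIT_EFFICIENCY

DDR5_MTS = 6000                  # DDR5-6000 kit (assumption)
CHANNELS = 2
BYTES_PER_TRANSFER = 8           # 64-bit channel width

# GB/s theoretical peak
ddr5_gbps = DDR5_MTS * CHANNELS * BYTES_PER_TRANSFER / 1000

print(f"PCIe 6.0 x16: ~{pcie6_x16_gbps:.0f} GB/s per direction")
print(f"Dual-channel DDR5-6000: {ddr5_gbps:.0f} GB/s peak")
```

With these numbers the x16 link lands around 120 GB/s per direction versus 96 GB/s for the memory kit, so the claim holds for common DDR5 speeds.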

Depends on what you're doing. I'm pretty sure the bandwidth needed for inference isn't much.

  • Depends on whether it's tensor parallel or pipeline parallel. Only PP moves little data between cards; TP moves a lot.
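A rough per-token traffic estimate illustrates the gap. The model shape and the communication pattern are assumptions (a ~70B-class decoder; TP doing two all-reduces per layer on the hidden state, PP doing one hidden-state handoff per stage boundary):

```python
# Per-token inter-GPU traffic for one decode step (assumed model shape).
HIDDEN = 8192          # hidden size (assumption, ~70B-class model)
LAYERS = 80
DTYPE_BYTES = 2        # fp16/bf16 activations
GPUS = 2

# Tensor parallel: ~2 all-reduces per layer on the hidden-state vector.
# A ring all-reduce moves ~2*(n-1)/n of the buffer per GPU.
allreduce_factor = 2 * (GPUS - 1) / GPUS
tp_bytes = LAYERS * 2 * HIDDEN * DTYPE_BYTES * allreduce_factor

# Pipeline parallel: one hidden-state transfer per stage boundary.
pp_bytes = (GPUS - 1) * HIDDEN * DTYPE_BYTES

print(f"TP: ~{tp_bytes / 1e6:.1f} MB moved per token")
print(f"PP: ~{pp_bytes / 1e3:.1f} KB moved per token")
```

Under these assumptions TP moves megabytes per token while PP moves kilobytes, which is why interconnect bandwidth matters far more for tensor parallelism.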