Comment by hoss1474489

15 hours ago

GPUs in 16x slots are still important for LLM work, especially multi-GPU setups, where a lot of data has to move between cards during computation.

Depends on what you're doing. I'm pretty sure inference doesn't need much bandwidth.

  • Depends on whether it's tensor parallel (TP) or pipeline parallel (PP). PP doesn't pass much data between cards; TP does. Rough numbers below.
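Back-of-the-envelope sketch of the difference (the model dimensions and per-layer communication counts are illustrative assumptions, roughly Llama-70B-shaped with fp16 activations, not measurements):

```python
# Rough per-token interconnect traffic for TP vs PP inference.
# All numbers below are assumed for illustration.

hidden_size = 8192       # model hidden dimension (assumed)
num_layers = 80          # transformer layers (assumed)
bytes_per_val = 2        # fp16/bf16 activations

# TP (Megatron-style): roughly two all-reduces per layer
# (after attention, after MLP), each moving one hidden-state
# vector per token between the GPUs.
tp_bytes_per_token = 2 * num_layers * hidden_size * bytes_per_val

# PP: activations cross a GPU boundary only at each pipeline
# stage cut, once per token.
num_stages = 4           # GPUs in the pipeline (assumed)
pp_bytes_per_token = (num_stages - 1) * hidden_size * bytes_per_val

print(f"TP: ~{tp_bytes_per_token / 1e6:.1f} MB per token")    # ~2.6 MB
print(f"PP: ~{pp_bytes_per_token / 1e6:.3f} MB per token")    # ~0.049 MB
```

Under these assumptions TP moves on the order of 50x more data per token than PP, and it does so synchronously inside every layer, which is why PCIe link width matters far more for TP than for PP.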