Comment by reissbaker
7 hours ago
Generally speaking, 8xH200s will be a lot cheaper than 16xH100s, and faster too. But both should technically work.
> Generally speaking, 8xH200s will be a lot cheaper than 16xH100s, and faster too. But both should technically work.
You can do it, and it may be OK for a single user with idle waiting time, but performance/throughput will be roughly halved (closer to 2/3), and free context will be more limited with 8xH200 vs 16xH100 (assuming a decent interconnect). Depending on the use case and workload, 16xH100 (or 16xB200) may be the better config for cost optimization. There is often a huge economy of scale with such large mixture-of-experts models, to the point that it can even be cheaper to use 96 GPUs instead of just 8 or 16. The reasons are complicated and involve a better prefill cache and less memory transfer per node.
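For a rough sense of the memory side of this, here is a minimal back-of-envelope sketch (not a benchmark). The per-GPU HBM capacities are the published spec values (H100 SXM 80 GB, H200 141 GB, B200 192 GB); the ~700 GB weight footprint and the 10% runtime overhead are hypothetical example numbers, not figures for any specific model:

```python
# Back-of-envelope: aggregate HBM vs. memory left for KV / prefill cache.
# The weight footprint and overhead fraction are illustrative assumptions.

HBM_PER_GPU_GB = {"H100": 80, "H200": 141, "B200": 192}

def leftover_for_kv_cache(gpu: str, count: int, weights_gb: float,
                          overhead_frac: float = 0.10) -> float:
    """Aggregate HBM minus model weights and a rough runtime/activation overhead."""
    total = HBM_PER_GPU_GB[gpu] * count
    return total - weights_gb - overhead_frac * total

# Hypothetical large FP8 mixture-of-experts checkpoint (~700 GB of weights).
for gpu, count in [("H200", 8), ("H100", 16), ("B200", 16)]:
    total = HBM_PER_GPU_GB[gpu] * count
    free = leftover_for_kv_cache(gpu, count, weights_gb=700)
    print(f"{count}x{gpu}: {total} GB total HBM, ~{free:.0f} GB free for KV cache")
```

Under those assumptions, 16xH100 (1280 GB aggregate) leaves noticeably more room for KV cache than 8xH200 (1128 GB aggregate), which is where the "more limited free context" point comes from.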