Comment by Neywiny, 2 months ago:
32 cores on a die, 256 on a package. Still stunning though.

11 comments

bee_rider, 2 months ago:
How do people use these things? Map MPI ranks to dies instead of compute nodes?

wmf, 2 months ago:
Yeah, there's an option to configure one NUMA node per CCD that can speed up some apps.

janwas, 1 month ago:
Gemma.cpp has nested thread pools: one per chiplet, and one across all chiplets. With such core counts it is quite important to minimize any kind of sharing, even RMW atomics.

markhahn, 2 months ago:
MPI is fine, but have you heard of threads?

bee_rider, 2 months ago:
Sure, the conventional way of doing things is OpenMP on a node and MPI across nodes, but:
* It just seems like a lot of threads to wrangle without some hierarchy. Nested OpenMP is also possible…
* I’m wondering if explicit communication is better from one die to another in this sort of system.
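For the one-NUMA-node-per-CCD option wmf mentions, a common way to use it from the command line is to bind a process (or MPI ranks) to individual nodes. A hedged sketch; `./app` is a placeholder binary, and the node number depends on the machine's layout:

```shell
# Inspect the NUMA layout after enabling the per-CCD NUMA BIOS option
numactl --hardware

# Pin one process and its memory allocations to a single NUMA node (CCD)
numactl --cpunodebind=2 --membind=2 ./app

# With Open MPI, map and bind ranks by NUMA node, i.e. roughly one rank
# per die, as bee_rider suggests
mpirun --map-by numa --bind-to numa ./app
```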