Neywiny 1 month ago
32 cores on a die, 256 on a package. Still stunning though

bee_rider 1 month ago
How do people use these things? Map MPI ranks to dies, instead of compute nodes?
wmf 1 month ago Yeah, there's an option to configure one NUMA node per CCD that can speed up some apps.
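A minimal sketch of how software might take advantage of that layout, assuming Linux with libnuma and a BIOS configured to expose one NUMA node per CCD (link with -lnuma); the worker loop and binding calls are illustrative, not taken from any particular application:

    // Enumerate the NUMA nodes the OS exposes (one per CCD in this
    // configuration) and bind one worker's execution and allocations
    // to each node so its data stays in that CCD's local domain.
    #include <numa.h>
    #include <cstdio>
    #include <thread>
    #include <vector>

    int main() {
      if (numa_available() < 0) {
        std::fprintf(stderr, "no NUMA support\n");
        return 1;
      }
      const int nodes = numa_num_configured_nodes();  // e.g. one per CCD
      std::printf("NUMA nodes visible to the OS: %d\n", nodes);

      std::vector<std::thread> workers;
      for (int node = 0; node < nodes; ++node) {
        workers.emplace_back([node] {
          numa_run_on_node(node);    // pin this worker to the CCD's node
          numa_set_preferred(node);  // prefer allocations from that node
          // ... per-CCD work on node-local data goes here ...
        });
      }
      for (auto& t : workers) t.join();
    }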
janwas 1 month ago Gemma.cpp has nested thread pools, one per chiplet, and one across all chiplets. With such core counts it is quite important to minimize any kind of sharing, even RMW atomics.
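A hedged illustration of the point about RMW atomics (not Gemma.cpp's actual code; the names here are invented): rather than having every core fetch_add into one shared counter, each thread writes to its own cache-line-padded slot and a single reduction happens at the end.

    // Sharded accumulation sketch: a shared std::atomic fetch_add would
    // bounce one cache line between all cores; private padded slots keep
    // the hot loop local, with sharing deferred to one final reduction.
    #include <algorithm>
    #include <cstdint>
    #include <cstdio>
    #include <thread>
    #include <vector>

    struct alignas(64) PaddedCount {  // one slot per cache line
      uint64_t value = 0;
    };

    int main() {
      const unsigned kThreads =
          std::max(1u, std::thread::hardware_concurrency());
      std::vector<PaddedCount> slots(kThreads);

      std::vector<std::thread> pool;
      for (unsigned t = 0; t < kThreads; ++t) {
        pool.emplace_back([t, &slots] {
          for (int i = 0; i < 1000000; ++i) {
            slots[t].value++;  // private slot: no RMW atomic, no sharing
          }
        });
      }
      for (auto& th : pool) th.join();

      uint64_t total = 0;  // the only cross-thread pass
      for (const auto& s : slots) total += s.value;
      std::printf("total = %llu\n", static_cast<unsigned long long>(total));
    }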
markhahn 1 month ago
MPI is fine, but have you heard of threads?

bee_rider 1 month ago
Sure, the conventional way of doing things is OpenMP on a node and MPI across nodes, but
* It just seems like a lot of threads to wrangle without some hierarchy. Nested OpenMP is also possible (see the sketch below)…
* I’m wondering if explicit communication is better from one die to another in this sort of system.
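For the nested-OpenMP option, a minimal sketch under assumed numbers (8 dies of 32 cores; something like OMP_PROC_BIND=spread,close and OMP_PLACES=cores would be needed so the inner teams actually stay on their die):

    // Outer team: one thread per die; inner team: that die's cores.
    // Communication between dies then happens only at the outer level
    // (or via MPI ranks mapped to dies, as discussed above).
    #include <omp.h>
    #include <cstdio>

    int main() {
      const int kDies = 8;          // assumed chiplet count
      const int kCoresPerDie = 32;  // assumed cores per chiplet

      omp_set_max_active_levels(2); // enable the nested level

      #pragma omp parallel num_threads(kDies)
      {
        const int die = omp_get_thread_num();
        // coarse-grained, per-die work and inter-die exchange here
        #pragma omp parallel num_threads(kCoresPerDie)
        {
          const int core = omp_get_thread_num();
          // fine-grained work that stays within one die's shared L3
          if (core == 0)
            std::printf("die %d: inner team of %d threads\n",
                        die, omp_get_num_threads());
        }
      }
      return 0;
    }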