Comment by GeekyBear
7 hours ago
DeepSeek's hand-written PTX code has previously outperformed standard CUDA code running on Nvidia H800 GPUs.
> DeepSeek made quite a splash in the AI industry by training its Mixture-of-Experts (MoE) language model with 671 billion parameters using a cluster featuring 2,048 Nvidia H800 GPUs in about two months, showing 10X higher efficiency than AI industry leaders like Meta. The breakthrough was achieved by implementing tons of fine-grained optimizations and usage of Nvidia's assembly-like PTX (Parallel Thread Execution) programming instead of Nvidia's CUDA for some functions,
https://www.tomshardware.com/tech-industry/artificial-intell...
Custom code targeting one specific hardware implementation can improve performance quite a bit.
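As a rough illustration of the general technique (not DeepSeek's actual code): CUDA C++ lets you drop down to raw PTX with inline `asm`, pinning the exact instruction the compiler should emit. The kernel below is a minimal sketch that issues a single-precision fused multiply-add (`fma.rn.f32`) directly in PTX; the kernel name and the `+ 1.0f` operand are arbitrary choices for the example.

```cuda
#include <cstdio>

// Sketch only: embedding a raw PTX instruction inside a CUDA kernel.
// fma.rn.f32 computes d = a * b + c with round-to-nearest-even.
__global__ void fma_ptx(const float* a, const float* b, float* out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float r;
        // Inline PTX: r = a[i] * b[i] + 1.0f
        asm("fma.rn.f32 %0, %1, %2, %3;"
            : "=f"(r)
            : "f"(a[i]), "f"(b[i]), "f"(1.0f));
        out[i] = r;
    }
}
```

The trade-off is portability: PTX written and scheduled for one GPU generation (here, Hopper-based H800s) may need rework on the next, which is exactly why most code stays at the CUDA level.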