Comment by t55
3 days ago
They basically ditched CUDA and went straight to writing in PTX, which is like GPU assembly, letting them repurpose some cores for communication to squeeze out extra performance. I believe that with better AI models and tools like Cursor, we will move to a world where you can mold code ever more specifically to your use case to make it more performant.
Are you sure they ditched CUDA? I keep hearing this, but it seems odd because that would be a ton of extra work to entirely ditch it vs selectively employing some ptx in CUDA kernels which is fairly straightforward.
Their paper [1] only mentions using PTX in a few areas to optimize data transfer operations so they don't blow up the L2 cache. This makes intuitive sense to me, since the main limitation of the H800 vs the H100 is reduced NVLink bandwidth, which would necessitate tricks like this that may not be common for others who have access to H100s.
1. https://arxiv.org/abs/2412.19437
I should have been more precise, sorry. I didn't mean to imply they entirely ditched CUDA, just that they circumvented it in a few areas, like you said.
Targeting PTX directly is perfectly regular CUDA, and is used by many toolchains that target the ecosystem.
CUDA is not only C++, as many mistakenly assume.
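Right. Inline PTX inside an ordinary CUDA C++ kernel is straightforward via `asm volatile`. A minimal sketch (kernel and variable names are mine, not from the paper) showing the kind of cache-control a compiler won't always emit for you:

```cuda
// Hypothetical example: load with the .cg qualifier ("cache global"),
// which caches at L2 only and bypasses L1. This is the general flavor
// of fine-grained control people drop to PTX for; it is NOT the
// specific code from the DeepSeek paper.
__global__ void copy_bypass_l1(const float* __restrict__ in,
                               float* __restrict__ out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        float v;
        // Inline PTX: ld.global.cg.f32 loads a 32-bit float from
        // global memory, cached in L2 but not L1.
        asm volatile("ld.global.cg.f32 %0, [%1];"
                     : "=f"(v)
                     : "l"(in + i));
        out[i] = v;
    }
}
```

The rest of the file is still plain CUDA C++ compiled by nvcc; the PTX is just spliced into the generated code, so "a few PTX snippets in CUDA kernels" and "writing PTX" aren't really different toolchains.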
got it, thanks for explaining.
> with better AI models and tools like Cursor, we will move to a world where you can mold code ever more specifically to your use case to make it more performant
what do you think the value of having the right abstraction will be in such a world?
I think that, at least for us dumb humans with limited memory, having good abstractions makes things much easier to understand.
Yes, but I wonder how much of this trait is carried over to the LLMs from us.