Aside: Anyone else running QEMU/KVM on Ryzen seeing abysmal performance for some IO-heavy workloads (GlusterFS in this case)?
Libvirt on Debian. The vCPUs apparently spend a good amount of time in idle (not steal or wait). I'm getting up to ca. 50 MB/s streaming to disk over the network, while unvirtualized the same hardware basically saturates 2.5GbE. Looking at perf, the culprit seems to be a spinlock eating up most of the CPU time. I can't figure this one out, and I've experimented like crazy with CPU pinning, SATA/NVMe/PCIe passthrough variants, over- and underprovisioning, memory ballooning, clock sources, every magic kernel parameter that seemed vaguely related when searching, etc.
I haven't seen anything like this on the Intel boxes running the same hypervisor setup.
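For anyone who wants to compare notes, this is roughly how I've been looking at it on the host (just a sketch, assuming perf with KVM support, root, and that you know the guest's QEMU PID):

    # Rough sketch: summarize KVM exit reasons for one guest to see whether
    # PAUSE/HLT exits dominate, which would point at guest-side spinlock
    # contention. Assumes host-side perf with "perf kvm" support, run as root,
    # with the QEMU PID passed as the first argument.
    import subprocess
    import sys

    qemu_pid = sys.argv[1]  # PID of the qemu-system-x86_64 process for the guest

    # Record roughly 10 seconds of KVM exit events for that guest.
    subprocess.run(
        ["perf", "kvm", "stat", "record", "-p", qemu_pid, "sleep", "10"],
        check=True,
    )

    # Report exits grouped by reason; a flood of "pause" exits means the vCPUs
    # are spinning on locks instead of sleeping.
    subprocess.run(["perf", "kvm", "stat", "report"], check=True)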
It blows my mind how reliably AMD shoots itself in the foot. What we want isn’t that hard:
1) Support your graphics cards on Linux using kernel drivers that you upstream. All of them. Not just a handful - all the ones you sell from, say, 18 months ago until today.
2) Make GPU acceleration actually work out of the box for PyTorch and TensorFlow. Not some special fork or patched version that you “maintain” on your website; the tip of the main branch of both libraries should just compile out of the box and give people GPU-accelerated ML.
This is table stakes, but it blows my mind that they keep putting out press releases and roadmap promises like this without doing thing one to unfuck the basic dev experience so people can actually use their GPUs for real work.
How it actually is:
1) Some cards work with ROCm; some work with one of the other BS libraries they have come up with over the years. Some cards work with amdgpu, but many only work with proprietary kernel drivers, which means that if you don’t use precisely one of the distributions and kernel versions they maintain, you are SOOL.
2) Nothing whatsoever builds out of the box, and when you do get it to build, almost nothing runs GPU-accelerated. For me, PyTorch requires a special downgrade, a Python downgrade, and a switch to a fork that AMD supposedly maintains, although it doesn’t compile for me; and when I managed to beat it into a shape where it compiled, it wouldn’t run GPU-accelerated, even though games use the GPU just fine. I have a GPU that is supposedly current, so they are actively selling it, but can I use it? Can I bollocks. Ollama won’t talk to my GPU even though it supposedly works with ROCm; it only works with ROCm on some graphics cards. TensorFlow was a similar story when I last tried it, although admittedly I didn’t try as hard as with PyTorch.
Just make your shit work so that people can use it. It really shouldn’t be that hard. The dev experience with NVidia is a million times better.
Sucks that you've had so much trouble... My experience with my cheap 6850XT is that it just works OOTB on Arch with ROCm, llama.cpp, ollama, whisper, etc. by setting an env var.
> Support your graphics cards on Linux using kernel drivers that you upstream. All of them. Not just a handful - all the ones you sell from, say, 18 months ago until today.
All the stuff works even if it's not officially supported. It's not that hard to set a single environment variable (HSA_OVERRIDE_GFX_VERSION).
Like literally everything works, from Vega 56/64 to Ryzen 99xx iGPUs.
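For example, a minimal sketch in Python (this assumes a ROCm build of PyTorch, and 10.3.0 is just the value commonly used for RDNA2 cards - adjust for your GPU):

    # Minimal sketch: spoof the GFX target so the ROCm runtime accepts a card
    # that isn't on the official support list. The variable has to be set
    # before the HIP runtime loads, i.e. before importing torch.
    import os
    os.environ["HSA_OVERRIDE_GFX_VERSION"] = "10.3.0"  # gfx1030, the usual choice for RDNA2

    import torch  # ROCm builds of PyTorch expose HIP through the torch.cuda API

    print(torch.cuda.is_available())      # True if the runtime accepted the card
    print(torch.cuda.get_device_name(0))  # e.g. an RX 6000-series name

    # Quick sanity check that kernels actually run on the GPU.
    x = torch.randn(1024, 1024, device="cuda")
    print((x @ x).sum().item())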
Also, try NixOS. Everything literally works with a single config entry after the recent merge of ROCm 6.3. I successfully run a zoo of Radeons from various generations.
IIRC there was only one AMD employee working on integrating Linux-based things. Often the response was that things were stuck with intellectual property, project managers, etc., so even specs were not available.
AMD has two driver teams at this point: one for Linux/open source, one for Catalyst/closed source, and they are not allowed to interact.
Because there are tons of IP and trade secrets involved in driver development and optimization: sometimes game-related, sometimes for patching a rogue application whose developers can't or won't fix it, etc.
GPU drivers ought to be easy, but in reality they are not. The open source drivers are "vanilla" drivers without all this case-dependent patching and optimization. Actually, they work really well out of the box for normal desktop applications. I don't think there are any cards which do (or will) not work with the open source kernel drivers as long as you use a sufficiently recent version.
...and you mention ROCm.
I'm not sure what ROCm's intellectual underpinnings are, but claiming a lack of effort is a bit unfair to AMD. Yes, software was never their strong suit, but they're way better than they were 20 years ago. They have a proper open source driver that works, and a whole fleet of open source ROCm packages that are now rigorously CI/CD-tested by their maintainers.
Do not forget that some of the world's most powerful supercomputers run on Instinct cards, and AMD is getting tons of experience from these big players. If you think the underpinnings of GPGPU libraries are easy, I can only say that the reality is very different. The simple things people do with PyTorch and other very high-level libraries pull enormous tricks under the hood, and you're really pushing the boundaries of the hardware, performance- and capability-wise.
NVIDIA is not selling a tray full of switches and GPUs and requiring OEMs to integrate it as-is for no reason. On the other hand, the same NVIDIA moves very slowly when it comes to enabling an open source ecosystem.
So, yes, AMD is not in an ideal position right now, but calling them incompetent doesn't help either.
P.S.: The company which fought for a completely open source HDMI 2.1 capable display driver is AMD, not NVIDIA.
A laundry list of excuses ... or a list of things to work on. ("Why the hell do we have two driver teams?" would be my #1 thing to fix if I were at AMD.)
I accept that there are two teams for reasons that include IP. However, Nvidia must have the same problem and they appear not to be hamstrung by it. So what is the difference?
Fact of the matter is that I have a Radeon RX 6600, which I can't use with ollama. First, there is no ROCm at all in my distro's repository - it doesn't compile reliably and needs too many resources. Then, when compiling it manually, it turns out that ROCm doesn't even support the card in the first place.
I'm aware that 8 GB of VRAM is not enough for most such workloads. But no support at all? That's ridiculous. Let me use the card and fall back to system memory for all I care.
Nvidia, as much as I hate their usually awfully insufficient Linux support, has no such restrictions for any of their modern cards, as far as I'm aware.
SemiAnalysis had a good article on this recently: basically, the reason AMD still sucks on the ML software side is that their compensation for devs is significantly worse than at competitors like Nvidia, Google, and OpenAI, so most of the most competent devs go elsewhere.
If AMD does deliver on client dGPU virtualization, it would be amazing.
Some old AMD workstation GPUs supported SR-IOV, but that repo was just archived.
https://open-iov.org/index.php/GPU_Support#AMD
https://github.com/GPUOpen-LibrariesAndSDKs/MxGPU-Virtualiza...
As "AI" use cases mature, NPU/AIE-ML virtualization will also be needed.
related: (AMD 2.0 – New Sense of Urgency)
https://news.ycombinator.com/item?id=43780972
That's pretty sick. Nice to see such things trickle down to consumer GPUs.
This article is almost unreadable for me. The ads change in size and make the text jump. I'm adding it to NotebookLM now.
The article is extremely light on details anyway. The most important thing in it is the link to the repo at https://github.com/amd/MxGPU-Virtualization