AMD’s RDNA4 GPU architecture

2 days ago (chipsandcheese.com)

Lower GPU power consumption while idling at the desktop is an interesting technical challenge, but I do wonder “Cui bono?” Obviously I’d want my gaming machine to consume less power, but I’m not sure I’ve ever considered mouse-idle, monitor-on power draw when weighing, e.g., AMD versus Nvidia for my gaming machine.

Don’t get me wrong, this is very interesting, AMD does great engineering, and I’m loath to throw shade at an engineering-focused company, but… is this going to convert into even a single additional sale for AMD?

I’m a relatively (to myself) large AMD shareholder (colloquially: a fanboy), and damn, I’d love to see more focus on hardware matmul acceleration rather than idle monitor power draw.

  • Some people appreciate leaving the PC on for light tasks even at night, and wasting too much power doing nothing is... well, wasteful. Imagine a home server that has a GPU for AI or multimedia work.

    The same architecture will also be used in mobile, so depending on where this comes from (architecturally) it could mean more power savings there, too.

    Besides, lower power also means lower cooling/noise on idle, and shorter cooldown times after a burst of work.

    And since AMD is slowly moving toward the (always just around the corner) unified architecture, any gains here will also mean less idle power draw in other environments, like servers.

    Nothing groundbreaking, sure, but I won't say no to all of that.

    • They have optimized idle power draw for when the display is still on. Which is nice, but configuring the screen to switch off when idle will save far more power.

    • > Imagine a home server that has the GPU for AI or multimedia stuff.

      I imagine you wouldn't attach a display to your home server. Would the display engine draw any power in that case?

  • I definitely value lower power usage when idle. My desktop PC uses ~150W when idle. Sometimes I leave it on overnight simply to download a remote file or run some other extremely light operation.

    It doesn't make sense that it would draw this much power. A laptop can do the same thing with ~10W.

    This sort of improvement might not increase sales with one generation, but it'll make a difference if they keep focusing on this year after year. It also makes their design easier to translate into mobile platforms.

    • > I definitely value lower power usage when idle. My desktop PC uses ~150W when idle.

      These two statements appear to be in conflict. 150W is a high idle power consumption for a modern PC. Unless you have something like an internal RAID or reached for the bottom of the barrel when choosing a power supply, 40W is on the high side for idle consumption, and many systems will actually idle at ~10W.

    • Energy is so cheap it's not really worth the effort (economically).

      And even if it were more expensive, things like cooking or washing clothes would still hurt more than downloading a file on a big PC.

  • Rumors have been floating around about some kind of PS6 portable or next-gen Steam Deck built on RDNA4, where power consumption matters.

    There's also simply laptop battery longevity, which would be nice.

  • I am not saying that this was the reason I bought it, but I recently purchased a Radeon 9070 and I was surprised how little power this card uses at idle. I was seeing figures between 4W and 10W on Windows (sadly slightly more on Linux).

    In general this generation of Radeon GPUs seems highly efficient. The Radeon 9070 is a beast of a GPU.

  • Might be helpful to get some perspective on this. Most cards idle in the 5-10 watt range, but there have been outliers, like the Intel A770 before the drivers were mostly fixed, which ran at 45 watts.[1] I believe there have been even more stupid incidents with cards combined with early drivers where, if you just happened to have a multi-monitor setup running at two different resolutions, it would go up to 70+ watts. Obviously, these are outlier situations, probably caused by some optimisations being disabled while the drivers were developed and never re-enabled before release. 20 watts and below should be easily achievable with pretty much any card, but it's easy to forget that this is still work that has to be done, and shouldn't be left to chance and happenstance.

    [1] https://thinkcomputers.org/here-is-the-solution-for-high-idl...

  • If the tech was developed, then might as well deploy it to both laptops (likely its original intent) and desktops.

  • To hazard a guess, would that optimization also help push the envelope when one application needs all the power it can get while another monitor is just sitting idle?

    Another angle I'm wondering about is the longevity of the card. Not sure AMD would particularly care in the first place, but as a user, if the card didn't have to grind as hard while mostly idle and thus lasted a year or two longer, that would be pretty valuable.

    • Recent Nvidia generations also roughly doubled their idle power consumption. Those increases are probably genuine baseline increases (i.e. they eat into the compute power budget), while prior RDNA generations would idle at around 80-100 W while doing video playback or driving more than one monitor, which is more indicative of problematic power management.

  • Power-efficient chips will result in more overall performance for the same amount of total power drawn. It's all about performance per watt.

  • Even if you save only 0.01 kW per machine, multiplying that by tens of millions of computers works out to something like 100 MW. Even small improvements have macro-level implications (rough arithmetic below).
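
    A minimal back-of-the-envelope sketch of that multiplication, in Python; the per-machine saving and the fleet size are assumptions for illustration, not measured figures:

      # Hypothetical fleet-wide savings from a small per-machine idle improvement.
      savings_per_machine_w = 10        # 0.01 kW saved per idle desktop (assumed)
      machines = 10_000_000             # hypothetical number of affected desktops

      fleet_savings_mw = savings_per_machine_w * machines / 1e6
      annual_energy_gwh = fleet_savings_mw * 8760 / 1000   # if they idled 24/7

      print(f"~{fleet_savings_mw:.0f} MW continuous, ~{annual_energy_gwh:.0f} GWh per year")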

  • Another avenue of AMD GPU R&D is the _userland_ _hardware_ [ring] buffers for near-direct userland programming of the hardware.

    They have started to experiment with this in Mesa and Linux ("user queues", as in "user hardware queues").

    I don't know how they will work around the scarce VM IDs, but here we are talking about a near-zero driver. Obviously, they will have to simplify/clean up a lot of the 3D pipeline programming and be very sure of its robustness, basically to have it ready for "default" rendering/usage right away.

    Userland will get from the kernel something along these lines: command/event hardware ring buffers, data DMA buffers, a memory page with the read/write pointers & doorbells for those ring buffers, and an event file descriptor for an event ring buffer. Basically, what the kernel currently has.

    I wonder if it will provide significant simplification over the current model, which is handing indirect command buffers to the kernel and dealing with 'sync objects'/barriers. (A toy model of the submission flow is sketched below.)
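
    A minimal conceptual sketch of that userland submission flow, assuming a mapped command ring plus a userland-owned write pointer and a doorbell; the names and layout are made up for illustration and are not the actual amdgpu "user queue" ABI:

      # Toy model of userland command submission via a mapped ring buffer.
      # Layout and names are hypothetical, not the real amdgpu user-queue interface.

      class UserQueue:
          def __init__(self, entries: int):
              self.ring = [None] * entries   # stands in for the mapped command ring
              self.wptr = 0                  # write pointer, owned by userland
              self.rptr = 0                  # read pointer, advanced by hardware
              self.entries = entries

          def space(self) -> int:
              return self.entries - (self.wptr - self.rptr)

          def submit(self, packet) -> None:
              while self.space() == 0:
                  self.poll_rptr()           # wait for hardware to consume entries
              self.ring[self.wptr % self.entries] = packet
              self.wptr += 1
              self.ring_doorbell()           # an MMIO write on real hardware

          def ring_doorbell(self) -> None:
              print(f"doorbell: wptr={self.wptr}")   # placeholder for the doorbell page write

          def poll_rptr(self) -> None:
              self.rptr = self.wptr          # pretend hardware drained the ring

      q = UserQueue(entries=256)
      q.submit({"opcode": "DISPATCH", "grid": (64, 1, 1)})

    The point of the model is that, after queue setup, nothing in this loop has to cross into the kernel.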

    • The Nvidia driver also has userland submission (in fact it does not support kernel-mode submission at all). I don't think it leads to a significant simplification of the userland code; basically the driver has to keep track of the same things it would have submitted through an ioctl. If anything, there are some subtleties that require careful consideration.

      The major upside is removing the context switch on submission. The idea is that an application only talks to the kernel for queue setup/teardown; everything else happens in userland.

  • The architecture is shared between desktop and mobile. This sounds 100% like something they did to give some dual-display laptop or handheld three hours of extra battery life by fixing something dumb.

  • In terms of heat output the difference between an idling gaming PC from 10 years ago (~30-40 W) and one today (100+ W) is very noticeable in a room. Besides, even gaming PCs are likely idle or nearly idle a significant amount of time, and that's just power wasted. There are also commercial users of desktop GPUs, and there they are idle an even bigger percentage of the time.

    • I think the power efficiency of AMD graphics has improved a lot in the past 10 years. Compare the RX 580 and the Radeon 890M: they are 7 years apart, with almost the same performance and a 12x difference in power usage (the new one is so low it can be put into a mini PC and used as an iGPU). That would have been unimaginable 7 years ago.

    • Idling "gaming PCs" draw about 30-40 W.

      Your monitor configuration has always controlled the idle power of a GPU (for about the past 15 years), and you need to be aware of what is "too much" for your GPU.

      On RDNA4 and the 50 series, anything more than the equivalent of a single 4K 120 Hz display kicks the card out of super-idle, and it then sits at around ~75 W.

What I'm more curious about: does RDNA4 have native FP8 support?

  • I refer to the RDNA4 instruction set manual [1], page 90, Table 41 (WMMA Instructions).

    They support FP8/BF8 with F32 accumulate and also IU4 with I32 accumulate. The max matrix size is 16x16. For comparison, NVIDIA Blackwell GB200 supports matrices up to 256x32 for FP8 and 256x96 for NVFP4.

    This matters for overall throughput: feeding a bigger matrix unit is actually cheaper in terms of memory bandwidth, because the number of FLOPs grows as O(n^2) when you scale up a systolic array while the number of inputs/outputs grows only as O(n). (Rough numbers are sketched after the references.)

    [1] https://www.amd.com/content/dam/amd/en/documents/radeon-tech...

    [2] https://semianalysis.com/2025/06/23/nvidia-tensor-core-evolu...
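
    A rough way to see that scaling, treating each matrix instruction as an m x n x k tile and counting FLOPs against operand elements; the tile shapes and the k depth of the larger tile are assumptions for illustration, and operand reuse beyond a single instruction is ignored:

      # Rough arithmetic-intensity comparison for different matmul tile shapes.
      # FP8 inputs assumed (1 byte each); accumulator traffic is ignored.

      def tile_stats(m, n, k, bytes_per_input=1):
          flops = 2 * m * n * k                            # one multiply + one add per MAC
          input_bytes = (m * k + k * n) * bytes_per_input  # A and B operands, read once
          return flops, input_bytes, flops / input_bytes

      tiles = {
          "16x16x16 (RDNA4-style WMMA)": (16, 16, 16),
          "256x32x32 (larger tile, k assumed)": (256, 32, 32),
      }
      for name, (m, n, k) in tiles.items():
          flops, in_bytes, intensity = tile_stats(m, n, k)
          print(f"{name}: {flops} FLOPs from {in_bytes} input bytes "
                f"({intensity:.0f} FLOPs/byte)")

    The larger tile does several times more math per byte of operands fetched, which is the bandwidth argument above.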

    • It's misleading to compare a desktop GPU against a data center GPU on these metrics. Blackwell data center tensor cores are different from Blackwell consumer tensor cores, and the same goes for the AMD side.

      Also, the native/atomic matrix fragment size isn't relevant for memory bandwidth, because you can always build larger matrices out of multiple fragments in the register file. A single matrix fragment is read from memory once and used in multiple matmul instructions, which has the same effect on memory bandwidth as using a single larger matmul instruction. (Small sketch of that reuse below.)
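
      A small numpy sketch of that reuse argument: a 64x64 tile is built from 16x16 fragments, every fragment is loaded once (the dicts stand in for the register file) and then reused across several fragment-level matmuls, so external traffic matches what one big 64x64 instruction would need; the sizes and structure are illustrative only:

        import numpy as np

        FRAG = 16                    # fragment edge, matching the 16x16 WMMA shape above
        TILE = 64                    # larger logical tile built out of fragments
        NB = TILE // FRAG

        A = np.random.rand(TILE, TILE).astype(np.float32)
        B = np.random.rand(TILE, TILE).astype(np.float32)

        # "Load" every fragment of A and B exactly once into a register-file stand-in.
        a_frags = {(i, k): A[i*FRAG:(i+1)*FRAG, k*FRAG:(k+1)*FRAG]
                   for i in range(NB) for k in range(NB)}
        b_frags = {(k, j): B[k*FRAG:(k+1)*FRAG, j*FRAG:(j+1)*FRAG]
                   for k in range(NB) for j in range(NB)}
        loads = len(a_frags) + len(b_frags)

        # Each loaded fragment then participates in NB fragment-level matmuls.
        C = np.zeros((TILE, TILE), dtype=np.float32)
        matmuls = 0
        for i in range(NB):
            for j in range(NB):
                for k in range(NB):
                    C[i*FRAG:(i+1)*FRAG, j*FRAG:(j+1)*FRAG] += a_frags[i, k] @ b_frags[k, j]
                    matmuls += 1

        assert np.allclose(C, A @ B, atol=1e-3)
        print(f"fragment loads: {loads}, fragment matmuls: {matmuls}")
        # 32 loads feed 64 small matmuls -- same operand traffic as one 64x64x64 op.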