The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Simply, this means each pixel can hold its state longer between refreshes. So, the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while dissipating heat fighting the reactance and resistance of various conductors. Dropping the entire chain to 1Hz is a meaningful power savings.
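To put rough numbers on it, here's a back-of-envelope sketch (the panel dimensions and bit depth are illustrative assumptions, not from the article) of how much raw pixel data the chain has to move per second:

```python
def link_bits_per_second(width, height, bits_per_pixel, refresh_hz):
    """Raw payload the frame-buffer-to-panel chain must move each second."""
    return width * height * bits_per_pixel * refresh_hz

# Assumed example: a 4K panel with 30-bit color.
at_60hz = link_bits_per_second(3840, 2160, 30, 60)  # ~14.9 Gbit/s
at_1hz  = link_bits_per_second(3840, 2160, 30, 1)   # ~249 Mbit/s

print(f"60 Hz: {at_60hz / 1e9:.1f} Gbit/s")
print(f" 1 Hz: {at_1hz / 1e6:.0f} Mbit/s")
```

That's a 60x reduction in bits toggled per second across the whole chain, before counting protocol overhead, which is why slowing the scan-out matters even when the pixels don't change.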
[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
So it's a Sharp MIP scaled up? https://sharpdevices.com/memory-lcd/
Sharp MIP makes every pixel an SRAM bit: near-zero current and no refresh necessary. The full color moral equivalent of Sharp MIP would be 3 DACs per pixel. TFT (à la LG Oxide) is closer to DRAM, except the charge level isn't just high/low.
So, no, there is a meaningful difference in the nature of the circuits.
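A toy model (purely illustrative, with made-up leakage numbers, not real device physics) of the DRAM-like side of that comparison: the pixel's analog charge leaks away exponentially, and the leakage rate sets the longest safe refresh interval.

```python
import math

def max_hold_seconds(leak_rate_per_s, tolerance=0.01):
    """Toy model: charge decays as V(t) = V0 * exp(-leak * t).
    Returns how long the cell holds its value before drifting past
    `tolerance` (fractional error), i.e. the longest safe refresh interval."""
    return -math.log(1.0 - tolerance) / leak_rate_per_s

# Hypothetical leakage rates, for illustration only:
leaky_tft    = max_hold_seconds(leak_rate_per_s=1.0)    # ~10 ms hold -> needs fast refresh
low_leak_tft = max_hold_seconds(leak_rate_per_s=0.005)  # ~2 s hold  -> 1 Hz is safe
```

An SRAM-like cell (Sharp MIP) has no decay term at all: it actively regenerates its own state, so the hold time is effectively infinite and no refresh is needed.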
Thanks. Great explanation.
Copying: Draw() is called 60 times a second.
It isn't for any reasonable UI stack. For instance, the xdamage X11 extension for this was released over 20 years ago. I doubt it was the first.
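The core idea behind damage tracking can be sketched in a few lines (this is the general technique, not the actual Xdamage API): accumulate dirty rectangles as widgets change, then repaint only their bounding union instead of the whole screen.

```python
def union_rect(a, b):
    """Bounding box of two (x0, y0, x1, y1) rectangles."""
    return (min(a[0], b[0]), min(a[1], b[1]),
            max(a[2], b[2]), max(a[3], b[3]))

class DamageTracker:
    def __init__(self):
        self.dirty = None  # None means nothing needs repainting

    def mark(self, rect):
        """Record that a region of the screen changed."""
        self.dirty = rect if self.dirty is None else union_rect(self.dirty, rect)

    def flush(self):
        """Return the region to repaint this frame, then reset."""
        region, self.dirty = self.dirty, None
        return region

t = DamageTracker()
t.mark((10, 10, 20, 20))  # e.g. cursor blinked
t.mark((15, 5, 30, 12))   # e.g. clock digit changed
print(t.flush())          # (10, 5, 30, 20): one small repaint, not the full screen
```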
Xdamage isn’t a thing if you’re using a compositor, for what it’s worth. It’s more expensive to try to incrementally render than to just render the entire scene (for a GPU, anyway).
And regardless, the HW path still involves copying the entire frame buffer - it’s literally in the name.
At the software level yes, but it seems nobody has taken the time to do this at the hardware level as well. This is LG's stab at it.
It was, but xdamage is part of the compositing side of the final bitmap image generation, before that final bitmap is clocked out to the display.
The frame buffer, or at least the portion of the GPU responsible for reading the frame buffer and shipping its contents out over the port, the cable to the display, and the display panel itself, were still reading, transmitting, and refreshing every pixel at 60Hz (or more).
This LG display tech claims to be able to turn that last portion down to a 1Hz rate from whatever it usually runs at.
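A hedged sketch of the idea (the function names and rates are invented for illustration, not LG's actual control logic): when consecutive frames are identical, the scan-out chain can idle at the low rate the low-leakage backplane now permits.

```python
ACTIVE_HZ = 60  # normal scan-out rate (assumed)
IDLE_HZ = 1     # rate the low-leakage backplane can tolerate on static content

def choose_refresh_hz(prev_frame_hash, cur_frame_hash):
    """If the frame buffer contents haven't changed, drop the whole
    read/transmit/refresh chain to the idle rate; otherwise run at full speed."""
    if cur_frame_hash == prev_frame_hash:
        return IDLE_HZ
    return ACTIVE_HZ
```

The point is that this decision happens below the software compositor: even a perfectly damage-tracked desktop still burned full-rate scan-out power until the panel hardware could do this.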
What’s your mental model of what happens when a dirty region is updated and we now need to get that buffer onto the display?
You forget that all modern UI toolkits brag about who has the highest frame rate, instead of updating only what's changed and only when it changes.