Comment by qnleigh
1 day ago
> That will help save enormous amounts of power: up to 48 percent on a single charge,
Why does refresh rate have such a large impact on power consumption? I understand that the control electronics are 60x more active at 60 Hz than 1 Hz, but shouldn't the light emission itself be the dominant source of power consumption by far?
I used to be a display architect about 15 years back (for Qualcomm mirasol, et al), so my knowledge of the specifics / numbers is outdated. Sharing what I know.
High pixel density displays have disproportionately higher display refresh power (not just proportional to the total number of pixels, as the column-line capacitances need to be driven again for writing each row of pixels). This was an important concern as high pixel densities were coming along.
Displays need fast refreshing not just because pixels would lose charge, but because a refresh can be visible or result in flicker. Some pixel technologies require flipping polarity on each refresh, but the curves are not exactly symmetric between polarities, and further, this can vary across the panel. A fast enough refresh hides the mismatch.
Since you are knowledgeable about this, do you have any idea what happened to Mirasol technology? I was fascinated by those colour e-paper-like displays, and disappointed when plans to manufacture it were shelved. Then I learnt Apple purchased it, but it looks more like a patent-padding purchase than one for tech development, as nothing has come out of it from Apple either. Is it in some way still being developed, or are parts of its research being used in display development?
Being a key technology architect for it (not the core inventor), I know all about it, and then some more!
I cannot however talk publicly about it. :-(
It has been a disappointment for me as well. I had worked on it for nearly eight years. The idea was so interesting--using thin-film interference for creating images is akin to shaping Newton's rings into arbitrary images, something which even Newton would not have imagined! The demos and comparisons we had shown to various industry leaders, and sometimes publicly, were often instantly compelling. The people/engineers in the team were mostly the best I have ever worked with, and I still maintain a great connection with them. But unfortunately, there were problems (not saying how much tech, how much people) that were recognized by some but never got (timely) addressed. And a tech like it does not exist to date.
I do not think anything on it is being developed further.
The earliest of the patents would have expired by now.
Liquavista, Pixtronix, etc., have been alternative display technologies that also ultimately didn't make the desired impact, AFAIK.
Meanwhile, LCDs developed high pixel densities (which put pressure on mirasol tech too), and Plasma got sidelined. E Ink displays have since made good progress, though, in my opinion, they are still far from the colors and speeds that mirasol had. And of course, OLED, quantum dots, ...
What's interesting about these newer 1Hz claims is that they're basically trying to sidestep the exact problems you mention.
Correct.
I myself have been privy to similar R&D going on for more than a decade.
> the column lines capacitances need to be driven again for writing each row of pixels
Not my field so please forgive a possibly obvious question: That seems true regardless of the pixel count (?), so for that process why wouldn't power also be proportional to the pixel count?
I notice I'm saying 'pixel count' and you are saying 'pixel density'; does it have something to do with their proximity to each other?
Total column line capacitance is impacted by the number of pixels hanging onto it as each transistor (going to the pixel capacitance) adds some parasitic capacitance of its own. Hope that answers your question. You are right in the sense that a part of the total column capacitance would depend on just the length and width of it, irrespective of the number of pixels hanging onto it.
I had back then developed what was perhaps the most sophisticated system-level model for display power, including refresh, illumination, etc., and it included all those terms for capacitance, a simplified transistor model, pixel model, etc.
I did not carefully distinguish pixel density vs. pixel count while writing my previous comments here, just to keep it simple. You can perhaps imagine that increasing display size without changing pixel count can lead to higher active pixel area percentage, which in turn would lead to better light generation/transmission/reflection efficiency. There are multiple initially counter-intuitive couplings like that. So it ultimately comes down to mathematical modeling, and the scaling laws / derivatives depend on the actual numbers chosen.
Addition:
Another important point -- Column line capacitances do not necessarily need full refresh going from one row of the pixels to the next, as the image would typically have vertical correlations. Not mentioning this is another simplification I made in my previous comments. My detailed power model included this as well -- so it could calculate energy spent for writing a specific image, a random image, a statistically typical image, etc.
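The superlinear scaling described above can be sketched with a toy dynamic-power model. This is only an illustration: the capacitance and voltage values below are invented placeholders, and the row-to-row correlation savings just mentioned are deliberately ignored.

```python
# Toy model of refresh (switching) power using the CMOS dynamic-power
# relation E = C * V^2 per driven column line. All component values are
# illustrative guesses, not measured panel numbers.

def refresh_power_watts(rows, cols, refresh_hz,
                        c_trace_f=5e-15,      # assumed trace capacitance per pixel crossed
                        c_parasitic_f=2e-15,  # assumed parasitic cap per pixel transistor
                        v_swing=5.0):         # assumed column drive swing (V)
    # Each column line's total capacitance grows with the number of pixels on it.
    c_column = rows * (c_trace_f + c_parasitic_f)
    # Writing each row (re)drives every column line once per frame.
    writes_per_second = rows * refresh_hz
    return cols * writes_per_second * c_column * v_swing ** 2

low = refresh_power_watts(rows=768, cols=1024, refresh_hz=60)
high = refresh_power_watts(rows=1536, cols=2048, refresh_hz=60)
print(f"ratio: {high / low:.1f}x")  # 8.0x for 4x the pixels: superlinear
```

Doubling both dimensions quadruples the pixel count but octuples the modeled refresh power, because the column capacitance itself grows with the row count -- the "disproportionately higher" effect described above.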
There are definitely a few reasons, but one of them is that you ask the GPU to do ~60x less work when you render 60x fewer frames.
PSR (panel self-refresh) lets you send a single frame from software and tell the display to keep using that.
You don’t need to render the same frame 60 times in software just to keep it visible on screen.
How often is that used? Is there a way to check?
With the amount of bullshit animations all OSes come with these days, enabled by default, and most applications being webapps with their own secondary layer of animations, and with the typical developer's near-zero familiarity with how floating-point numbers behave, I imagine there's nearly always some animation somewhere, almost but not quite eased to a stop, that's making subtle color changes across some chunk of the screen - not enough to notice, enough to change some pixel values several times per second.
I wonder what existing mitigations are at play to prevent redisplay churn? It probably wouldn't matter on Windows today, but will matter with those low-refresh-rate screens.
Why? Surely copying the same pixels out sixty times doesn't take that much power?
The PCWorld story is trash and completely omits the key point of the new display technology, which is right in the name: "Oxide." LG has a new low-leakage thin-film transistor[1] for the display backplane.
Simply, this means each pixel can hold its state longer between refreshes. So, the panel can safely drop its refresh rate to 1Hz on static content without losing the image.
Yes, even "copying the same pixels" costs substantial power. There are millions of pixels with many bits each. The frame buffer has to be clocked, data latched onto buses, SERDES'ed over high-speed links to the panel drivers, and used to drive the pixels, all while making heat fighting reactance and resistance of various conductors. Dropping the entire chain to 1Hz is meaningful power savings.
[1] https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
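As a rough illustration of the transmission term alone (the pJ/bit figure below is an assumed ballpark for a short high-speed link, not a number for any specific interface):

```python
# Back-of-envelope energy for just moving pixel data to the panel.
# 5 pJ/bit is an assumed ballpark; real links vary widely.

WIDTH, HEIGHT, BITS_PER_PIXEL = 3840, 2160, 24
PJ_PER_BIT = 5.0

def link_power_mw(refresh_hz):
    bits_per_second = WIDTH * HEIGHT * BITS_PER_PIXEL * refresh_hz
    return bits_per_second * PJ_PER_BIT * 1e-12 * 1e3  # watts -> milliwatts

print(f"60 Hz: {link_power_mw(60):.0f} mW, 1 Hz: {link_power_mw(1):.1f} mW")
```

Even under these toy assumptions, dropping from 60 Hz to 1 Hz takes the link term from tens of milliwatts to about one -- and the same scaling applies to the clocking, latching, and driving stages upstream of it.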
Copying: Draw() is called 60 times a second.
I think the idea is that in an always-on display mode, most of the screen is black and the rest is dim, so circuitry power budget becomes a much larger fraction of overhead.
Ohh like property tax on a vacant building
Really disappointing to only learn this after a decade, but on Linux, changing from 60 Hz to 40 Hz decreased my power draw by 40% in the hour since reading this comment.
I interpreted that bit as E2E system uptime being up by 48%. Sounds more plausible to me, as there'd be fewer video frames that would need to be produced and pushed out.
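If the figure does mean 48% longer runtime, the implied average power reduction is smaller than 48%, since runtime scales inversely with power. A quick check (no assumptions beyond that reading):

```python
# 48% longer runtime on the same battery implies power dropped by
# 1 - 1/1.48, not by 48%.
longer_runtime = 1.48
power_reduction = 1 - 1 / longer_runtime
print(f"implied average power reduction: {power_reduction:.1%}")  # 32.4%
```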
Your GPU rendering 1 frame vs your GPU rendering 60 frames.
This is an OLED display, so I don't think the control electronics are actually any less active. (They would be for LCD, which is where most of these low-refresh-rate optimizations make sense.)
The connection between the GPU and the display has been run-length encoded (or better) since forever, since that reduces the amount of energy used to send the next frame to the display controller. Maybe by "1Hz" they mean they also only send diffs between frames? That'd be a bigger win than "1Hz" for most use cases.
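A toy sketch of why run-length-style coding helps on typical screen content (real display links use far more sophisticated compression; this only illustrates the idea):

```python
def rle(line):
    """Run-length encode a scanline into (value, count) pairs."""
    runs = []
    i = 0
    while i < len(line):
        j = i
        while j < len(line) and line[j] == line[i]:
            j += 1
        runs.append((line[i], j - i))
        i = j
    return runs

# A mostly-uniform scanline (e.g. a dark bar with one bright glyph)
# collapses to a handful of runs:
scanline = [0] * 500 + [255] * 20 + [0] * 500
print(len(scanline), "pixels ->", len(rle(scanline)), "runs")  # 1020 pixels -> 3 runs
```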
But, to answer your question, the light emission and computation of the frames (which can be skipped for idle screen regions, regardless of frame rate) should dwarf the transmission cost of sending the frame from the GPU to the panel.
The more I think about this, the less sense it makes. (The next step in my analysis would involve computing the wattage requirements of the CPU, GPU, and light emission, then comparing that to the Wh of the laptop battery + advertised battery life.)
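That sanity check can be roughed out; the battery capacity and advertised runtime below are illustrative guesses, not figures from the article:

```python
# If a ~60 Wh battery is advertised to last ~20 hours, the whole-system
# average draw is only a few watts, so display-chain savings measured in
# tens of percent are at least arithmetically plausible.
battery_wh = 60.0       # assumed typical laptop battery
claimed_hours = 20.0    # assumed advertised battery life
avg_power_w = battery_wh / claimed_hours
print(f"average whole-system draw: {avg_power_w:.1f} W")  # 3.0 W
```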
Not OLED.
> LG Display is also preparing to begin mass production of a 1Hz OLED panel incorporating the same technology in 2027.
> This is an OLED display
The LG press release states that it's LCD/TFT.
https://news.lgdisplay.com/en/2026/03/lg-display-becomes-wor...
> The more I think about this, the less sense it makes
And yet, it’s the fundamental technology enabling always on phone and smartwatch displays
The intent of this is to reduce the time that the CPU, GPU, and display controller are in an active state (as well as small reductions in power of components in between those stages).
for small screen sizes and low information density displays, like a watch that updates every second this makes a lot of sense
it would make a lot of sense in situations where the average light generating energy is substantially smaller:
pretend you are a single pixel on a screen (laptop, TV) which emits photons into a large cone of steradians, of which a viewer's pupil makes up a tiny pencil ray; 99.99% of the light just misses the observer's pupils. In this case this technology seems to offer few benefits, since the energy consumed by the link (generating a clock and transmitting data over wires) is dwarfed by the energy consumed in generating all this light (which mostly misses human pupils)!
Now consider smart glasses / HUDs; the display designer knows the approximate position of the viewer's eyes. The optical train can be designed so that a significantly larger fraction of the generated photons arrives on the retina. Indeed, XReal's (formerly NReal's) line of smart glasses consumes about 0.5 W! In such a scenario, the link's energy consumption becomes a sizable proportion of the total; hence a low-energy state that still presents content but updates less frequently makes sense.
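A quick solid-angle check of the "mostly misses the pupils" claim above, with assumed geometry (4 mm pupils at 50 cm viewing distance, hemispherical emission):

```python
import math

# Fraction of a pixel's hemisphere of emission that lands on two ~4 mm
# pupils at laptop viewing distance. All geometry is assumed, and the
# small-angle approximation is used for the pupil's solid angle.
pupil_radius_m = 2e-3
view_dist_m = 0.5

pupil_solid_angle = math.pi * pupil_radius_m ** 2 / view_dist_m ** 2
hemisphere_sr = 2 * math.pi
fraction_captured = 2 * pupil_solid_angle / hemisphere_sr  # two eyes

print(f"fraction reaching the eyes: {fraction_captured:.1e}")  # 1.6e-05
```

That is, well over 99.99% of the emitted light misses the viewer under these assumptions, consistent with the figure above; glasses-mounted optics can reclaim much of that loss.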
One would have expected smart glasses to already outcompete smartphones and laptops just on prolonged battery life; or, splitting the difference, one could keep half of the energy saved (extending battery life) while allocating the other half to more intensive computation (GPU, CPU, etc.).
Before OLED (and similar), most displays were lit with LEDs (behind or around the screen, through a diffuser, then through liquid crystals), and that backlight was indeed the dominant power draw... like 90% or so!
But the article is about an OLED display, so the pixels themselves are emitting light.
> But the article is about an OLED display
The article is about an LCD display, actually.
I just wish "we" wouldn't have discarded the option to use pure black for dark modes in favor of a seemingly ever-brightening blue-grey...
It doesn't. Such figures come from extreme use cases, such as watching video at maximum brightness until the battery depletes, where 90% of power consumption is the display. In realistic use cases, the fraction of power drawn by the display is much smaller, since the CPU is actually doing things.
For whatever reason I keep catching my macbook on max brightness. Maybe not an unrealistic test.