Show HN: Terminal dashboard that throttles my PC during peak electricity rates
2 days ago (naveen.ing)
WattWise is a CLI tool that monitors my workstation’s power draw using a smart plug and automatically throttles the CPU & GPUs during expensive Time-of-Use electricity periods. Built with Python, uses PID controllers for smooth transitions between power states. Works with TP-Link Kasa plugs and Home Assistant.
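For anyone curious what PID-based throttling might look like in practice, here is a minimal sketch (my own reconstruction, not the author's actual code — read_watts(), the power budget, and the gains are placeholders, and it's really just a PI loop for brevity). It compares the smart plug's reading against a target and nudges the CPU frequency cap via the cpufreq sysfs interface:

    # Hedged sketch of PID-style power capping; NOT the author's implementation.
    import glob, time

    TARGET_WATTS = 800                    # made-up power budget
    KP, KI = 400.0, 40.0                  # illustrative gains (kHz per watt)
    MIN_KHZ, MAX_KHZ = 1_500_000, 3_700_000

    def read_watts():
        # Placeholder: in reality this would poll the Kasa plug / Home Assistant.
        raise NotImplementedError

    def set_max_freq_khz(khz):
        for path in glob.glob("/sys/devices/system/cpu/cpu*/cpufreq/scaling_max_freq"):
            with open(path, "w") as f:
                f.write(str(int(khz)))

    integral = 0.0
    while True:
        error = TARGET_WATTS - read_watts()                 # negative when over budget
        integral = max(-5000, min(5000, integral + error))  # anti-windup clamp
        cap = MAX_KHZ + KP * error + KI * integral          # PI controller output
        set_max_freq_khz(max(MIN_KHZ, min(MAX_KHZ, cap)))
        time.sleep(5)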
Quick update: Definitely wasn't expecting this to end up on the front page. I was more focused on publishing the dashboard than the power optimizer service I'm running. I'll take all the feedback into account and will open source an improved version of it soon. Appreciate all the comments!
That's quite a beefy workstation you got there!
Had a quick look through the code, but I can't find where he actually throttles the PC. Can anyone point me to it?
https://github.com/naveenkul/WattWise
Yeah I don’t see anything that even suggests it throttles the PC.
Looks like it’s just a display.
You mean like the title of the submission suggesting that it throttles the PC?
Sorry, I only open sourced the dashboard part, as mentioned at the bottom of the blog post. I'm still working on improving the 'Power optimizer' service, so I'll open source that soon as well.
If it were up to me, I would switch complete performance profiles through something like tuned-adm rather than trying to change just CPU frequencies. There are too many interlinked things that can affect throughput efficiency.
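For illustration, a minimal sketch of what time-based profile switching via tuned-adm could look like (the hour window is a made-up example; powersave and throughput-performance are stock tuned profiles):

    # Sketch: switch tuned profiles by time of day (hours are examples).
    import subprocess
    from datetime import datetime

    PEAK_HOURS = range(11, 19)   # e.g. an 11am-7pm ToU window

    profile = "powersave" if datetime.now().hour in PEAK_HOURS else "throughput-performance"
    subprocess.run(["tuned-adm", "profile", profile], check=True)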
If your computer is still doing bursty jobs during that period, it will draw less power but use just as much energy. Sure, you can reduce the power, but if you aren't also reducing what you ask it to do, it'll just draw that maximum allowed power for a longer period of time.
All modern CPUs boost to high clock speeds and voltages to get work done quicker, but at considerably higher power draw per operation. On that side of the equation, boosting clearly uses more energy. The problem is that the entire CPU package stays powered on longer if you don't boost, and that costs energy too, so it's a trade-off between the two. The conventional wisdom is that there isn't much difference between the two approaches, but having seen the insanity of Intel's 13th and 14th gen parts consuming 250W when 120W gets about 95% of the performance, I think it's very likely that dropping to power save and avoiding that level of boosting saves a modest amount of energy.
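To put rough numbers on that trade-off, a back-of-envelope calculation using the 250W/120W figures above (the 40W idle draw and the 2-hour window are my own assumptions):

    # Back-of-envelope with the 250W/120W figures above; idle draw and
    # time window are made-up assumptions.
    P_BOOST, P_CAPPED, P_IDLE = 250, 120, 40   # watts
    PERF_CAPPED = 0.95                         # capped run is ~95% as fast

    t_boost = 1.0                              # hours for the boosted run
    t_capped = t_boost / PERF_CAPPED           # capped run takes slightly longer

    WINDOW = 2.0                               # machine stays on, idling after work
    e_boost = P_BOOST * t_boost + P_IDLE * (WINDOW - t_boost)
    e_capped = P_CAPPED * t_capped + P_IDLE * (WINDOW - t_capped)
    print(e_boost, e_capped)                   # ~290 Wh vs ~164 Wh in this example

Under these particular numbers the capped run wins comfortably; with a smaller boost-vs-capped power gap, race-to-idle can come out ahead instead.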
This is some pretty old analysis, but I remember when smartphones came out and people were thinking about throttling their applications to lower power consumption; the general advice was to just "race to idle".
The consensus was that spending more time in low-power states (where you use ~0W) was much more efficient than spending a longer amount of time in the CPU's sweet spot with all sorts of peripherals online that you didn't need anyway.
I remember when Google made a big deal out of "bundling" idle CPU and network requests, since bursting them out was more efficient than having the radio and CPU trotting along at low bandwidth.
It is well known in the PC hardware enthusiast community that the last few digits of percent of performance come at enormous increases in power consumption as voltages are raised to prevent errors as clock speeds go up.
Manufacturers chase benchmark results from YouTubers and magazines. Even a few percent difference in framerate means the difference between everyone telling each other to buy a particular motherboard, processor, or graphics card over another.
Amusingly, you often get better performance by undervolting and lowering the processor's power limits. This keeps temperatures low, so you don't end up with the PC equivalent of the "Toyota Supra horsepower chart" meme.
1400W for a desktop PC is... crazy. That's a Threadripper processor plus a bleeding-edge, top-of-the-line GPU, assuming that's not just them reading off the max power draw on the nameplate of the PSU.
If their PC is actually using that much power, they could save far more money, CO2, etc by undervolting both the CPU and GPU.
1400 is definitely the sticker on the side of the PSU. There is some theory behind keeping your PSU at 30-50% load for optimal efficiency, but considering the cost of these 1kW+ units, you're probably better off right-sizing it.
I myself massively overspec the PSUs for my builds, as I want to keep them in the optimal efficiency range rather than pushing their limits. For a typical 800W budget, I usually go with a tier-1 1200W offering.
I'm actually using a 1600W PSU; 1400W is my target max draw. This is a dual EPYC system (64 cores per CPU), btw. The max draw from the CPUs+MB+drives running at a peak 3700MHz, without the GPUs, is 495W! Adding 4x underclocked 4090s quickly gets you to 1400W+.
Technology Connections just did a timely video on the very topic of power vs energy.
https://youtu.be/OOK5xkFijPc?si=Uya3fI5oy_JFfSqI
As with everything, it depends. If you are going to do the same jobs regardless of the amount of time it takes, then yeah, dropping the max power probably just spreads the energy use over time. That doesn't usually help you save money, unless you have a very interesting residential plan.
OTOH, if it's something like realtime game rendering without a frame limiter, throttling would reduce the frame rate, reducing the total amount of work done, and most likely the total energy expended.
Pretty neat! I'm currently working on a project that uses an ESP32-C6 that just exposes a "switch" over Matter/Thread based on the results from the Spanish electricity prices API. The idea is to have the switch on during the cheapest hours of the day and off otherwise; other automations can then be based on it. This was pretty trivial to do in Home Assistant, but I want something ultra low power that can be completely independent of anything, for less technical users. My end goal is a small battery-powered device that wakes from deep sleep once a day to check the day-ahead prices via WiFi. The C6 might be overkill for this, but once I have a proof of concept working I'll try to pick something that's truly ultra low power. Something that needs charging once or twice a year would be ideal.
The ideal form factor might be a smart plug itself, but I can’t find any with hackable firmware and also matter/thread/wifi.
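For what it's worth, a MicroPython-flavored sketch of the "cheapest hours" logic described above (the price URL and JSON shape are placeholders, not the real Spanish prices API, and the pin number is arbitrary):

    # MicroPython-flavored sketch; URL, response shape, and pin are placeholders.
    import machine, time
    import urequests

    SWITCH = machine.Pin(2, machine.Pin.OUT)
    N_CHEAP = 6                       # expose "on" during the 6 cheapest hours

    prices = urequests.get("https://example.com/day-ahead-prices").json()  # 24 floats
    cheap_hours = sorted(range(24), key=lambda h: prices[h])[:N_CHEAP]

    while True:
        hour = time.localtime()[3]    # current hour of day
        SWITCH.value(1 if hour in cheap_hours else 0)
        time.sleep(60)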
That's actually pretty cool. ESPs are awesome little things.
Nice project, but wouldn't it be more rational to have your system run underclocked/undervolted at the optimal perf/watt at all times, with an optional boost to max performance for time-critical tasks? Running it away from the optimum might save on instantaneous consumption but increase your aggregate consumption.
Bring back the "turbo" button on the front of the PC.
Thanks! That's an excellent point. You're right that there's likely a sweet spot that would be more efficient overall than aggressive throttling.
The current implementation uniformly sets max frequency for all 128 cores, but I'm working on per-core frequency control that would allow much more granular optimization. I'll definitely measure aggregate consumption with your suggestion versus my current implementation to see the difference.
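As a rough illustration of what per-core caps look like through the cpufreq sysfs interface (the core split and frequencies below are arbitrary examples, not the author's plan):

    # Sketch: cap some cores harder than others (split/frequencies are examples).
    def set_core_max_khz(core, khz):
        path = f"/sys/devices/system/cpu/cpu{core}/cpufreq/scaling_max_freq"
        with open(path, "w") as f:
            f.write(str(khz))

    for core in range(0, 16):         # leave a few cores fast for interactive work
        set_core_max_khz(core, 3_700_000)
    for core in range(16, 128):       # throttle the rest during peak pricing
        set_core_max_khz(core, 1_500_000)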
Zooming out, 80-90% of a computer's lifecycle energy use is during manufacturing, not pulled from the wall during operation.[1] To optimize lifetime energy efficiency, it probably pushes toward extending hardware longevity (within reason, until breakeven) and maximizing compute utilization.
Ideally these goals are balanced (in some 'efficient' way) against responding to electricity prices. It's not either/or; you want to do both.
Besides better amortizing the embodied energy, improving compute utilization could also mean increasing the quality of the compute workloads, i.e. doing tasks with high external benefits.
Love this project! Thanks for sharing.
[1] https://forums.anandtech.com/threads/embodied-energy-in-comp...
Please go learn about modern Ryzen power and performance management, namely Precision Boost Overdrive and Curve Optimizer - and how to undervolt an AM4/AM5 processor.
The stuff the chip and motherboard do, completely built-in, is light-years ahead of what you're doing. Your power-saving techniques (capping max frequency) are more than a decade out of date.
You'll get better performance and power savings to boot.
Another suggestion: when you want to save power, use IRQ affinity via /proc/irq/$irq/smp_affinity_list to put them all on one core.
That core will get to sleep less than the others, but the rest can then sleep more deeply.
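Something like this, presumably (needs root, and some IRQs refuse to be moved):

    # Sketch: pin all reroutable IRQs to core 0 so other cores can sleep deeply.
    import glob

    for path in glob.glob("/proc/irq/*/smp_affinity_list"):
        try:
            with open(path, "w") as f:
                f.write("0")
        except OSError:
            pass                      # per-CPU and reserved IRQs can't be rebound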
You can also use the CPU "geometry" (which cores share cache) to raise the max frequency on neighboring cores first, before recruiting the other cores.
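On Linux that geometry is readable from sysfs; a small sketch (index3 is typically the shared L3, though that can vary by platform):

    # Sketch: discover which cores share an L3 so caps can be raised group-by-group.
    import glob

    l3_groups = set()
    for path in glob.glob("/sys/devices/system/cpu/cpu[0-9]*/cache/index3/shared_cpu_list"):
        with open(path) as f:
            l3_groups.add(f.read().strip())   # e.g. "0-7" for one CCD
    print(sorted(l3_groups))                  # unthrottle one group at a time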
It's well established that completing the same task more slowly at a lower clock rate is actually less energy-efficient.
Right, "race to idle"
How does that play out with modern overclocked-by-default CPUs? If you cut power use by 50%, do you still get 80% of the performance?
It's usually more energy-efficient to finish a task quickly with a higher power draw, also known as race-to-idle.
Good point. I'm often running multiple parallel jobs with varying priorities, where uniform throttling actually makes sense. Many LLM inference tasks are long-running but don't fully utilize the hardware (often waiting on I/O or running at partial capacity).
The dual Epyc CPUs (128 cores) in my setup have a relatively high idle power draw compared to consumer chips. Even when "idle" they're consuming significant power maintaining all those cores and I/O capabilities. By implementing uniform throttling when utilization is low, the automation actually reduces the baseline power consumption by a decent amount without much performance hit.
It seems relatively accessible to take a few representative tasks and actually measure the soup-to-nuts energy consumed at the plug. That would be very interesting to see in tandem with the power optimizations!
Within the next year or two, I'm going to look at implementing something similar at my work.
We don't pay for electricity directly (it's included in the rackspace rental), but we could reduce our carbon footprint by adjusting the timing of batch processing, perhaps based on the carbon intensity APIs from https://app.electricitymaps.com/
Though, the first step will be to quantify the savings. I have the impression from being in the datacentre when batch jobs start that they cause a significant increase in power use, but I have no numbers.
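A sketch of what the gating logic might look like; the Electricity Maps endpoint, zone, and threshold below are assumptions from memory, so check their API docs before relying on any of it:

    # Sketch: gate batch work on grid carbon intensity (endpoint/threshold assumed).
    import requests

    resp = requests.get(
        "https://api.electricitymap.org/v3/carbon-intensity/latest",
        params={"zone": "GB"},                     # example zone
        headers={"auth-token": "YOUR_API_TOKEN"},  # hypothetical token
    )
    g_per_kwh = resp.json()["carbonIntensity"]     # gCO2eq/kWh

    if g_per_kwh < 150:                            # arbitrary "clean enough" threshold
        print("grid is clean enough -- kick off batch jobs here")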
You're probably already on top of it, but if your company doesn't operate the datacenter, you'll also want to estimate the carbon cost of cooling in addition to the electricity the machines consume.
Can you run the batch processing on other machines at off-peak hours?
People have made valid criticisms about the basic effectiveness of your strategy. But in any case, this is a pretty awesome hacker project - nicely done! Love the appearance of your CLI tool. I am definitely bookmarking for future inspo
Thanks! I initially just wanted to build a dashboard, with the power optimization part being a later addition. Based on the HN response, it seems that's the feature that resonated most with people. I'll be making improvements to the optimization component in the coming days and will publish what I have.
Wonder if a big UPS/power bank would be better? Charge it during periods when power is cheaper, and draw from it when power is more expensive. Then again, if you don't need full performance all the time, this is a cool solution.
Definitely, I've been contemplating getting a 5-10kWh LFP battery backup with <10ms UPS switchover to run the workstation and home backup. This is an intermediate solution until then.
Why all this instead of a simple cronjob switching from performance to powersave profiles depending on the current time (= electricity price)?
A cronjob would definitely work in most cases if the goal is just to auto-change frequency profiles during set ToU periods. I just wanted a more flexible system that can change profiles based on actual utilization, so demanding tasks aren't slowed down.
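Not the author's actual service, but the utilization check might look something like this (the ToU window, load threshold, and governor choice are all illustrative):

    # Sketch: only drop to powersave in peak hours when the box is mostly idle.
    import os, subprocess
    from datetime import datetime

    PEAK_HOURS = range(11, 19)                    # example ToU window
    load_per_core = os.getloadavg()[0] / os.cpu_count()

    if datetime.now().hour in PEAK_HOURS and load_per_core < 0.5:
        governor = "powersave"
    else:
        governor = "performance"
    subprocess.run(["cpupower", "frequency-set", "-g", governor], check=True)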
I'm on a time-of-use rate plan, most expensive from 11am-7pm. However, they also have "Critical Peak Events" that increase the rate about 10x, to over $1/kWh, and last up to 4 hours. Just saying it would be a bit more complex than just checking the time.
So how do you get that status now, i.e. the if (is_critical_peak_event) check? Do the smart plugs gather some smart-grid-style data?
From what I've seen, price per token makes home generation uncompetitive in most countries. And that's just on electricity, never mind the cost of the gear.
It only really makes sense for learning or super-confidential info.
Could you share how much you have saved in $?
The power optimizer daemon has only been running for a few days, so it's hard to put a $ value on it, but based on my peak pricing I'd estimate the savings at a few dollars so far.
This looks cool but I feel it should notify the user with a snip from the song "You Suffer" by "Napalm Death" when throttling occurs.