Why is my CPU usage always 100%?

9 days ago (downtowndougbrown.com)

An old manager of mine once spent the day trying to kill a process that was running at 99% on a Windows box.

When I finally got round to seeing what he was doing, I was disappointed to find he was attempting to kill the 'system idle' process.

  • Years ago I worked for a company that provided managed hosting services. That included some level of alarm watching for customers.

    We used to rotate the "person of contact" (POC) each shift, and they were responsible for reaching out to customers, and doing initial ticket triage.

    One customer kept having a CPU usage alarm go off on their Windows instances not long after midnight. The overnight POC reached out to the customer to let them know that they had investigated and noticed that "system idle processes" were taking up 99% of CPU time and the customer should probably investigate, and then closed the ticket.

    I saw the ticket within a minute or two of it reopening as the customer responded with a barely diplomatic message to the tune of "WTF". I picked up that ticket, and within 2 minutes had figured out the high CPU alarm was being caused by the backup service we provided, apologised to the customer and had that ticket closed... but not before someone not in the team saw the ticket and started sharing it around.

    I would love to say that particular support staff never lived that incident down, but sadly that particular incident was par for the course with them, and the team spent an inordinate amount of time doing damage control with customers.

    • In the 90s I worked for a retail chain where the CIO proposed to spend millions to upgrade the point-of-sale hardware. The old hardware was only a year old, but the CPU was pegged at 100% on every device and scanning barcodes was very sluggish.

      He justified the capex by saying if cashiers could scan products faster, customers would spend less time in line and sales would go up.

      A little digging showed that the CIO wrote the point-of-sale software himself in an ancient version of Visual Basic.

      I didn't know VB, but it didn't take long to find the loops that did nothing except count to large numbers to soak up CPU cycles, since VB didn't have a sleep() function.

      7 replies →

  • That's what managers do.

    Silly idle process.

    If you've got time for leanin', you've got time for cleanin'

  • I abandoned Windows 8 for Linux because of a bug (?) where my HDD was showing it was 99% busy all the time. I had removed every startup program that could be removed and analysed thoroughly for any viruses, to no avail. I had no debugging skills at the time and wasn't sure the hardware could handle Windows 10. That's how Linux got me.

    • Recent Linux distributions are quickly catching up to Windows and macOS. Do a fresh install of your favorite distribution and then use 'ps' to look at what's running. Dozens of processes doing who knows what? They're probably not pegging your CPU at 100%, which is good, but it seems that gone are the days when you could turn on your computer and it was truly idle until you commanded it to actually do something. That's a special use case now, I suppose.

      9 replies →

    • Why is this such a huge issue if it merely shows it's busy, but the performance of it indicates that it actually isn't? Switching to Linux can be a good choice for a lot of people, the reason just seems a bit odd here. Maybe it was simply the straw that broke the camel's back.

      2 replies →

    • I had this happen with an nvme drive. Tried changing just about every setting that affected the slot.

      Everything worked fine on my Linux install ootb

    • Windows 8/8.1/10 had an issue for a while where, when run on a spinning-rust HDD, it would peg the disk and slow the system to a crawl.

      The only solution was to swap over to an SSD.

  • To be fair, it is a really poorly named "process". The computer equivalent of the "everything's ok" alarm.

    • Long enough ago (Win95 era), Windows didn't put the CPU to sleep when there was no work to be done; it always kept the CPU assigned to some task. The System Idle Process was a way to do this that played nicely with all of the other process-management systems. I don't remember when they finally added CPU power management. SP3? Win98? Win98SE? Eh, it was somewhere in there.

      2 replies →

  • Reminds me of when I was a kid and noticed a virus had taken over the registry. From that point forward I attempted to delete every single registry file, not quite understanding what I was doing. Between that and excessive bad-website viewing, I dunno how I ever managed not to brick my operating system, unlike my grandma, who seemed to brick her desktop in a timely fashion before each of the many monthly visits to her place.

  • I worked at a government site with a government machine at one time. I had an issue, so I took it to the IT desk. They were able to get that sorted, but then said I had another issue. "Your CPU is running at 100% all the time, because some sort of unkillable process is consuming all your cpu".

    Yep, that was "System Idle" that was doing it. They had the best people.

  • I wonder, if you made a process with "idle" in its name, whether you could end up with the reverse problem, where users ignore it. Is there anything preventing an executable from being named "System Idle"?

  • You're keeping us in suspense. Did he ever manage to kill the System Idle process?

  • Windows used to have a habit of leaving processes CPU-starved while claiming the CPU was idle all the time.

    Since the Microsoft response to the bug was denying and gaslighting the affected people, we can't tell for sure what caused it. But several people were in a situation where their computer couldn't finish any work, and the task-manager claimed all of the CPU time was spent on that line item.

    • As a former Windows OS engineer, based on the short statement here, my assumption would be that your programs are IO-bound, not CPU-bound, and that the next step would be to gather data (using a profiler) to investigate the bottlenecks. This is something any Win32 developer should learn how to do.

      Although I can understand how "Please provide data to demonstrate that this is an OS scheduling issue since app bottlenecks are much more likely in our experience" could come across as "denying and gaslighting" to less experienced engineers and layfolk.

      1 reply →

    • > Since the Microsoft response to the bug was denying and gaslighting the affected people

      Well. I wouldn't go that far. Any busy dev team is incentivized to make you run the gauntlet:

      1. It's not an issue (you have to prove to me it's an issue)

      2. It's not my issue (you have to prove to me it's my issue)

      3. It's not that important (you have to prove it has significant business value to fix it)

      4. It's not that time sensitive (you have to prove it's worth fixing soon)

      It was exactly like this at my last few companies. Microsoft is quite a lot like this as well.

      If you have an assigned CSAM, they can help run the gauntlet. That's what they are there for.

      See also: The 6 stages of developer realization:

      https://www.amazon.com/Panvola-Debugging-Computer-Programmer...

      11 replies →

    • I've never heard of this. How do you know it's windows "gaslighting" users, and not something dumb like thermal throttling or page faults?

      2 replies →

Reading the register 4 times doesn't feel like a necessarily portable solution if there will be more chip versions at different speeds and with different I/O architectures; nor is it clear how this will behave under more load, or whether the original change was made to fix some other performance problem OP isn't aware of. But I'm not sure what else can be done. Unfortunately, vendors like Marvell can seriously under-document crucial features like this. If anything, it would be good to put some of this info in the code comment itself; not very elegant, but how else are we practically meant to keep track of it? Is the mailing list part of the documentation?

Doesn't look like there's a lot of discussion on the mailing list, but I don't know if I'm reading the thread view correctly.

  • This is a workaround for a hardware bug of a certain CPU.

    Therefore it cannot really be portable, because other timers in other devices will have different memory maps and different commands for reading.

    The fault is with the designers of these timers, who have failed to provide a reliable way to read their value.

    It is hard to believe that this still happens in this century: reading a correct value even while the timer is being incremented or decremented continuously is an essential goal in the design of any readable timer, and how to do it has been well known for more than three quarters of a century.

    The only way to make such a workaround somewhat portable is to parametrize it, e.g. with the number of retries for direct reading or with the delay time when reading the auxiliary register. This may be portable between different revisions of the same buggy timer, but the buggy timers in other unrelated CPU designs will need different workarounds anyway.

    • > This is a workaround for a hardware bug of a certain CPU.

      What about different variants, revisions, and speeds of this CPU?

  • The related part of the doc has one more note: "This request requires up to three timer clock cycles. If the selected timer is working at slow clock, the request could take longer." From the way the doc is formatted, it's not fully clear what "this request" refers to. It might explain where the 3-5 attempts come from, and suggest they weren't pulled completely out of thin air. But the part about taking "up to" so many clock cycles, yet sometimes more, makes it impossible to have a "proper" solution without guesswork or further clarification from the vendor.

    "working at slow clock" part, might explain why some other implementations had different code path for 32.768 KHz clocks. According to docs there are two available clock sources "Fast clock" and "32768 Hz" which could mean that "slow clock" refers to specific hardware functionality is not just a vague phrase.

    As for portability concerns, this is already low-level, hardware-specific register access. If Marvell releases a new SoC, not only is there no assurance it will require the same timing, it might as well have a different set of registers requiring a completely different read and setup procedure, not just different timing.

    One thing that slightly confuses me: the old implementation had 100 cycles of cpu_relax(), which is unrelated to the specific timer clock, but then neither is reading the TMR_CVWR register. Since 3-5 reads of that worked better than 100 cycles of cpu_relax, a read clearly takes more time, unless the cpu_relax part got completely optimized out. At least I didn't find any references mentioning that the timer clock affects the read time of TMR_CVWR.

    • It sounds like this is an old CPU(?), so no need to worry about the future here.

      > I didn't find any references mentioning that timer clock affects read time of TMR_CVWR.

      Reading the register might be related to the timer's internal clock, as it would have to wait for the timer's bus to respond. This is essentially implied if Marvell recommend re-reading this register, or if their reference implementation did so. My main complaint is it's all guesswork, because Marvell's docs aren't that good.

      1 reply →

  • I also wondered about this, but there's a crucial difference (no idea if it matters): in that loop it reads the register, so the register is read at least 4 times.

In the late 1990s I worked at a company that had a couple of mainframes in its fleet, and once I looked at a resource-usage screen (Omegamon, perhaps? Is it that old?) and noticed the CPU was pegged at 100%. I asked the operator if that was normal. His answer was "Of course. We paid for that CPU, might as well use it". Funny though that mainframes are designed for that - most, if not all, non-application work is offloaded to other processors in the system so that the CPU can run applications as fast as it can.

  • Having a number of running processes take the CPU usage to 100% is one thing; having an under-utilised CPU with almost no processes running report that usage is at 100% is another thing, and that's the subject of the article here.

    • I didn't intend this as an example of the issue the article mentions (a misreporting of usage because of a hardware design issue). It was just a fun example of how different hardware behaves differently.

      One can also say Omegamon (or whatever tool) was misreporting, because it didn't account for the processor time of the various supporting systems that dealt with peripheral operations. After all, they also paid for the disk controllers, disks, tape drives, terminal controllers and so on, so they could want to drive those to close to 100% as well.

      1 reply →

    • Some mainframes have the ability to lock clock speed and always run at exactly 100%, so you can often have hard guarantees about program latency and performance.

This is a wonderful write-up and a very enjoyable read. Although my knowledge about systems programming on ARM is limited, I know that it isn't easy to read hardware-based time counters; at the very least, it's not as simple as the x86 rdtsc [1]. This is probably why the author writes:

> This code is more complicated than what I expected to see. I was thinking it would just be a simple register read. Instead, it has to write a 1 to the register, and then delay for a while, and then read back the same register. There was also a very noticeable FIXME in the comment for the function, which definitely raised a red flag in my mind.

Regardless, this was a very nice read and I'm glad they got down to the issue and the problem fixed.

[1]: https://www.felixcloutier.com/x86/rdtsc.
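
For comparison, the x86 side really is a one-liner with the compiler intrinsic; a minimal sketch (ignoring serialization and ordering concerns, which real measurement code has to think about):

   #include <stdint.h>
   #include <x86intrin.h>   /* __rdtsc() on GCC/Clang */

   /* Read the time-stamp counter: no register dance, no delay loop. */
   static inline uint64_t read_tsc(void)
   {
       return __rdtsc();
   }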

  • Bear in mind that the blog post is about a 32 bit SoC that's over a decade old, and the timer it is reading is specific to that CPU implementation. In the intervening time both timers and performance counters have been architecturally standardised, so on a modern CPU there is a register roughly equivalent to the one x86 rdtsc uses and which you can just read; and kernels can use the generic timer code for timers and don't need to have board specific functions to do it.

    But yeah, nice writeup of the kinds of problem you can run into in embedded systems programming.
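
    On AArch64, for example, the architecturally standardised counter can be read directly; a rough sketch (conversion to seconds via CNTFRQ_EL0 and any ordering barriers are deliberately omitted):

       /* Read the generic timer's virtual counter (CNTVCT_EL0) on AArch64. */
       static inline unsigned long long read_cntvct(void)
       {
           unsigned long long val;

           asm volatile("mrs %0, cntvct_el0" : "=r"(val));
           return val;
       }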

Curiously, the "read the reg twice, until the same result is obtained" approach is ignored in favour of "set capture reg, wait for clock edge, read". This is strange, as the former is usually much faster - reading a 3.25 MHz counter at 200 MHz+ twice is very likely to see the same value twice. For a 32 kHz counter, it is basically guaranteed.

   u32 val;
   do {
       val = readl(...);
   } while (val != readl(...));

   return val;

compiles to a nice 6-instr little function on arm/thumb too, with no delays

   readclock:
     LDR  R2, =...
   1:
     LDR  R0, [R2]
     LDR  R1, [R2]
     CMP  R0, R1
     BNE  1b
     BX   LR

My recurring issue (on a variety of laptops, both Linux and Windows): the fans will start going full-blast, everything slows down, then as soon as I open a task manager CPU usage drops from 100% to something negligible.

  • You, my friend, most likely have mining malware on your systems. They’ll shut down when they detect a task manager is open, so you don’t notice them.

    • That was my thought too; one way to get another data point is to just run the task manager as soon as you boot and let it stay there. If the fan behavior NEVER comes back while doing that, another point in the "mining malware" favor (though of course, not definitive).

      Though he did say a VARIETY of laptops, both Windows and Linux. Can someone be _that_ unlucky?

      3 replies →

Aside from the technical beauty of this post, what is the practical impact of this?

Fan speeds should ideally be driven by temperature sensors, and CPU idling is working, albeit with interrupt waits, as pointed out here. The only impact seems to be surprise that the CPU appears to be working harder than it really is when looking at this number.

It's far better to look at the system load (which was 0.0 - already a strong hint this system is working below capacity). It has a formal definition (the average depth of the queue of tasks waiting for CPU, averaged over 1, 5, and 15 minutes) and succinctly captures the concept of "this machine is under load".

Many years ago, a coworker deployed a bad auditd config. CPU usage was below 10%, but system load was 20x the number of cores. We moved all our alerts to system load and used that instead.
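
For what it's worth, if you want to alert on load from code rather than parsing top/uptime output, glibc exposes it directly via getloadavg(); a minimal sketch (the one-runnable-task-per-core threshold here is just an illustrative choice, not a rule):

   #include <stdio.h>
   #include <stdlib.h>
   #include <unistd.h>

   int main(void)
   {
       double load[3];                              /* 1-, 5- and 15-minute load averages */
       long cores = sysconf(_SC_NPROCESSORS_ONLN);  /* number of online CPUs */

       if (getloadavg(load, 3) != 3)
           return 1;

       printf("load: %.2f %.2f %.2f on %ld cores\n", load[0], load[1], load[2], cores);
       if (load[0] > (double)cores)                 /* illustrative alert threshold */
           printf("more runnable tasks than cores\n");
       return 0;
   }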

I don't get the fix.

Why reading it multiple times will fix the issue?

Is it just because reading takes time, so reading multiple times ensures that enough time passes between the write and the final read? If so, it sounds like a worse solution than just extending the waiting delay, like the author did initially.

If not, then I would like to know the reason.

(Needless to say, a great article!)

  • The article says that the buggy timer has 2 different methods for reading.

    When reading directly, the value may be completely wrong, because the timer is incremented continuously and the updating of its bits is not synchronous with the reading signal. Therefore any bit in the value that is read may be wrong, because it has been read exactly during a transition between valid values.

    The workaround in this case is to read multiple times and accept as good a value that is approximately the same across multiple reads. The more significant bits of the timer value change much less frequently than the least significant bits, so on most read attempts only a few bits can be wrong. Only seldom is the read value complete garbage, in which case comparing it with the other read values will reject it.

    The second reading method was to use a separate capture register. After giving a timer capture command, reading an unchanging value from the capture register should have caused no problems. Except that in this buggy timer, it is unpredictable when the capture is actually completed. This requires the insertion of an empirically determined delay time before reading the capture register, hopefully allowing enough time for the capture to be complete.
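
    A minimal sketch of that second (capture) method, roughly as the article describes it and not the actual driver code; the TMR_CVWR register name comes from the article, and the count of 4 dummy reads is the value the author eventually settled on:

       /* Capture-register read: write 1 to request a capture of the running
        * counter, burn time with dummy reads (the documented delay is vague),
        * then trust the last value read back. Illustrative only. */
       static u32 read_timer_via_capture(void __iomem *tmr_cvwr)
       {
           u32 val = 0;
           int i;

           writel(1, tmr_cvwr);          /* request capture */
           for (i = 0; i < 4; i++)       /* empirically chosen number of reads */
               val = readl(tmr_cvwr);    /* last read should hold the captured value */

           return val;
       }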

    • > The workaround in this case is to read multiple times and accept as good a value that is approximately the same for multiple reads.

      It's only incrementing at 3.25MHz, right? Shouldn't you be able to get exactly the same value for multiple reads? That seems both simpler and faster than using this very slow capture register, but maybe I'm missing something.

      1 reply →

  • Author here. Thanks! I believe the register reads are just extending the delay, although the new approach does have a side effect of reading from the hardware multiple times. I don't think the multiple reads really matter though.

    I went with the multiple reads because that's what Marvell's own kernel fork does. My reasoning was that people have been using their fork, not only on the PXA168, but on the newer PXAxxxx series, so it would be best to retain Marvell's approach. I could have just increased the delay loop, but I didn't have any way of knowing if the delay I chose would be correct on newer PXAxxx models as well, like the chip used in the OLPC. Really wish they had more/better documentation!

  • It's possible that actually reading the register takes (significantly) more time than an empty countdown loop. A somewhat extreme example of that would be on x86, where accessing legacy I/O ports for e.g. the timer goes through a much lower-clocked emulated ISA bus.

    However, a more likely explanation is the use of "volatile" (which only appears in the working version of the code). Without it, the compiler might even have completely removed the loop?

    • > However, a more likely explanation is the use of "volatile" (which only appears in the working version of the code). Without it, the compiler might even have completely removed the loop?

      No, because the loop calls cpu_relax(), which is a compiler barrier. It cannot be optimized away.

      And yes, reading via the memory bus is much, much slower than a barrier. It's absolutely likely that reading 4 times from main memory on such an old embedded system takes several hundred cycles.

      5 replies →

  • Karliss above found docs which mention:

    > This request requires up to three timer clock cycles. If the selected timer is working at slow clock, the request could take longer.

    Let's ignore the weirdly ambiguous second sentence and say, for pedagogical purposes, that it takes up to three timer clock cycles, full stop. Timer clock cycles aren't CPU clock cycles, so we can't just do `nop; nop; nop;`. How do we wait three timer clock cycles? Well, a timer register read is handled by the timer peripheral, which runs at the timer clock, so reading (or writing) a timer register will take until at least the end of the next timer clock cycle.

    This is a very common pattern when dealing with memory mapped peripheral registers.

    ---

    I'm making some reasonable assumptions about how the clock peripheral works. I haven't actually dug into the Marvell documentation.
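
    A tiny illustration of that pattern, under the same assumptions (the register pointer is just a placeholder):

       /* Wait roughly n cycles of a peripheral's clock by issuing n dummy reads
        * of one of its registers: each read transaction can't complete until the
        * peripheral's slower clock domain has responded. Illustrative only. */
       static inline void wait_peripheral_clocks(void __iomem *reg, int n)
       {
           while (n--)
               (void)readl(reg);
       }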

  • > Is it just because reading takes time, therefore reading multiple time makes the needed time from writing to reading passes?

    Yes.

    > If so, it sounds like a worse solution than just extending waiting delay longer like the author did initially.

    Yeah, it's a judgement call. Previously, the code called cpu_relax() for waiting, which is also dependent on how this is defined (can be simply NOP or barrier(), for instance). The reading of the timer register maybe has the advantage that it is dependent on the actual memory bus speed, but I wouldn't know for sure. Hardware at that level is just messy, and especially niche platforms have their fair share of bugs where you need to do ugly workarounds like these.

    What I'm rather wondering is why they didn't try the other solution that was mentioned by the manufacturer: reading the timer directly two times and comparing the results, until you get a stable output.

This was very well written; I somehow read every single line and didn't skip to the end. Great work too!

TIL there are still Chumbys alive in the wild. My Insignia Chumby 8 didn't last.

This was a well written article! It was nice to read the process of troubleshooting with the rabbit holes included. Glad you stuck it out!

I noticed that one time. Looked at the process list, and what was running was a program that enabled streaming. But since I wasn't streaming anything, I wondered what it was doing reading the disk drive.

So I uninstalled it.

I'm not having any programs that are not good citizens.

Great read! Eerily similar to some bugs I've had, but the root cause has been a compiler bug. Debugging a kernel that doesn't boot is... interesting. QEMU+GDB to the rescue.

That’s an awful lot of effort to deal with an issue that was basically just cosmetic. I suspect at some point the author was just nerd sniped though.

  • To be fair, other non-cosmetic stuff uses the CPU percentage. This same bug was preventing fast user suspend on the OLPC until they worked around it. It was also a fun challenge.

> Chumby’s kernel did a total of 5 reads of the CVWR register. The other two kernels did a total of 3 reads.

> I opted to use 4 as a middle ground

reminded me of xkcd: Standards

https://xkcd.com/927/

[flagged]

  • Just twice so far. Not even from the same account. Why so hostile?

    • Even HN boosts submissions admins deem good enough that didn't get sufficient exposure the last time they were published. And, since this one got quite a bit of engagement, it demonstrates that multiple submissions are OK. Otherwise a good article would have been lost.

  • I, for one, welcome this repost of a very interesting technical article that I had missed, and arguably did not get the audience it deserved 8 months ago.

Isn't this one of those problems that switching to linux is supposed to fix?

  • He’s on linux

    • Exactly, that's the joke. If it had been an issue on Windows the default response from folks here would be to switch to Linux instead of trying to get to the root of the issue. Guess I should have included an /s on my comment.

      1 reply →

Oops, this is not valid.

  • This feels like the often-repeated "argument" that Electron applications are fine because "unused memory is wasted memory". What Linus meant by that is that the operating system should strive to use as much of the free RAM as possible for things like file and dentry caches. Not that memory should be wasted on millions of layers of abstraction and too-high resolution images. But it's often misunderstood that way.

    • It's so annoying when that line is used to defend applications with poor memory usage, ignoring the fact that all modern OSes already put unallocated memory to use for caching.

      "Task Manager doesn't report memory usage correctly" is another B.S. excuse heard on Windows. It's actually true, but the other way around -- Task Manager underreports the memory usage of most programs.

    • Eeeh, the Electron issue is overblown.

      These days the biggest hog of memory is the browser. Not everyone does this, but a lot of people, myself included, have tens of tabs open at a time (with tab groups and all of that)... all day. The browser is the primary reason I recommend a minimum of 16 GB of RAM to F&F when they ask "the IT guy" what computer to buy.

      When my Chrome is happily munching on many gigabytes of ram I don't think a few hundred megs taken by your average Electron app is gonna move the needle.

      The situation is a bit different on mobile, but Electron is not a mobile framework so that's not relevant.

      PS: Can I rant a bit about how useless the new(ish) Chrome memory saver thing is? What is the point of having tabs open if you're gonna remove them from memory and just reload them on activation? In the age of fast consumer SSDs I'd expect you to intelligently hibernate the tabs to disk, otherwise what you have are silly bookmarks.

      7 replies →

  • Only when your computer actually has work to do. Otherwise your CPU is just a really expensive heater.

    Modern computers are designed to idle at 0% then temporarily boost up when you have work to do. Then once the task is done, they can drop back to idle and cool down again.

    • Not that I disagree, but when exactly in modern operating systems are there moments where there are zero instructions being executed? Surely there are always processes doing background things?

      7 replies →

  • What you are probably thinking of is "race to idle". A CPU should process everything it can, as quickly as it can (using all the power), and then go to an idle state, instead of processing everything slowly (potentially consuming less energy at that time) but taking more time.

  • You're probably thinking about memory and caching. There are no advantages to keeping the CPU at 100% when no workload needs to be done.

  • I'm sure a few more software updates will take care of this little problem...

  • > computer architecture courses.

    I guess it was some _theoretical_ task-scheduling stuff... When you are doing task scheduling, yes, maybe; it depends on what you optimize for.

    .... but this bug has nothing to do with that. This bug is about an accounting error.