Comment by begueradj

6 days ago

Oops, this is not valid.

This feels like the often-repeated "argument" that Electron applications are fine because "unused memory is wasted memory". What Linus meant by that is that the operating system should strive to use as much of the free RAM as possible for things like file and dentry caches, not that memory should be wasted on millions of layers of abstraction and needlessly high-resolution images. But the line is often misunderstood that way.

  • It's so annoying when that line is used to defend applications with poor memory usage, ignoring the fact that all modern OSes already put unallocated memory to use for caching.

    "Task Manager doesn't report memory usage correctly" is another B.S. excuse heard on Windows. It's actually true, but the other way around -- Task Manager underreports the memory usage of most programs.

  • Eeeh, the Electron issue is overblown.

    These days the biggest hog of memory is the browser. Not everyone does this, but a lot of people, myself included, have tens of tabs open at a time (with tab groups and all of that)... all day. The browser is the primary reason I recommend a minimum of 16 GB of RAM to F&F when they ask "the IT guy" what computer to buy.

    When my Chrome is happily munching on many gigabytes of RAM, I don't think a few hundred megs taken by your average Electron app is gonna move the needle.

    The situation is a bit different on mobile, but Electron is not a mobile framework so that's not relevant.

    PS: Can I rant a bit about how useless the new(ish) Chrome memory saver thing is? What is the point of having tabs open if you're gonna remove them from memory and just reload them on activation? In the age of fast consumer SSDs I'd expect you to intelligently hibernate the tabs to disk; otherwise what you have are silly bookmarks.

    • > Eeeh, the Electron issue is overblown.

      > These days the biggest hog of memory is the browser.

      That’s the problem: Electron is another browser instance.

      > I don't think a few hundred megs taken by your average Electron app is gonna move the needle.

      Low-end machines, even in 2025, still come with single-digit gigabytes of RAM. A few hundred MB is a substantial portion of an 8 GB RAM bank.

      Especially when it’s just waste.


    • Your argument against Electron being a memory hog is that Chrome is a bigger one? You are aware that Electron is an instance of Chromium, right?


    • > otherwise what you have are silly bookmarks.

      My literal several hundred tabs are, in practice, exactly that: silly bookmarks.

Only when your computer actually has work to do. Otherwise your CPU is just a really expensive heater.

Modern computers are designed to idle at 0% and then boost up temporarily when there is work to do. Once the task is done, they drop back to idle and cool down again.

  • Not that I disagree, but when exactly, in a modern operating system, are there moments where zero instructions are being executed? Surely there are always processes doing background things?

    • > Timer Coalescing attempts to enforce some order on all this chaos. While on battery power, Mavericks will routinely scan all upcoming timers that apps have set and then apply a gentle nudge to line up any timers that will fire close to each other in time. This "coalescing" behavior means that the disk and CPU can awaken, perform timer-related tasks for multiple apps at once, and then return to sleep or idle for a longer period of time before the next round of timers fire.[0]

      > Specify a tolerance for the accuracy of when your timers fire. The system will use this flexibility to shift the execution of timers by small amounts of time—within their tolerances—so that multiple timers can be executed at the same time. Using this approach dramatically increases the amount of time that the processor spends idling…[1]

      [0] https://arstechnica.com/gadgets/2013/06/how-os-x-mavericks-w...

      [1] https://developer.apple.com/library/archive/documentation/Pe...
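
      For the curious: the tolerance described in [1] surfaces in Apple's C-level libdispatch API as the "leeway" argument of dispatch_source_set_timer. A minimal sketch of how an app opts into coalescing (macOS, where clang builds blocks by default):

      ```c
      #include <dispatch/dispatch.h>
      #include <stdio.h>

      int main(void) {
          dispatch_source_t t = dispatch_source_create(
              DISPATCH_SOURCE_TYPE_TIMER, 0, 0,
              dispatch_get_global_queue(DISPATCH_QUEUE_PRIORITY_DEFAULT, 0));
          dispatch_source_set_timer(t,
              dispatch_time(DISPATCH_TIME_NOW, 0),
              1 * NSEC_PER_SEC,      /* fire roughly once per second...       */
              200 * NSEC_PER_MSEC);  /* ...with 200 ms of leeway for batching */
          dispatch_source_set_event_handler(t, ^{ puts("tick"); });
          dispatch_resume(t);
          dispatch_main();           /* park the main thread; timer fires on the queue */
      }
      ```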


    • There are a lot of such moments; they are just short. When you're playing music, you fetch a bit of data from the network or the SSD/HDD by issuing a request and then waiting (i.e. doing nothing) until the short piece of data comes back. Then you decode it, upload a short stretch of sound to your sound card, and again wait for buffer space to open up before you send more data.

      One of the older ways to do this (on the x86 side) was the HLT instruction https://en.wikipedia.org/wiki/HLT_(x86_instruction) : you halt the processor, and it wakes up when an interrupt arrives. An interrupt might come from the sound card, network card, keyboard, GPU, or a timer (e.g. 100 times a second to schedule another process, if some process is waiting for the CPU). While you wait for the interrupt you simply do nothing, thus saving energy (see the sketch below).

      I suspect things are more complicated in the world of multiple CPUs.
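
      To make the "waiting" concrete, here's a tiny user-space sketch in C (Linux/macOS). While the process is blocked in sleep(), it's off the run queue and no instructions run on its behalf; if nothing else is runnable, the kernel's idle loop executes HLT (or MWAIT) and the core sleeps until the next interrupt:

      ```c
      #include <stdio.h>
      #include <unistd.h>

      int main(void) {
          for (;;) {
              /* Block for a second: the scheduler removes this process from
               * the run queue. With nothing else to run, the kernel halts
               * the core until the timer interrupt wakes it. */
              sleep(1);
              puts("woken up by a timer interrupt");
          }
      }
      ```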

    • We’re not talking about what humans call “a moment”. For a (modern) computer, a millisecond is “a moment”, possibly even “a long moment”. It can run millions of instructions in such a time frame.

      A modern CPU also has multiple cores, not all of which may be needed, and it is supported by hardware that can do lots of tasks on its own.

      For example, sending out an audio signal isn't typically done by the main CPU. It tells some hardware to send a buffer of data at some frequency, then prepares the next buffer, and can sleep or do other work until it has to hand over the new buffer (see the sketch below).
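
      A minimal sketch of that hand-off using ALSA on Linux (assumes a "default" playback device; build with -lasound -lm). snd_pcm_writei blocks once the device's buffer is full, so the process does a short burst of work and then sleeps while the hardware drains samples on its own:

      ```c
      #include <alsa/asoundlib.h>
      #include <math.h>

      int main(void) {
          snd_pcm_t *pcm;
          if (snd_pcm_open(&pcm, "default", SND_PCM_STREAM_PLAYBACK, 0) < 0)
              return 1;
          /* 48 kHz mono 16-bit, ~500 ms of buffering in the device/kernel */
          if (snd_pcm_set_params(pcm, SND_PCM_FORMAT_S16_LE,
                                 SND_PCM_ACCESS_RW_INTERLEAVED,
                                 1, 48000, 1, 500000) < 0)
              return 1;
          int16_t buf[4800];  /* 100 ms of samples per chunk */
          double phase = 0.0;
          for (;;) {
              /* Short burst of CPU work: synthesize (or decode) a chunk. */
              for (int i = 0; i < 4800; i++) {
                  buf[i] = (int16_t)(3000 * sin(phase));
                  phase += 2.0 * M_PI * 440.0 / 48000.0;
              }
              /* Blocks when the device buffer is full: the process sleeps
               * and the core is free to idle until samples drain. */
              if (snd_pcm_writei(pcm, buf, 4800) < 0)
                  snd_pcm_prepare(pcm);  /* recover from an underrun */
          }
      }
      ```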

    • From a human's perception there will "always" be work on a "normal" system.

      However, for a CPU with multiple cores, each running at 2+ GHz, there is plenty of room for idling while seeming active.

    • With multi-core CPUs, some cores can be fully powered off while others handle any background tasks.

What you are probably thinking of is "race to idle": a CPU should process everything it can as quickly as it can (using all the power), and then go to an idle state, instead of processing everything slowly (potentially consuming less power at any given moment) but taking more time overall.

You're probably thinking about memory and caching. There is no advantage to keeping the CPU at 100% when no workload needs to be done.

I'm sure a few more software updates will take care of this little problem...

> computer architecture courses.

I guess it was some _theoretical_ task-scheduling stuff... When you are doing task scheduling, then yes, maybe; it depends on what you optimize for.

... but this bug has nothing to do with that. This bug is about an accounting error.