
Comment by root_axis


> Computers have been running thousands of times slower than they should be for decades

I've been hearing this complaint for decades and I'll never understand it. The suggestion seems completely at odds with my own experience. Regardless of OS, they all seem extremely fast, and feel faster and faster as time goes on.

I remember a time when I could visually see the screen repaint after minimizing a window, or waiting 3 minutes for the OS to boot, or waiting 30 minutes to install a 600mb video game from local media. My m2 air with 16gb of memory only has to reboot for updates, I haphazardly open 100 browser tabs, run spotify, slack, an IDE, build whatever project I'm working on, and the machine occasionally gets warm. Everything works fine, I never have performance issues. My linux machines, gaming pc, and phone feel just as snappy. It feels to me that we are living in a golden age of computer performance.

I think the best example is in iOS. On old iOS versions, the keyboard responsiveness took precedence over everything, no matter what. If you touched the keyboard, it would respond with an animation indicating what you are doing. The app itself may be frozen, but the self contained keyboard process would continue on, letting you know the app you are using is a buggy mess.

Now in iOS 26, you can just be typing in Notes or just the safari address bar for example, and the keyboard will randomly lag behind and freeze, likely because it is waiting on some autocomplete task to run on the keyboard process itself. And this is on top of the line, modern hardware.

A lot of the fundamentals that were focused on in the past to ensure responsiveness to user input, fundamentals that should never have been lost, became lost. And lost for no real good reason, other than lazy development practices, unnecessary abstraction layers, and other modern developer conveniences.

  • Yeah, long ago when I was doing some iOS development, I can remember Apple UX responsiveness mantras like “don’t block the main thread”, as it’s the thing responsible for making app UIs snappy even when something is happening.

    Nowadays it seems like half of Apple’s own software blocks on its main thread; like you said, things like the keyboard lock up for no reason. God forbid you try to paste too much text into a Note: the paste will slow to a crawl. Or, on my M4 Max MacBook (128 GB RAM, 8 TB SSD, Photos library with all originals saved locally), I press cmd-R to rotate an image, and rotating a fully local image can sometimes take >10 seconds while showing a blocking “Rotating Image…” UI. It’s insane how low the bar has dropped for Apple software.

  • This trend was obvious when they started removing physical buttons. My thought was, man these people do put so much faith in software.
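The “don’t block the main thread” mantra from the sub-thread above can be sketched outside of any UI framework. A minimal Python stand-in (the frame budget and task are illustrative assumptions, not Apple API): the “main” loop keeps doing frames of work while a slow task runs on a worker thread instead of inline.

```python
# Sketch: offload slow work so the "main thread" stays responsive.
import queue
import threading
import time

results = queue.Queue()

def slow_task():
    # Stand-in for expensive work (network, disk, autocomplete...).
    time.sleep(0.2)
    results.put("done")

# Offload the slow work instead of calling slow_task() inline.
threading.Thread(target=slow_task, daemon=True).start()

ticks = 0
while True:
    try:
        # Wait at most ~one 60 Hz frame budget for a result...
        msg = results.get(timeout=0.016)
        print("background task:", msg)
        break
    except queue.Empty:
        # ...then go handle input / paint a frame and check again.
        ticks += 1

print(f"frames of UI work done while waiting: {ticks}")
```

Calling `slow_task()` directly in the loop would freeze it for the full 200 ms, roughly twelve dropped frames; the comment above describes exactly that failure mode in the iOS keyboard.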

My M4 Max 128GB ... 90% of the time is like you say.

10% of the time, WindowServer takes off and burns 150% CPU. Or I develop keystroke lag. Or I can't get a terminal open because Time Machine has the backup volume in a half-mounted state.

It's thousands of times faster than the Ultra 1 that was once on my desk. And I can certainly run workloads that fundamentally take thousands of times more cycles. But I usually spend a greater proportion of this machine's speed on the UI, and its responsiveness doesn't always beat what I had 30 years ago.

  • Or contactsd lol

    Spotlight doesn’t make sense either: caches get evicted, but there’s no logic that prevents it from building them back up immediately

    Log processes are fine, but they should never be able to use 100%, or run at the same priority (CPU + I/O) as interactive work
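The priority point above is expressible with standard Unix tools; a sketch (the job itself is a placeholder `echo`; `ionice` is Linux-only, so it's noted in a comment rather than run, to keep the sketch portable):

```shell
# Run a background log/indexing job so it can never outcompete
# interactive work for the CPU:
#   nice -n 19  -> lowest CPU scheduling priority
# On Linux, util-linux can also lower I/O priority:
#   ionice -c 3 -> "idle" I/O class: disk access only when idle
nice -n 19 sh -c 'echo "log job running at lowest CPU priority"'
```

This is the commenter's complaint in reverse: a logging daemon launched this way cannot steal CPU or (with `ionice`) disk time from the foreground app.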

>Regardless of OS, they all seem extremely fast, and feel faster and faster as time goes on.

One analogy is that the distance between two places in the world hasn't changed, and we're not arriving significantly faster than we did when modern jetliners were first introduced. There was a period of new technology followed by rapid incremental progress toward shortened travel times until it leveled off.

However, the number of people able to consistently travel between more places in the world has continued to increase. New airports open regularly, and airliners have been optimized to fit more people, at the cost of passenger comfort.

Similarly, computers, operating systems, and their software aren't aligned in optimizing for user experience. Up to a certain point, user interactions on macOS took highest priority, which is why a single- or dual-core Mac felt more responsive than today's, despite the capabilities and total work capacity of new Macs being orders of magnitude higher.

So we're not really even asking for the equivalent of faster jet planes, here, just wistfully remembering when we didn't need to arrive hours early to wait in lines and have to undress to get through security. Eventually all of us who remember the old era will be gone, and the next people will yearn for something that has changed from the experiences they shared.

Ok. Today we have multi-Ghz processors, with multiple cores at that.

Photons travel about 1 foot per nanosecond ... so the CPU can execute MANY instructions between the time photons leave your screen, and the time they reach your eyes.

Now, on Windows start Word (on a Mac start Writer) ... come on ... I'll wait.

Still with me? Don't blame the SSD; relaunch it and load it again from the cache.

Weep.

  • Not sure what you're getting at. MS Word, full load to ready state after a macOS reboot, takes ~2 seconds on my M1 Mac. If I close and re-open it (so it's in the fs cache) it takes ~1 second.

    • You, and the sibling comment author, have just never experienced a truly responsive UI.

      It is one where the reaction follows the action within a single frame. EDIT: and a frame is 1/60 s, that is 16.(6) ms. I feel bad that I have to mention this basic fact.

      This was possible on 1980s hardware. I witnessed that, I used that. Why is it not possible now?

  • Base model M4 Mac Mini -- takes 2 seconds to load Word (and ready to type) without it being cached. Less than 1 second if I quit it completely, and launch again, which I assume is because it's cached in RAM.
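For scale, the single-frame budget from the sub-thread above works out to an enormous cycle count on modern hardware (the 4 GHz clock is an assumption; the 60 Hz display is from the comment):

```python
# The 1/60 s frame budget, in concrete numbers.
frame_s = 1 / 60                 # ~16.67 ms per frame
clock_hz = 4e9                   # assumed ~4 GHz core
cycles_per_frame = clock_hz * frame_s

print(f"frame budget: {frame_s * 1000:.2f} ms")
print(f"cycles available per frame (one core): {cycles_per_frame:,.0f}")
```

Roughly 67 million cycles per frame per core, which is the commenter's point: 1980s machines hit the frame budget with a tiny fraction of that.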

> Regardless of OS, they all seem extremely fast, and feel faster and faster as time goes on.

This very much depends on what hardware you have and what you're doing on it (how much spare capacity you have).

Back in university I had a Techbite Zin 2, it had a Celeron N3350 and 4 GB of LPDDR4. It was affordable for me as a student (while I also had a PC in the dorm) and the keyboard was great and it worked out nicely for note taking and some web browsing when visiting parents in the countryside.

At the same time, the OS made a world of difference, and the machine was anything but fast. Windows was pretty much unusable on it, and it was the kind of hardware where you started to wonder whether you really needed XFCE or whether LXDE would be enough.

I think both of the statements can be true: that Wirth's law is true and computers run way, way slower than they should due to bad software... and that normally you don't really feel it due to us throwing a lot of hardware at the problem to make us able to ignore it.

It's largely the same as you get with modern video game graphics and engines like UE5, where only now we are seeing horrible performance across the board that mainstream hardware often can't make up for and so devs reach for upscaling and framegen as something they demand you use (e.g. Borderlands 4), instead of just something to use for mobile gaming.

It's also like running ESLint and Prettier on your project and having a full build and formatting iteration take like 2 minutes without cache (though faster with cache), HOWEVER then you install Oxlint and Oxfmt and are surprised to find out that it takes SECONDS for the whole codebase. Maybe the "rewrite it in Rust" folks had a point. Bad code in Rust and similar languages will still run badly, but a fast runtime will make good code fly.

I could also probably compare the old Skype against modern Teams, or probably any split between the pre-Electron and modern day world.

Note: runtime in the loose sense, e.g. compiled native executables, vs the kind that also have GC, vs something like JVM and .NET, vs other interpreters like Python and Ruby and so on. Idk what you'd call it more precisely, execution model?

> Regardless of OS, they all seem extremely fast, and feel faster and faster as time goes on.

The modern throughput is faster by far. However, what some people mean when they talk about "slower" is the latency: the snappiness that characterized early microcomputer systems. That has definitely gotten way worse, in an empirically measurable fashion.

Dan Luu's article explains this very well [1].

It is difficult to go through that lived experience of low latency today, because you don't appreciate it until you've lived with it for years. Few people have access to an Apple ][ rig with a composite monitor for years on end any longer. The hackers that experienced that low latency never forgot it, because the responsiveness feels like a fluid extension of your thoughts in a way higher-latency systems cannot match.

[1] https://danluu.com/input-lag/
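Dan Luu measured the full keypress-to-pixel pipeline with a high-speed camera, which software alone can't reproduce. A crude software-side proxy for the same idea, though: ask the OS to wake you up in exactly one frame and see how late it actually is (a sketch; the 60 Hz budget matches the thread above, and the numbers will vary wildly by OS and load).

```python
# Measure scheduler "oversleep": the gap between when we asked to
# wake up and when the system actually answered.
import time

frame_s = 1 / 60
lateness_ms = []
for _ in range(50):
    t0 = time.perf_counter()
    time.sleep(frame_s)               # "wake me next frame"
    actual = time.perf_counter() - t0
    lateness_ms.append((actual - frame_s) * 1000)

worst = max(lateness_ms)
print(f"worst oversleep over 50 frames: {worst:.2f} ms")
```

On a loaded desktop the worst case can exceed a whole frame, and every layer above the scheduler (compositor, toolkit, app) adds its own slack; the old single-tasking machines simply had none of those layers.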

  • I wonder if this ties into why I'm baffled at the increasing trend of adding fake delays (f/ex "view transitions"). It's maddening to me. It's generally not a masking/performance delay either; I've recompiled a number of android apps for example to remove these sorts of things, and some actions that took an entire second to complete previously happen instantly after modification.