Comment by Gigachad

4 days ago

Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.

The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.

Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it's going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB of RAM and 1Gb fiber.

On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I'm not talking about features that I prefer, but as an example: if you load two links in Reddit in two different tabs, my experience has been that it's 50/50 whether they'll actually both load or whether one gets stuck showing loading skeletons.

  • > Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch.

    > On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional

    I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.

    I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.

    I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.

    • > heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.

      Ugh, I personally witnessed this. I would wait to take my break until I knew the unavoidable, unkillable AV scans had started and would peg my CPU at 100%. I wonder how many human and energy resources are wasted checking for non-existent viruses on corp hardware.

      5 replies →

  • I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.

    On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.

    I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.

    • VS Code defers a lot of tasks to the background at least. This is a bit more visible in intellij; you seem to measure how long it takes to show its window, but how long does it take for it to warm up and finish indexing / loading everything, or before it actually becomes responsive?

      Anyway, five seconds is long for a text editor; 10, 15 years ago, sublime text loaded and opened up a file in <1 second, and it still does today. Vim and co are instant.

      Also keep in mind that desktop computers haven't gotten significantly faster at tasks like opening applications in recent years; they're more efficient (especially the M-line CPUs) and have more hardware for specialist workloads like what they call AI nowadays, but there's been little innovation in application loading.

      You use a lot of words like "pretty close to", "nearly", "essentially", but 10, 20 years ago they WERE instant; applications from 10, 20 years ago should be so much faster today than they were on hardware from back then.

      I wish the big desktop app builders would invest in native applications. I understand why they go for web technology (it's the cross-platform GUI technology that Java and co promised, and it offers the most advanced styling of anything anywhere ever), but I wish they'd invest in native toolkits to bring them up to date.

      10 replies →

    • 5 seconds is a lot for a machine with an M4 Pro, tons of RAM, and a very fast SSD.

      There are native apps just as complicated as VSCode, if not more so, that open faster.

      The real problem is Electron. There's still good, performant native software out there. We've just settled on shipping a web browser with every app instead.

      1 reply →

    • It's probably more so that any corporate Windows box has dozens of extra security and metrics agents installed by IT teams, interrupting and blocking every network request, file open, and OS syscall, while the Macs have some very basic MDM profile applied.

      1 reply →

    • This is my third high-end workstation computer in the last 5 years, and my experience has been roughly consistent across all of them.

      My corporate vpn app is a disaster on so many levels, it’s an internally developed app as opposed to Okta or anything like that.

      I would likewise say that your experience is not universal, and that in many circumstances the situation is much worse. My wife is running an i5 laptop from 2020 and her work intranet takes 60 seconds to load. Outlook startup and sync are measured in minutes, including mailbox fetching. You can say this is all not the app developers' fault, but the cruft that's installed on her machine is slowing things down by 5 or 10x, and that slowdown wouldn't be a big deal if the apps had reasonable load times in the first place.

  • > are all applications I use every day that take 20+ seconds to launch.

    I suddenly remembered some old Corel Draw version, circa 2005, which had a loading screen enumerating the random things it was loading and computing, until a final message: "Less than a minute now...". It did indeed most often take less than a minute to show the interface :).

  • IMO they just don't think of "initial launch speed" as a meaningful performance stat to base their entire tech stack upon. Most of these applications and even websites, once opened, are going to be used for several hours/days/weeks before being closed by most of their users

  • For all the people who are doubting that applications are slow and that it must just be me - here [0] is a debugger that someone has built from the ground up that compiles, launches, attaches a debugger, and hits a breakpoint in the same length of time that Visual Studio displays its splash screen for.

    [0] https://x.com/ryanjfleury/status/1747756219404779845

  • That sounds like a corporate anti-virus slowing everything down to me. vscode takes a few seconds to launch for me from within WSL2, with extensions. IntelliJ on a large project takes a while, I'll give you that, but IntelliJ itself takes only a few seconds to launch.

    • Vscode is actually 10 seconds, you’re right.

      I have no corp antivirus or MDM on this machine, just windows 11 and windows defender.

  • Odd, I tested two news sites (tagesschau.de and bbc.com) and both load in 1 - 2 seconds. Airbnb in about 4 - 6 seconds though. My reddit never gets stuck, or if it does it's on all tabs because something goes wrong on their end.

  • All those things take 4 seconds to launch or load on my M1. Not great, not bad.

    • Even 4-5 seconds is long enough for me to honestly get distracted. That is just so much time even on a single core computer from a decade ago.

      On my home PC, in 4 seconds I could download 500MB, load 12GB off an SSD, perform 12 billion cycles (before pipelining) per core (and I have 24 of them) - and yet Miro still manages to bring my computer to its knees for 15 seconds just to load an empty whiteboard.

  • HOW does Slack take 20s to load for you? My huge corporate Slack takes 2.5s to cold load.

    I'm so dumbfounded. Maybe non-macOS, non-Apple-silicon stuff is complete crap at this point? Maybe the complete dominance of Apple performance is understated?

    • I use Windows alongside my Mac Mini, and I would say they perform pretty similarly (but M-chip is definitely more power efficient).

      I don't use Slack, but I don't think anything takes 20 seconds for me. Maybe XCode, but I don't use it often enough to be annoyed.

      3 replies →

    • Most likely the engineers at many startups only use Apple computers themselves and therefore only optimize performance for those systems. It's a shame, but IMO it's a result of their incompetence and not of some magic Apple performance gains.

    • Yes it is, and the difference isn't understated; I think everyone knows by now that Apple has run away with laptop/desktop performance. They're just leagues ahead.

      It's a mix of better CPUs, better OS design (e.g. much less need for aggressive virus scanners), a faster filesystem, less corporate meddling, high end SSDs by default... a lot of things.

      1 reply →

What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions by a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive than today's systems, as long as you didn't need to hit the hard drives.

The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.

I guess you don't need to wrestle with Xcode?

Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.

E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.

After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).

Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):

On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year-old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.

E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14-year-old predecessor.

I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.

  • You're comparing the art(!) of two different games that targeted two different sets of hardware, while using the ideal hardware for one and not the other. Kind of a terrible example.

    • > You're comparing the art(!)

      The art direction, modelling, and animation work is mostly fine. The worse look results from the lack of dynamic lighting and ambient occlusion in the Oblivion Remaster when Lumen (UE5's realtime global illumination feature) is switched to the lowest setting; this produces completely flat lighting for the vegetation, but it's needed to get an acceptable base frame rate (and it doesn't solve the random stuttering).

      Basically, the best art will always look bad without good lighting (and even baked or faked ambient lighting like in Skyrim looks better than no ambient lighting at all).

      Digital Foundry has an excellent video about the issues:

      https://www.youtube.com/watch?v=p0rCA1vpgSw

      TL;DR: the 'ideal hardware' for the Oblivion Remaster doesn't exist, even if you get the best gaming rig money can buy.

      3 replies →

I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.

  • It's strange: the fact that you can watch the buttons load in suggests they use async technology that can exploit multithreaded CPUs effectively... but it's slower than the old synchronous UI stuff.

    I'm sure it's significantly more expensive to render than Windows 3.11 - XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.

    • Windows 3.1 wasn't checking WiFi, Bluetooth, energy saving profile, night light setting, audio devices, current power status and battery level, and more when clicking the non-existent icon on the non-existent taskbar. Windows XP didn't have this quick settings area at all. But I do recall having the volume slider take a second to render on XP from time to time, and that was only rendering a slider.

      And FWIW this stuff is then cached. I hadn't clicked that setting area in a while (maybe the first time this boot?) and did get a brief gray box that then a second later populated with all the buttons and settings. Now every time I click it again it appears instantly.

      15 replies →

    • XP had gray boxes and laggy menus like you wouldn't believe. It didn't even do search in the start menu, and maybe that was for the best because even on an SSD its search functionality was dog slow.

      A clean XP install in a VM for nostalgia's sake is fine, but XP as actually used by people for a while quickly ground to a halt because of all the third party software you needed.

      The task bar was full of battery widgets, power management icons, tray icons for integrated drivers, probably at least two WiFi icons, and maybe two Bluetooth ones as well. All of them used different menus that were slow in their own right, despite each being a 200KiB executable that looks like it was written in 1995.

      And the random crashes, there were so many random crashes. Driver programs for basic features crashed all the time. Keeping XP running for more than a day or two by using sleep mode was a surefire way to get an unstable OS.

      Modern Windows has its issues but the olden days weren't all that great, we just tolerated more bullshit.

    • Honestly it behaves like the interface is some Electron app that has to load the visual elements from a little internal webserver. That would be a very silly way to build an OS UI though, so I don't know what Microsoft is doing.

  • Yep. I suspect GP has just gotten used to this and it is the new “snappy” to them.

    I see this all the time with people who have old computers.

    “My computer is really fast. I have no need to upgrade”

    I press cmd+tab and watch it take 5 seconds to switch to the next window.

    That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.

    • Yeah, I play around with retro computers all the time. Even with IO devices that are unthinkably performant compared to the storage hardware actually common at the time, these machines are often dog slow. Just rendering JPEGs can be really slow.

      Maybe if you're in a purely text console doing purely text things 100% in memory it can feel snappy. But the moment you do anything graphical or start working on large datasets it's so incredibly slow.

      I still remember trying to do photo editing on a Pentium II with a massive 64MB of RAM. Or trying to get decent resolutions scans off a scanner with a Pentium III and 128MB of RAM.

      6 replies →

  • This one drives me nuts.

    I have to stay connected to VPN to work, and if I see VPN is not connected I click to reconnect.

    If the VPN button hasn't loaded you end up turning on Airplane mode. Ouch.

  • Windows 11 shell partly uses React Native in the start button flyout. It's not a heavily optimized codebase.

There's a problem when people who aren't very sensitive to latency try to track it: their perception of what "instant" actually means is wrong. For them, instant is like one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates, because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.

If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a Twitter video from a while ago that did a good job showing it off; it's one of the replies to the OP. It's here: https://x.com/jmmv/status/1671670996921896960

Sorry if I'm being too presumptuous; you might be completely correct, and instant really is instant in your case.

  • Sure, but there's no limit to what people can decide to care about. There will always be people who want more speed and less latency, but the question is: are they right to do so?

    I'm with the person you're responding to. I use the regular suite of applications and websites on my 2021 M1 Macbook. Things seem to load just fine.

  • > For someone who cares about latency, instant is less than 10 milliseconds

    Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.

    I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.

    The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.

    • I think you're right, maybe the disconnect is UI slowness?

      I am annoyed at the startup time of programs that I keep closed and only open infrequently (Discord is one of those, the update loop takes a buttload of time because I don't use it daily), but I'm not annoyed when something I keep open takes 1-10s to open.

      But when I think of getting annoyed it's almost always because an action I'm doing takes too long. I grew up in an era with worse computers than we have today, but clicking a new list was perceptibly instant: it was like the computer was waiting for the screen to catch up.

      Today, it feels like the computer chugs to show you what you've clicked on. This is especially true with universal software, like chat programs, that everyone in an org is using.

      I think Casey Muratori's point about the watch window in Visual Studio is the right one. The watch window used to be instant, but someone added an artificial delay before processing starts so that the CPU wouldn't churn when stepping quickly through the code. The result is that, well, you have to wait for the watch window to update... which "feels bad".

      https://www.youtube.com/watch?v=GC-0tCy4P1U
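
      A minimal sketch of that kind of artificial delay (the names are hypothetical, not Visual Studio's actual code): updates are held back until input pauses, which saves CPU while stepping quickly, but every refresh then lands at least the delay late.

          // Hold back updates until the caller stops calling for `ms`.
          function debounce<Args extends unknown[]>(fn: (...args: Args) => void, ms: number) {
            let timer: ReturnType<typeof setTimeout> | undefined;
            return (...args: Args) => {
              clearTimeout(timer); // throw away the pending update
              timer = setTimeout(() => fn(...args), ms);
            };
          }

          // Stand-in for whatever actually repaints the watch window.
          function renderWatchValues() { console.log("repaint watch window"); }

          // Stepping quickly now does no redundant work, but every
          // visible update arrives at least 200ms late.
          const refreshWatchWindow = debounce(renderWatchValues, 200);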

  • I fear that such comments are similar to the old 'a monster cable makes my digital audio sound more mellow!'

    The eye perceives at about 10 hz. That's 100ms per capture. As for the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.

    • Well if you believe that, start up a video game with a framerate limiter, set the limit to 10 fps, and tell me how much you enjoy the experience. By default your game will likely be running at either 60 fps or 120 fps if you're vertically synced (it depends on your monitor's refresh rate). Make sure to switch back and forth between 10 and 60/120 to compare.

      Even your average movie captures at 24 hz. Again, very likely you've never actually just compared these things for yourself back to back, as I mentioned originally.

      1 reply →

    • >The eye perceives at about 10 hz. That's 100ms per capture. As for the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.

      It takes effectively no effort to conduct such a study yourself. Just try re-encoding a video at different frame rates up to your monitor refresh rate. Or try looking at a monitor that has a higher refresh rate than the one you normally use.

    • > The eye perceives at about 10 hz.

      Not sure what this means; the eye doesn’t perceive anything. Maybe you’re thinking of saccades or round-trip response times or something else? Those are in the ~100ms range, but that’s different from whether the eye can see something.

      This paper shows pictures can be recognized at 13ms, which is faster than 60hz, and that’s for full scenes, not even motion tracking or small localized changes. https://link.springer.com/article/10.3758/s13414-013-0605-z

      1 reply →

    • Modern operating systems run at 120 or 144Hz screen refresh rates nowadays. I don't know if you're used to it yet, but try going back to 60; it should be pretty obvious when you move your mouse.

I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by the majority of people in the world.

Life on an entry or even mid level windows laptop is a very different world.

  • Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.

    A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
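
    As a rough sketch of that kind of fix (the module name here is made up), the pattern is mostly just moving heavy imports off the startup path:

        // Before: the whole library is parsed and initialized at boot,
        // even if the feature is never used.
        // import { generatePdf } from "some-heavy-pdf-lib";

        // After: loaded lazily, the first time the feature is invoked.
        export async function exportReport(data: unknown) {
          const { generatePdf } = await import("some-heavy-pdf-lib");
          return generatePdf(data);
        }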

    • > Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.

      Same thing happens with UI & Website design. When the designers and front-end devs all have top-spec MacBooks, with 4k+ displays, they design to look good in that environment.

      Then you ship to the rest of the world which are still for the most part on 16:9 1920x1080 (or god forbid, 1366x768), low spec windows laptops and the UI looks like shit and is borderline unstable.

      Now I don't necessarily think things should be designed for the lowest common denominator, but at the very least we should be taking into consideration that the majority of users probably don't have super high end machines or displays. Even today you can buy a brand new "budget" windows laptop that'll come with 8GB of RAM, and a tiny 1920x1080 display, with poor color reproduction and crazy low brightness - and that's what the majority of people are using, if they are using a computer at all and not a phone or tablet.

  • I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.

    • Oh yeah, I didn't even touch on devs being used to working on super fast internet.

      If you're on Mac, go install Network Link Conditioner and crank the download and upload speeds way down. (Xcode > Open Developer Tools > More Developer Tools... > "Additional Tools for Xcode {Version}").
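
      For something scriptable and cross-platform, here's a hedged sketch using Puppeteer's network emulation (the numbers are arbitrary; throughput is in bytes per second):

          import puppeteer from "puppeteer";

          async function main() {
            const browser = await puppeteer.launch();
            const page = await browser.newPage();
            // Emulate a slow link: ~1.5 Mbit/s down, ~750 kbit/s up, 150ms latency.
            await page.emulateNetworkConditions({
              download: (1.5 * 1024 * 1024) / 8,
              upload: (750 * 1024) / 8,
              latency: 150,
            });
            const start = Date.now();
            await page.goto("https://example.com", { waitUntil: "load" });
            console.log(`load took ${Date.now() - start} ms`);
            await browser.close();
          }

          main();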

  • When I bought my current laptop, it was the cheapest one Costco had with 8 gigs of memory, which was at the time plenty for all but specialized uses. I've since upgraded it to 16, which feels like the current standard for that.

    But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.

    • Most cheap laptops these days ship with only one stick of RAM, and thus are only operating in single-channel mode. By adding another memory module, you can operate in dual-channel mode which can increase performance a lot. You can see the difference in performance by running a full memory test in single-channel mode vs multi-channel mode with a program like memtest86 or memtest86+ or others.

A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.

However, you also need to consider 2 additional factors. MacBooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power (compared to the general mass market of private end-consumer devices).

Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.

To note, people will have wildly different tolerance to delays and lag.

On the extreme end, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened whichever laptop was closest to them.

Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.

It really depends on what you look at.

You say snappy, but what is snappy? Right now I have a toy project in progress in Zig that uses user perception as a core concept.

One can rarely react to 10ms of jank. But when you get to bare-metal development, 10ms becomes 10 million reasonably-high-level instructions that could be executed. Now go to a website and click. If you can sense a delay from the JS, that means the jank is approximately 100ms; should clicking that button really cost 100 million instructions?

When you look closely enough you will find that not only is it 100 million instructions, but your operating system and processor pulled tens of thousands of tricks in the background to minimize the jank, and you can still sense it.
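
If you want to put a rough number on that jank, here's a minimal browser sketch (doExpensiveWork is a stand-in for whatever the real click handler does; the requestAnimationFrame callback approximates the next rendered frame):

    // Measure from the click to the next animation frame after the work.
    function doExpensiveWork(): void {
      let x = 0;
      for (let i = 0; i < 1e8; i++) x += i; // ~100M simple operations
    }

    document.querySelector("button")?.addEventListener("click", () => {
      const t0 = performance.now();
      doExpensiveWork();
      requestAnimationFrame(() => {
        console.log(`click-to-next-frame: ${(performance.now() - t0).toFixed(1)} ms`);
      });
    });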

Today even writing in non-optimized, unpopular languages like Prolog is viable because hardware is mind-blowingly fast, and yet some things are slow, because we use that speed to decrease development costs.

Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.

This is absolutely remarkable inefficiency considering the application's core functionality (media playback) was perfected a quarter century ago.

  • And on RhythmBox, on a 2017 laptop it works instantaneously. These big monetized apps were a huge mistake.

One example is Office. Microsoft is going back to preloading Office during Windows boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload Office. But today, what is Office doing that it needs to offload its startup to running at boot?

How long did your computer take to start up, from power off (and no hibernation, although that presumably wasn't a thing yet), the first time you got to use a computer?

How long did it take the last time you had to use an HDD rather than SSD for your primary drive?

How long did it take the first time you got to use an SSD?

How long does it take today?

Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?

> Almost everything loads instantly on my 2021 MacBook

Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.

> 5 year old mobile CPUs load modern SPA web apps with no problems.

"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".

This should sound like clearing a very low bar, but it doesn't seem to.

I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."

  • > we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."

    Says who? Who are these experienced people who know how to write fast software yet think it would be such a huge sacrifice?

    The reality is that people who say things like this don't actually know much about writing fast software, because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.

    These kinds of myths get perpetuated by people who repeat them without having experienced the other side: just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to javascript or python because that's what they learned first.

    • > These kinds of myths get perpetuated by people who repeat them without having experienced the other side: just writing native software. I think mostly it is people rationalizing not learning assembly and sticking to C++ or PERL because that's what they learned first.

      Why stop at C++? Is that what you happen to be comfortable with? Couldn't you create even faster software if you went down another level? Why don't you?

      16 replies →

  • > All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."

    Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.

You're a pretty bad sample: that machine you're talking about probably cost >$2,000 new, and if it's an M-series chip, well, that was a multi-generational improvement.

I (very recently, I might add) used a Razer Blade 18 with an i9-13950HX and 64GB of DDR5 memory, and it felt awfully slow; I'm not sure how much of that is Windows 11's fault, however.

My daily driver is an M2 Macbook Air (or a Threadripper 3970x running linux); but the workers in my office? Dell Latitudes with an i5, 4 real cores and 16G of RAM if they're lucky... and of course, Windows 11.

Don't even ask what my mum uses at home, it cost less than my monthly food bill; and that's pretty normal for people who don't love computers.

People conflate the insanity of running a network cable through every application with the poor performance of their computers.

  • Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.

    Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.

    • Presumably they wanted the telemetry. It's not clear that this was a dev-initiated switch.

      Perhaps we can blame the 'statistical monetization' policies of adtech and then AI for all this -- i'm not entirely sold on developers.

      What, after all, is the difference between an `/etc/hosts` set of loop'd records vs. an ISP's dns -- as far as the software goes?

      3 replies →

    • We (probably) can guess the why - tracking and data opportunities which companies can eventually sell or utilize for profit in some way.

I think it’s a little more nuanced than the broad takes make it seem.

One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.

Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
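
A minimal sketch of the fix, assuming an Express-style server and content-hashed bundle filenames (e.g. app.3f9c1d.js): the hashed assets can be cached essentially forever, while the HTML shell that references them stays revalidated.

    import express from "express";

    const app = express();

    // Content-hashed bundles: cache for a year and mark immutable, so
    // browsers never re-download or even revalidate them.
    app.use("/assets", express.static("dist/assets", {
      maxAge: "1y",
      immutable: true,
    }));

    // The HTML shell must revalidate so users pick up new bundle hashes.
    app.get("/", (_req, res) => {
      res.setHeader("Cache-Control", "no-cache");
      res.sendFile("index.html", { root: "dist" });
    });

    app.listen(3000);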

The Nintendo Switch on a chipset that was outdated a decade ago can run Tears of the Kingdom. It's not sensible that modern hardware is anything less than instant.

  • That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.

    Nintendo comes up with a working game first and then adds the story - BotW/TotK are post-apocalyptic so they don't have to show you too many people on screen at once.

    The other way you can tell this is that both games have the same story even though one is a sequel! Like Ganon takes over the castle/Hyrule and then Link defeats him, but then they go into the basement and somehow Ganon is there again and does the exact same thing again? Makes no sense.

    • > That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.

      Should I draw from this conclusion that modern software is not designed to run on modern hardware?

      1 reply →

    • The framing device for The Legend of Zelda games is that it's a mythological cycle in which Link, Ganon, and Zelda are periodically reborn and the plot begins anew with new characters. It lets them be flexible with the setting, side quests, and characters as the series progresses and it's been selling games for just shy of forty years.

      1 reply →

The proliferation of Electron apps is one of the main things. Discord, Teams, Slack, all dogshit slow. Uses over a gigabyte of RAM, and uses it poorly. There's a noticeable pause any time you do user input; type a character, click a button, whatever it is, it always takes just barely too long.

All of Microsoft's suite is garbage. Outlook, Visual Studio, OneNote.

Edge isn't slow (shockingly), but you know what is? Every webpage. The average web page has 200 dependencies it needs to load--frameworks, ads, libraries, spyware--and each of those dependencies has a 99th-percentile latency of 2 seconds, which means on average at least two of those dependencies take 2 seconds to load, and the page won't load until they do.

Steam is slow as balls. It's 2025 and it's a 32-bit application for some reason.

At my day job, our users complain that our desktop application is slow. It is slow. We talk about performance a lot and how it will be a priority and it's important. Every release, we get tons of new features, and the software gets slower.

My shit? My shit's fast. My own tiny little fiefdom in this giant rat warren is fast. It could be faster, but it's pretty fast. It's not embarrassing. When I look at a flamegraph of our code when my code is running, I really have to dig in to find where my code is taking up time. It's fine. I'm--I don't feel bad. It's fine.

I love this industry. We are so smart. We are so capable of so many amazing things. But this industry annoys me. We so rarely do any of them. We're given a problem, and the solution is some god forsaken abomination of an electron app running javascript code on the desktop and pumping bytes into and out of a fucking DOM. The most innovative shit we can come up with is inventing a virtual dumbass and putting it into everything. The most value we create is division, hate, fear, and loathing on social media.

I'm not mad. I'm just disappointed.

Online Word (or Microsoft 365, or whatever it is called) regularly took me 2 minutes to load a 120-page document. I'm being very literal here. You could see it load in real time at approximately 1 page a second. And it wasn't a network issue, mind you. It was just that slow.

Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.

Try forcefully closing VSCode and your browser, and see how long it takes to open them again. The same is true for most complex webpages/'webapps' (Slack, Discord, etc).

A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.

They're comparing these applications to older applications that loaded instantly on much slower computers.

Both sides are right.

There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:

- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine.

- Better font rendering is the same.

- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.

- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.

- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...

- The data sets we deal with today are often a lot larger.

- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.

- More memory means more data structure overhead to manage that memory.

- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.

... and so on.

  • Then you install Linux and get all that without the mess that is Win11. Inefficient software is inefficient software.

2021 MacBook and 2020 iPhone are not "old". Still using 2018 iPhone. Used a 2021 Macbook until a month ago.

In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.

I notice a pattern in the kinds of software that people are complaining about. They tend to be user-facing interactive software that is either corporate, proprietary, SaaS, “late-stage” or contains large amounts of telemetry. Since I tend to avoid such software, the vast majority of software I use I have no complaints about with respect to speed and responsiveness. The biggest piece of corporate bloatware I have is Chromium which (only) takes 1-2 seconds to launch and my system is not particularly powerful. In the corporate world bloat is a proxy for sophistication, for them it is a desirable feature so you should expect it. They would rather you use several JavaScript frameworks when the job could be done with plain HTML because it shows how rich/important/fashionable/relevant/high-tech they are.

Your 2021 MacBook and 2020 iPhone are top of the line devices. They'll be fine.

Buy something for half that price or less, like most people would be able to, and see if you can still get the same results.

This is also why I'd recommend people with lower budgets to buy high-end second hand rather than recent mid/low tier hardware.

You are using a relatively high-end computer and mobile device. Go and find a cheap x86 laptop and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is a memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects performance).

What is frustrating, though, is that until relatively recently these devices would work fine with JS-heavy apps, and work really well with anything using a native toolkit. As a small illustration of the loop point, see the sketch below.
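
An illustrative sketch (not a benchmark): chained array methods allocate an intermediate array at each step, while a plain loop makes a single pass with no extra allocation.

    const nums = Array.from({ length: 1_000_000 }, (_, i) => i);

    // Allocates one intermediate array for filter and another for map.
    const chained = nums.filter(n => n % 2 === 0).map(n => n * 2);

    // One pass, no intermediate allocations.
    const single: number[] = [];
    for (const n of nums) {
      if (n % 2 === 0) single.push(n * 2);
    }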

It really depends on the software. I have the top-of-the-line M4 Max laptop with 128GB of memory. I recently switched from Zotero [1] to using papis [2] at the command line.

Zotero would take 30 seconds to a minute to start up. papis has no startup time as it's a cli app and searching is nearly instantaneous.

There is no reason for Zotero to be so slow. In fact, before switching I had to cut down on the number of papers it was managing because at one point it stopped loading altogether.

It's great you haven't run into poorly optimized software, but not everyone is so lucky.

[1]: https://www.zotero.org/ [2]: https://github.com/papis/papis

It vastly depends on what software you're forced to use.

Here's some software I use all the time, which feels horribly slow, even on a new laptop:

Slack.

Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.

Outlook

Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.

Teams

Switching tabs on teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen", but there's probably a real name out there for waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant; it should not take 300-500ms or so of waiting for the options to render.

These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.

Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?

Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.

  • Some of this is due to the adoption of React. GUI optimization techniques that used to be common are hard to pull off in the React paradigm. For instance, pre-rendering parts of the UI that are invisible doesn't mesh well with the React model in which the UI tree is actually being built or destroyed in response to user interactions and in which data gets loaded in response to that, etc. The "everything is functional" paradigm is popular for various legitimate reasons, although React isn't really functional. But what people often forget is that functional languages have a reputation for being slow...
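
    A hedged sketch of that trade-off (ExpensivePanel is a hypothetical costly subtree): the idiomatic React pattern destroys the panel's tree and state on close, so reopening pays the full cost again, while keeping it mounted but hidden trades memory for an instant reopen.

        import * as React from "react";

        // Hypothetical stand-in for a costly subtree (charts, fetched data...).
        function ExpensivePanel() {
          return <div>lots of rows, charts, fetched data...</div>;
        }

        // Idiomatic React: the subtree is destroyed when closed, rebuilt on open.
        function Sidebar({ open }: { open: boolean }) {
          return open ? <ExpensivePanel /> : null;
        }

        // Pre-rendered: hidden but kept mounted, so reopening is instant.
        function SidebarPrerendered({ open }: { open: boolean }) {
          return (
            <div style={{ display: open ? "block" : "none" }}>
              <ExpensivePanel />
            </div>
          );
        }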

  • I don't get any kind of spinner on Outlook opening emails. Especially emails which are pure text or only lightly stylized open instantly. Even emails with calendar invites load really fast, I don't see any kind of spinner graphic at all.

    Running latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th gen i5, while also on a Teams call with a ton of other things active on my machine.

    This is also a machine with a lot of corporate security tools sapping a lot of cycles.

    • I guess I shall screen record it; this is a new-ish Windows 11 laptop.

      (This might also be a "new Outlook" vs "old Outlook" thing?)

      1 reply →

  • I’d take 50ms but in my experience it’s more like 250.

    • You're probably right, I'm likely massively underestimating the time. It's long enough to be noticeable, but not so long that it feels instantly frustrating the first time; it just contributes to an overall sluggishness.

I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit” such that subsequent launches are faster. Such is the benefit of more RAM.

I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).

A better example is Visual Studio [0], since it’s apples to apples.

[0]: https://youtu.be/MR4i3Ho9zZY

A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better, people would actually use them. It's not like you can't get them to work.

  • As a regular user of vim, tmux and cscope for programming in C, may I say that not only do I prefer the old tools, but I use them regularly.

I can never tell if all of these comments are exaggerations to make a point, or if some people really have computers so slow that everything takes 20 seconds to launch (like the other comment claims).

I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.

I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.

  • It's not an exaggeration.

    I have a 12 core Ryzen 9 with 64GB of RAM, and clicking the emoji reaction button in Signal takes long enough to render the fixed set of emojis that I've begun clicking the empty space where I know the correct emoji will appear.

    For years I've been hitting the Windows key, typing the three or four unique characters for the app I want and hitting enter, because the start menu takes too long to appear. As a side note, that no longer works since Microsoft decided that predictability isn't a valuable feature, and the list doesn't filter the same way every time or I get different results depending on how fast I type and hit enter.

    Lots of people literally outpace the fastest hardware on the market, and that is insane.

    • I have a 16 core Ryzen 9 with 128GB of RAM. I have not noticed any slowness in Signal. This might be caused by differences in our operating systems. It sounds like you run Windows. I run Gentoo Linux.

    • > It's not an exaggeration.

      The comment I quoted was about 20 second load times, not a slight delay before something is clickable. That's the exaggeration.

      FWIW, I don't see the same slowness in Signal, like the other poster.

Mine open instantly, as long as I only have one open at a time. The power users on HN likely encounter a lot of slow loading apps, like I do.

Apple, unlike the other Silicon Valley giants, has figured out that latency >>> throughput. Minimizing latency is much more important for making a program "feel" fast than maximizing throughput. Some of the apps I interact with daily are Slack, Teams (ugh), Gmail, and YouTube, and they are all slow as dogshit.

Yup, people run software on shitty computers and blame all the software.

The only slow (local) software I know of is LLVM and C++ compilers

Others are pretty fast

  • You have stories of people running 2021 MacBooks and complaining about performance. Those are not shitty computers.