Comment by _aavaa_
4 days ago
Except we've squandered that 1000x not on bounds checking but on countless layers of abstractions and inefficiency.
Am I taking crazy pills or are programs not nearly as slow as HN comments make them out to be? Almost everything loads instantly on my 2021 MacBook and 2020 iPhone. Every program is incredibly responsive. 5 year old mobile CPUs load modern SPA web apps with no problems.
The only thing I can think of that’s slow is Autodesk Fusion starting up. Not really sure how they made that so bad but everything else seems super snappy.
Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch. My corporate VPN app takes 30 seconds to go from a blank screen to deciding if it’s going to prompt me for credentials or remember my login, every morning. This is on an i9 with 64GB ram, and 1Gb fiber.
On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional, and their core functionality has regressed significantly in the last decade. I’m not talking about features that I prefer, but as an example if you load two links in Reddit in two different tabs my experience has been that it’s 50/50 if they’ll actually both load or if one gets stuck showing loading skeletons.
> Slack, teams, vs code, miro, excel, rider/intellij, outlook, photoshop/affinity are all applications I use every day that take 20+ seconds to launch.
> On the website front - Facebook, twitter, Airbnb, Reddit, most news sites, all take 10+ seconds to load or be functional
I just launched IntelliJ (first time since reboot). Took maybe 2 seconds to the projects screen. I clicked a random project and was editing it 2 seconds after that.
I tried Twitter, Reddit, AirBnB, and tried to count the loading time. Twitter was the slowest at about 3 seconds.
I have a 4 year old laptop. If you're seeing 10 second load times for every website and 20 second launch times for every app, you have something else going on. You mentioned corporate VPN, so I suspect you might have some heavy anti-virus or corporate security scanning that's slowing your computer down more than you expect.
I'm on a four year old mid-tier laptop and opening VS Code takes maybe five seconds. Opening IDEA takes five seconds. Opening twitter on an empty cache takes perhaps four seconds and I believe I am a long way from their servers.
On my work machine slack takes five seconds, IDEA is pretty close to instant, the corporate VPN starts nearly instantly (although the Okta process seems unnecessarily slow I'll admit), and most of the sites I use day-to-day (after Okta) are essentially instant to load.
I would say that your experiences are not universal, although snappiness was the reason I moved to apple silicon macs in the first place. Perhaps Intel is to blame.
> are all applications I use every day that take 20+ seconds to launch.
I suddenly remembered an old Corel Draw version, circa 2005, which had a loading screen enumerating the random things it was loading and computing, ending with the message "Less than a minute now...". And indeed, it most often took less than a minute to show the interface :).
IMO they just don't think of "initial launch speed" as a meaningful performance stat to base their entire tech stack upon. Most of these applications and even websites, once opened, are going to be used for several hours/days/weeks before being closed by most of their users
For all the people doubting that applications are slow and insisting it must just be me - here [0] is a debugger that someone built from the ground up which compiles, launches, attaches a debugger, and hits a breakpoint in the same length of time that Visual Studio displays its splash screen for.
[0] https://x.com/ryanjfleury/status/1747756219404779845
That sounds like corporate anti-virus slowing everything down to me. vscode takes a few seconds to launch for me from within WSL2, with extensions. IntelliJ on a large project takes a while, I'll give you that, but IntelliJ by itself takes only a few seconds to launch.
Odd, I tested two news sites (tagesschau.de and bbc.com) and both load in 1 - 2 seconds. Airbnb in about 4 - 6 seconds though. My reddit never gets stuck, or if it does it's on all tabs because something goes wrong on their end.
How does your vscode take 20+ seconds to launch? Mine launches in 2 seconds.
All those things take 4 seconds to launch or load on my M1. Not great, not bad.
> This is on an i9
On which OS?
HOW does Slack take 20s to load for you? My huge corporate Slack takes 2.5s to cold load.
I'm so dumbfounded. Maybe non-macOS, non-Apple-silicon stuff is complete crap at this point? Maybe the complete dominance of Apple performance is understated?
What timescale are we talking about? Many DOS stock and accounting applications were basically instantaneous. There are some animations on iPhone that you can't disable that take longer than a series of keyboard actions by a skilled operator in the 90s. Windows 2k with a stripped shell was way more responsive than today's systems as long as you didn't need to hit the hard drives.
The "instant" today is really laggy compared to what we had. Opening Slack takes 5s on a flagship phone and opening a channel which I just had open and should be fully cached takes another 2s. When you type in JIRA the text entry lags and all the text on the page blinks just a tiny bit (full redraw). When pages load on non-flagship phones (i.e. most of the world), they lag a lot, which I can see on monitoring dashboards.
I guess you don't need to wrestle with Xcode?
Somehow the Xcode team managed to make startup and some features in newer Xcode versions slower than older Xcode versions running on old Intel Macs.
E.g. the ARM Macs are a perfect illustration that software gets slower faster than hardware gets faster.
After a very short 'free lunch' right after the Intel => ARM transition we're now back to the same old software performance regression spiral (e.g. new software will only be optimized until it feels 'fast enough', and that 'fast enough' duration is the same no matter how fast the hardware is).
Another excellent example is the recent release of the Oblivion Remaster on Steam (which uses the brand new UE5 engine):
On my somewhat medium-level PC I have to reduce the graphics quality in the Oblivion Remaster so much that the result looks worse than 14-year old Skyrim (especially outdoor environments), and that doesn't even result in a stable 60Hz frame rate, while Skyrim runs at a rock-solid 60Hz and looks objectively better in the outdoors.
E.g. even though the old Skyrim engine isn't nearly as technologically advanced as UE5 and had plenty of performance issues at launch on a ca. 2010 PC, the Oblivion Remaster (which uses a "state of the art" engine) looks and performs worse than its own 14-year-old predecessor.
I'm sure the UE5-based Oblivion remaster can be properly optimized to beat Skyrim both in looks and performance, but apparently nobody cared about that during development.
You're comparing the art(!) of two different games, that targeted two different sets of hardware while using the ideal hardware for one and not the other. Kind of a terrible example.
I just clicked on the network icon next to the clock on a Windows 11 laptop. A gray box appeared immediately, about one second later all the buttons for wifi, bluetooth, etc appeared. Windows is full of situations like this, that require no network calls, but still take over one second to render.
It's strange - the buttons visibly loading suggests they use async code that can exploit multithreaded CPUs effectively... but it's slower than the old synchronous UI stuff.
I'm sure it's significantly more expensive to render than Windows 3.11 through XP were - rounded corners and scalable vector graphics instead of bitmaps or whatever - but surely not that much? And the resulting graphics can be cached.
Yep. I suspect GP has just gotten used to this and it is the new “snappy” to them.
I see this all the time with people who have old computers.
“My computer is really fast. I have no need to upgrade”
I press cmd+tab and watch it take 5 seconds to switch to the next window.
That’s a real life interaction I had with my parents in the past month. People just don’t know what they’re missing out on if they aren’t using it daily.
This one drives me nuts.
I have to stay connected to VPN to work, and if I see VPN is not connected I click to reconnect.
If the VPN button hasn't loaded you end up turning on Airplane mode. Ouch.
Windows 11 shell partly uses React Native in the start button flyout. It's not a heavily optimized codebase.
There's a problem when people who aren't very sensitive to latency try to track it: their perception of what "instant" actually means is wrong. For them, instant is like one second. For someone who cares about latency, instant is less than 10 milliseconds, or whatever threshold makes the difference between input and result imperceptible. People have the same problem judging video game framerates because they don't compare them back to back very often (there are perceptual differences between framerates of 30, 60, 120, 300, and 500, at the minimum, even on displays incapable of refreshing at these higher speeds), but you'll often hear people say that 60 fps is "silky smooth," which is not true whatsoever lol.
If you haven't compared high and low latency directly next to each other then there are good odds that you don't know what it looks like. There was a twitter video from a while ago that did a good job showing it off; it's one of the replies to the OP: https://x.com/jmmv/status/1671670996921896960
Sorry if I'm being too presumptuous; you might be completely correct, and instant really is instant in your case.
Sure, but there's no limit to what people can decide to care about. There will always be people who want more speed and less latency, but the question is: are they right to do so?
I'm with the person you're responding to. I use the regular suite of applications and websites on my 2021 M1 Macbook. Things seem to load just fine.
> For someone who cares about latency, instant is less than 10 milliseconds
Click latency of the fastest input devices is about 1ms and with a 120Hz screen you're waiting 8.3ms between frames. If someone is annoyed by 10ms of latency they're going to have a hard time in the real world where everything takes longer than that.
I think the real difference is that 1-3 seconds is completely negligible launch time for an app when you're going to be using it all day or week, so most people do not care. That's effectively instant.
The people who get irrationally angry that their app launch took 3 seconds out of their day instead of being ready to go on the very next frame are just never going to be happy.
I fear that such comments are similar to the old 'a monster cable makes my digital audio sound more mellow!'
The eye perceives at about 10 Hz. That's 100ms per capture. As for the rest, I'd have to see a study that shows how any higher framerate can possibly be perceived or useful.
I'd wager that a 2021 MacBook, like the one I have, is stronger than the laptop used by the majority of people in the world.
Life on an entry or even mid level windows laptop is a very different world.
Yep. Developers make programs run well enough on the hardware sitting on our desks. So long as we’re well paid (and have decent computers ourselves), we have no idea what the average computing experience is for people still running 10yo computers which were slow even for the day. And that keeps the treadmill going. We make everyone need to upgrade every few years.
A few years ago I accidentally left my laptop at work on a Friday afternoon. Instead of going into the office, I pulled out a first generation raspberry pi and got everything set up on that. Needless to say, our nodejs app started pretty slowly. Not for any good reason - there were a couple modules which pulled in huge amounts of code which we didn’t use anyway. A couple hours work made the whole app start 5x faster and use half the ram. I would never have noticed that was a problem with my snappy desktop.
I've found so many performance issues at work by booting up a really old laptop or working remotely from another continent. It's pretty straightforward to simulate either poor network conditions or generally low performance hardware, but we just don't generally bother to chase down those issues.
When I bought my current laptop, it was the cheapest one Costco had with 8 gigs of memory, which was at the time plenty for all but specialized uses. I've since upgraded it to 16, which feels like the current standard for that.
But...why? Why on earth do I need 16 gigs of memory for web browsing and basic application use? I'm not even playing games on this thing. But there was an immediate, massive spike in performance when I upgraded the memory. It's bizarre.
A mix of both. There are a large number of websites that are inefficiently written, using up unnecessary amounts of resources. Semi-modern devices make up for that by just having a massive amount of computing power.
However, you also need to consider two additional factors. Macbooks and iPhones, even 4 year old ones, have usually been at the upper end of the scale for processing power (when compared to the general mass market of private end-consumer devices).
Try doing the same on a 4 year old 400 Euro laptop and it might look a bit different. Also consider your connection speed and latency. I usually have no loading issue either. But I have a 1G fiber connection. My parents don't.
It really depends on what you look at.
You say snappy, but what is snappy? Right now I have a toy project in progress in zig that uses user perception as a core concept.
One can rarely react to 10ms of jank. But when you get down to bare-metal development, 10ms is a budget of 10 million reasonably high-level instructions. Now go to a website and click. If you can sense a delay from the JS, the jank is approximately 100ms; should clicking that button really take 100 million instructions?
When you look closely enough you will find that not only is it 100 million instructions, but your operating system and processor have pulled tens of thousands of tricks in the background to minimize the jank - and you can still sense it.
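For the curious, here's a rough way to feel this out yourself - a minimal browser sketch (the "#some-button" selector is a placeholder for whatever control feels sluggish):

    // Logs how long a click delays the next repaint.
    const button = document.querySelector<HTMLButtonElement>("#some-button");
    button?.addEventListener("click", () => {
      const start = performance.now();
      // requestAnimationFrame fires just before the next repaint, so the
      // elapsed time approximates the click-to-frame jank.
      requestAnimationFrame(() => {
        console.log(`click -> next frame: ${(performance.now() - start).toFixed(1)}ms`);
      });
    });

Anything consistently above one ~16ms frame there is jank you can plausibly feel.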
Today even writing in non-optimized, unpopular languages like Prolog is viable because hardware is mindblowingly fast, and yet some things are slow, because we utilize that speed to decrease development costs.
To note, people will have wildly different tolerance to delays and lag.
On the extreme end, my retired parents don't feel the difference between 5s or 1s when loading a window or clicking somewhere. I offered a switch to a new laptop, cloning their data, and they didn't give a damn and just opened whichever laptop was closest to them.
Most people aren't that desensitized, but for some a 600ms delay is instantaneous, while for others it's 500ms too slow.
Spotify takes 7 seconds from clicking on its icon to playing a song on a 2024 top-of-the-range MacBook Pro. Navigating through albums saved on your computer can take several seconds. Double clicking on a song creates a 1/4sec pause.
This is absolutely remarkable inefficiency considering the application's core functionality (media playback) was perfected a quarter century ago.
And on RhythmBox, on a 2017 laptop it works instantaneously. These big monetized apps were a huge mistake.
One example is Office. Microsoft is going back to preloading Office during Windows boot so that you don't notice it loading. With the average system spec 25 years ago it made sense to preload Office. But today, what is Office doing that it needs to offload its startup to boot time?
It depends. Can Windows 3.11 be faster than Windows 11? Sure, maybe even in most cases: https://jmmv.dev/2023/06/fast-machines-slow-machines.html
How long did your computer take to start up, from power off (and no hibernation, although that presumably wasn't a thing yet), the first time you got to use a computer?
How long did it take the last time you had to use an HDD rather than SSD for your primary drive?
How long did it take the first time you got to use an SSD?
How long does it take today?
Did literally anything other than the drive technology ever make a significant difference in that, in the last 40 years?
> Almost everything loads instantly on my 2021 MacBook
Instantly? Your applications don't have splash screens? I think you've probably just gotten used to however long it does take.
> 5 year old mobile CPUs load modern SPA web apps with no problems.
"An iPhone 11, which has 4GB of RAM (32x what the first-gen model had), can run the operating system and display a current-day webpage that does a few useful things with JavaScript".
This should sound like clearing a very low bar, but it doesn't seem to.
I think it's a very theoretical argument: we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
> we could of course theoretically make everything even faster. It's nowhere near the most optimal use of the available hardware. All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Says who? Who are these experienced people that know how to write fast software that think it is such a huge sacrifice?
The reality is that people who say things like this don't actually know much about writing fast software, because it really isn't that difficult. You just can't grab Electron and the latest JavaScript React framework craze.
These kinds of myths get perpetuated by people who repeat them without having experienced the side of just writing native software. I think mostly it is people rationalizing not learning C++ and sticking to JavaScript or Python because that's what they learned first.
> All we'd have to give up is squishy hard-to-measure things like "feature sets" and "engineering velocity."
Would we? Really? I don't think giving up performance needs to be a compromise for the number of features or speed of delivering them.
You're a pretty bad sample: that machine you're talking about probably cost >$2,000 new, and if it's an M-series chip, well, that was a multi-generational improvement.
I (very recently, I might add) used a Razer Blade 18 with an i9-13950HX and 64GB of DDR5 memory, and it felt awfully slow; not sure how much of that is Windows 11's fault, however.
My daily driver is an M2 Macbook Air (or a Threadripper 3970x running linux); but the workers in my office? Dell Latitudes with an i5, 4 real cores and 16G of RAM if they're lucky... and of course, Windows 11.
Don't even ask what my mum uses at home, it cost less than my monthly food bill; and that's pretty normal for people who don't love computers.
People conflate the insanity of running a network cable through every application with the poor performance of their computers.
Correction: devs have made the mistake of turning everything into remote calls, without having any understanding as to the performance implications of doing so.
Sonos’ app is a perfect example of this. The old app controlled everything locally, since the speakers set up their own wireless mesh network. This worked fantastically well. Someone at Sonos got the bright idea to completely rewrite the app such that it wasn’t even backwards-compatible with older hardware, and everything is now a remote call. Changing volume? Phone —> Router —> WAN —> Cloud —> Router —> Speakers. Just… WHY. This failed so spectacularly that the CEO responsible stepped down / was forced out, and the new one claims that fixing the app is his top priority. We’ll see.
I think it’s a little more nuanced than the broad takes make it seem.
One of the biggest performance issues I witness is that everyone assumes a super fast, always on WiFi/5G connection. Very little is cached locally on device so even if I want to do a very simple search through my email inbox I have to wait on network latency. Sometimes that’s great, often it really isn’t.
Same goes for many SPA web apps. It’s not that my phone can’t process the JS (even though there’s way too much of it), it’s poor caching strategies that mean I’m downloading and processing >1MB of JS way more often than I should be. Even on a super fast connection that delay is noticeable.
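For what it's worth, the serving-side fix here is old and cheap. A sketch using Express (the paths and port are placeholders), assuming bundles carry a content hash in the filename:

    import express from "express";

    const app = express();

    // Content-hashed bundles (e.g. app.3f9c2b.js) never change, so they can
    // be cached indefinitely; repeat visits then skip the download entirely.
    app.use("/static", express.static("dist", {
      immutable: true, // adds Cache-Control: immutable
      maxAge: "365d",  // one-year max-age
    }));

    app.listen(3000);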
The Nintendo Switch, on a chipset that was outdated a decade ago, can run Tears of the Kingdom. There's no sensible reason for modern hardware to be anything less than instant.
That's because TOTK is designed to run on it, with careful compromises and a lot of manual tuning.
Nintendo comes up with a working game first and then adds the story - BotW/TotK are post-apocalyptic so they don't have to show you too many people on screen at once.
The other way you can tell this is that both games have the same story even though one is a sequel! Like Ganon takes over the castle/Hyrule and then Link defeats him, but then they go into the basement and somehow Ganon is there again and does the exact same thing again? Makes no sense.
The proliferation of Electron apps is one of the main things. Discord, Teams, Slack, all dogshit slow. Uses over a gigabyte of RAM, and uses it poorly. There's a noticeable pause any time you do user input; type a character, click a button, whatever it is, it always takes just barely too long.
All of Microsoft's suite is garbage. Outlook, Visual Studio, OneNote.
Edge isn't slow, (shockingly) but you know what is? Every webpage. The average web page has 200 dependencies it needs to load--frameworks, ads, libraries, spyware--and each of those dependencies has a 99th-percentile latency of 2 seconds, which means on average at least two of those dependencies take 2 seconds to load, and the page won't load until they do.
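(A quick check of that arithmetic - a sketch, where the 99th-percentile figure is the comment's own assumption:

    const deps = 200;
    const pSlow = 0.01; // P(a given dependency hits its ~2s p99 tail)
    const expectedSlow = deps * pSlow;               // 2 slow dependencies on average
    const pAnySlow = 1 - Math.pow(1 - pSlow, deps);  // ~0.87
    console.log(expectedSlow, pAnySlow.toFixed(2));  // 2 "0.87"

So roughly 87% of page loads hit at least one 2-second dependency.)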
Steam is slow as balls. It's 2025 and it's a 32 bit application for some reason.
At my day job, our users complain that our desktop application is slow. It is slow. We talk about performance a lot and how it will be a priority and it's important. Every release, we get tons of new features, and the software gets slower.
My shit? My shit's fast. My own tiny little fiefdom in this giant rat warren is fast. It could be faster, but it's pretty fast. It's not embarrassing. When I look at a flamegraph of our code when my code is running, I really have to dig in to find where my code is taking up time. It's fine. I'm--I don't feel bad. It's fine.
I love this industry. We are so smart. We are so capable of so many amazing things. But this industry annoys me. We so rarely do any of them. We're given a problem, and the solution is some god forsaken abomination of an electron app running javascript code on the desktop and pumping bytes into and out of a fucking DOM. The most innovative shit we can come up with is inventing a virtual dumbass and putting it into everything. The most value we create is division, hate, fear, and loathing on social media.
I'm not mad. I'm just disappointed.
Online Word (or Microsoft 365, or whatever it is called) regularly took me 2 minutes to load a 120 page document. I'm being very literal here. You could see it load in real time approximately 1 page a second. And it wasn't a network issue, mind you. It was just that slow.
Worse, the document strained my laptop so much as I used it, I regularly had to reload the web-page.
Try forcefully closing VSCode and your browser, and see how long it takes to open them again. The same is true for most complex webpages/'webapps' (Slack, Discord, etc).
A lot of other native Mac stuff is also less than ideal. Terminal keeps getting stuck all the time, Mail app can take a while to render HTML emails, Xcode is Xcode, and so on.
They're comparing these applications to older applications that loaded instantly on much slower computers.
Both sides are right.
There is a ton of waste and bloat and inefficiency. But there's also a ton of stuff that genuinely does demand more memory and CPU. An incomplete list:
- Higher DPI displays use intrinsically more memory and CPU to paint and rasterize. My monitor's pixel array uses 4-6X more memory than my late 90s PC had in the entire machine. (See the back-of-envelope sketch after this list.)
- Better font rendering is the same.
- Today's UIs support Unicode, right to left text, accessibility features, different themes (dark/light at a minimum), dynamic scaling, animations, etc. A modern GUI engine is similar in difficulty to a modern game engine.
- Encryption everywhere means that protocols are no longer just opening a TCP connection but require negotiation of state and running ciphers.
- The Web is an incredibly rich presentation platform that comes with the overhead of an incredibly rich presentation platform. It's like PostScript meets a GUI library meets a small OS meets a document markup layer meets...
- The data sets we deal with today are often a lot larger.
- Some of what we've had to do to get 1000X performance itself demands more overhead: multiple cores, multiple threads, 64 bit addressing, sophisticated MMUs, multiple levels of cache, and memory layouts optimized for performance over compactness. Those older machines were single threaded machines with much more minimal OSes, memory managers, etc.
- More memory means more data structure overhead to manage that memory.
- Larger disks also demand larger structures to manage them, and modern filesystems have all kinds of useful features like journaling and snapshots that also add overhead.
... and so on.
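To put a number on the first point, a back-of-envelope calculation (assuming a 4K display):

    // One uncompressed RGBA framebuffer for a 3840x2160 display.
    const width = 3840, height = 2160, bytesPerPixel = 4;
    const mib = (width * height * bytesPerPixel) / (1024 * 1024);
    console.log(`${mib.toFixed(1)} MiB per buffer`); // ~31.6 MiB, before double/triple buffering

A typical late-90s PC had 8-64MB of RAM total, so one modern framebuffer alone is in that range.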
Then you install Linux and get all that without the mess that is Win11. Inefficient software is inefficient software.
2021 MacBook and 2020 iPhone are not "old". I'm still using a 2018 iPhone, and I used a 2021 Macbook until a month ago.
In Carmack's Lex Fridman interview he says he knows C++ devs who still insist on using some ancient version of MSVC because it's *so fast* compared to the latest, on the latest hardware.
I notice a pattern in the kinds of software that people are complaining about. They tend to be user-facing interactive software that is either corporate, proprietary, SaaS, “late-stage” or contains large amounts of telemetry. Since I tend to avoid such software, the vast majority of software I use I have no complaints about with respect to speed and responsiveness. The biggest piece of corporate bloatware I have is Chromium which (only) takes 1-2 seconds to launch and my system is not particularly powerful. In the corporate world bloat is a proxy for sophistication, for them it is a desirable feature so you should expect it. They would rather you use several JavaScript frameworks when the job could be done with plain HTML because it shows how rich/important/fashionable/relevant/high-tech they are.
I have a 2019 Intel MacBook, and Outlook takes about five seconds to load and constantly sputters.
Your 2021 MacBook and 2020 iPhone are top of the line devices. They'll be fine.
Buy something for half that price or less, like most people would be able to, and see if you can still get the same results.
This is also why I'd recommend people with lower budgets to buy high-end second hand rather than recent mid/low tier hardware.
You are using a relatively high-end computer and mobile device. Go and find a cheap x86 laptop and try doing the same. It will be extremely painful. Most of this is due to a combination of Windows 11 being absolute trash and JavaScript being used extensively in applications/websites. JavaScript is a memory hog and can be extremely slow depending on how it is written (how you deal with loops massively affects performance).
What is frustrating, though, is that until relatively recently these devices would work fine with JS-heavy apps, and work really well with anything using a native toolkit.
It really depends on the software. I have the top-of-the-line M4 Max laptop with 128GB of memory. I recently switched from Zotero [1] to using papis [2] at the command line.
Zotero would take 30 seconds to a minute to start up. papis has no startup time, as it's a CLI app, and searching is nearly instantaneous.
There is no reason for Zotero to be so slow. In fact, before switching I had to cut down on the number of papers it was managing because at one point it stopped loading altogether.
It's great you haven't run into poorly optimized software, but not everyone is so lucky.
[1]: https://www.zotero.org/ [2]: https://github.com/papis/papis
It vastly depends on what software you're forced to use.
Here's some software I use all the time, which feels horribly slow, even on a new laptop:
Slack.
Switching channels on slack, even when you've just switched so it's all cached, is painfully slow. I don't know if they build in a 200ms or so delay deliberately to mask when it's not cached, or whether it's some background rendering, or what it is, but it just feels sluggish.
Outlook
Opening an email gives a spinner before it's opened. Emails are about as lightweight as it gets, yet you get a spinner. It's "only" about 200ms, but that's still 200ms of waiting for an email to open. Plain text emails were faster 25 years ago. Adding a subset of HTML shouldn't have caused such a massive regression.
Teams
Switching tabs on Teams has the same delayed feeling as Slack. Every interaction feels like it's waiting 50-100ms before actioning. Clicking an empty calendar slot to book a new event gives 30-50ms of what I've mentally internalised as "Electron blank-screen" - there's probably a real name out there for waiting for a new dialog/screen to even have a chrome, let alone content. Creating a new calendar event should be instant; it should not take 300-500ms or so of waiting for the options to render.
These are basic "productivity" tools in which every single interaction feels like it's gated behind at least a 50ms debounce waiting period, with often extra waiting for content on top.
Is the root cause network hops or telemetry? Is it some corporate antivirus stealing the computer's soul?
Ultimately the root cause doesn't actually matter, because no matter the cause, it still feels like I'm wading through treacle trying to interact with my computer.
Some of this is due to the adoption of React. GUI optimization techniques that used to be common are hard to pull off in the React paradigm. For instance, pre-rendering parts of the UI that are invisible doesn't mesh well with the React model in which the UI tree is actually being built or destroyed in response to user interactions and in which data gets loaded in response to that, etc. The "everything is functional" paradigm is popular for various legitimate reasons, although React isn't really functional. But what people often forget is that functional languages have a reputation for being slow...
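To illustrate the pre-rendering point, a minimal TSX sketch (ExpensivePanel is a hypothetical component standing in for any costly subtree):

    import React from "react";

    // Stand-in for a subtree that is expensive to build.
    declare function ExpensivePanel(): React.ReactElement;

    // Idiomatic React: the subtree is destroyed and rebuilt on every toggle.
    function Toggled({ open }: { open: boolean }) {
      return <div>{open && <ExpensivePanel />}</div>;
    }

    // Mounted-but-hidden: the DOM survives toggles - the classic
    // pre-rendering trick that cuts against the build/destroy idiom.
    function PreRendered({ open }: { open: boolean }) {
      return (
        <div style={{ display: open ? "block" : "none" }}>
          <ExpensivePanel />
        </div>
      );
    }

The second form trades memory (the hidden DOM stays resident) for instant toggling.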
I don't get any kind of spinner on Outlook opening emails. Especially emails which are pure text or only lightly stylized open instantly. Even emails with calendar invites load really fast, I don't see any kind of spinner graphic at all.
Running the latest Outlook on Windows 11, currently >1k emails in my Inbox folder, on an 11th-gen i5, while also on a Teams call with a ton of other things active on my machine.
This is also a machine with a lot of corporate security tools sapping a lot of cycles.
I’d take 50ms but in my experience it’s more like 250.
I’m sure you know this, but a reminder that modern devices cache a hell of a lot, even when you “quit”, so that subsequent launches are faster. Such is the benefit of more RAM.
I could compare Slack to, say, HexChat (or any other IRC client). And yeah, it’s an unfair comparison in many ways – Slack has far more capabilities. But from another perspective, how many of them do you immediately need at launch? Surely the video calling code could be delayed until after the main client is up, etc. (and maybe it is, in which case, oh dear).
A better example is Visual Studio [0], since it’s apples to apples.
[0]: https://youtu.be/MR4i3Ho9zZY
Compare it to qutecom, or any other xmpp client.
A lot of nostalgia is at work here. Modern tech is amazing. If the old tools were actually better, people would actually use them. It's not like you can't get them to work.
As a regular user of vim, tmux and cscope for programming in C, may I say that not only do I prefer the old tools, but I use them regularly.
You live in the UNIX world, where this insanity is far less prevalent. Here is an example of what you are missing:
https://www.pcworld.com/article/2651749/office-is-too-slow-s...
I can never tell if all of these comments are exaggerations to make a point, or if some people really have computers so slow that everything takes 20 seconds to launch (like the other comment claims).
I'm sure some of these people are using 10 year old corporate laptops with heavy corporate anti-virus scanning, leading to slow startup times. However, I think a lot of people are just exaggerating. If it's not instantly open, it's too long for them.
I, too, can get programs like Slack and Visual Studio Code to launch in a couple seconds at most, in contrast to all of these comments claiming 20 second launch times. I also don't quit these programs, so the only time I see that load time is after an update or reboot. Even if every program did take 20 seconds to launch and I rebooted my computer once a week, the net time lost would be measured in a couple of minutes.
It's not an exaggeration.
I have a 12 core Ryzen 9 with 64GB of RAM, and clicking the emoji reaction button in Signal takes long enough to render the fixed set of emojis that I've begun clicking the empty space where I know the correct emoji will appear.
For years I've been hitting the Windows key, typing the three or four unique characters for the app I want and hitting enter, because the start menu takes too long to appear. As a side note, that no longer works since Microsoft decided that predictability isn't a valuable feature, and the list doesn't filter the same way every time or I get different results depending on how fast I type and hit enter.
Lots of people literally outpace the fastest hardware on the market, and that is insane.
Watch this https://www.youtube.com/watch?v=GC-0tCy4P1U
Mine open instantly, as long as I only have one open at a time. The power users on HN likely encounter a lot of slow loading apps, like I do.
Apple, unlike the other Silicon Valley giants, has figured out that latency >>> throughput. Minimizing latency is much more important for making a program "feel" fast than maximizing throughput. Some of the apps I interact with daily are Slack, Teams (ugh), Gmail, and YouTube, and they are all slow as dogshit.
Lightroom non-user detected
Yup, people run software on shitty computers and blame all the software.
The only slow (local) software I know of is LLVM and C++ compilers.
The others are pretty fast.
You have stories of people running 2021 MacBooks and complaining about performance. Those are not shitty computers.
Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
The market mostly didn't want 50% faster code as much as it wanted an app that didn't exist before.
If I look at the apps I use on a day-to-day basis that are dog slow and should have been optimized (e.g. Slack, Jira), the core problem is not really a lack of engineering capability in the industry to speed things up; it is just an instance of the principal-agent problem - i.e. I'm not the one buying, I don't get to choose not to use it, and dog-slow is just one of the many dimensions in which they're terrible.
I don’t think abundance vs speed is the right lens.
No user actually wants abundance. They use few programs and would benefit if those programs were optimized.
Established apps could be optimized to the hilt.
But they seldom are.
> They use few programs
Yes, but it's a different 'few programs' than for 99% of all other users, so we're back to square one.
>No user actually wants abundance.
No, all users just want the few programs which they themselves need. The market is not one user, though. It's all of them.
Did people make this exchange or did __the market__? I feel like we're assigning a lot of intention to a self-accelerating process.
You add a new layer of indirection to fix that one problem on the previous layer, and repeat it ad infinitum until everyone is complaining about having too many layers of indirection, yet nobody can avoid interacting with them, so the only short-term solution is yet another abstraction.
> Most of it was exchanged for abstractions which traded runtime speed for the ability to create apps quickly and cheaply.
Really? Because while abstractions like that exist (i.e. webserver frameworks, reactivity, SQL and ORMs, etc.), I would argue that these aren't the abstractions that cause the most maintenance and performance issues. Those are usually in the domain/business logic, and often they aren't something that made development quicker; they were created by a developer who just couldn't help themselves.
> ORMs
Certain ORMs such as Rails's ActiveRecord are part of the problem because they create the illusion that local memory access and DB access are the same thing. This can lead to N+1 queries and similar issues. The same goes for frameworks that pretend that remote network calls are just a regular method access (thankfully, such frameworks seem to have become largely obsolete).
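A sketch of the N+1 shape in pseudo-ORM TypeScript (`db` is made up; ActiveRecord, Hibernate, etc. look much the same):

    // Hypothetical ORM handle.
    declare const db: {
      posts: { findAll(opts?: { include?: string[] }): Promise<Array<{ authorId: number; author?: { name: string } }>> };
      users: { findById(id: number): Promise<{ name: string }> };
    };

    async function nPlusOne() {
      const posts = await db.posts.findAll(); // 1 query
      for (const post of posts) {
        const author = await db.users.findById(post.authorId); // +N queries, one round-trip each
        console.log(author.name);
      }
    }

    async function eager() {
      const posts = await db.posts.findAll({ include: ["author"] }); // 1-2 queries total
      for (const post of posts) {
        console.log(post.author!.name); // already in memory, no extra round-trip
      }
    }

The illusion is that both loops look identical; only one of them crosses the network N times.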
I think they’re referring to Electron.
Edit: and probably writing backends in Python or Ruby or JavaScript.
The major slowdown of modern applications is network calls. Spend 50-500ms a pop for a few kilobytes of data. Many modern applications will casually spin up a half dozen blocking network calls.
This is something I've wished to eliminate too. Maybe we just cast the past 20 years as the "prototyping phase" of modern infrastructure.
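The half-dozen-blocking-calls shape, and the cheap fix, as a TypeScript sketch (the endpoints are placeholders; assume ~100ms per call):

    const urls = ["/api/user", "/api/settings", "/api/feed"];

    async function serial() {
      for (const url of urls) {
        await fetch(url); // each await blocks the next request: ~300ms total
      }
    }

    async function concurrent() {
      await Promise.all(urls.map((url) => fetch(url))); // all in flight at once: ~100ms total
    }

Same data, same server; the serial version just pays the latency N times instead of once.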
It would be interesting to collect a roadmap for optimizing software at scale -- where is there low hanging fruit? What are the prime "offenders"?
Call it a power saving initiative and get environmentally-minded folks involved.
IMO, the prime offender is simply not understanding fundamentals. From simple things like “a network call is orders of magnitude slower than a local disk, which is orders of magnitude slower than RAM…” (and moreover, not understanding that EBS et al. are networked disks, albeit highly specialized and optimized), or doing insertions to a DB by looping over a list and writing each row individually.
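The row-by-row insert looks like this - a sketch, where `client` stands in for any SQL client (e.g. node-postgres) and the table is made up:

    // Minimal stand-in for a SQL client.
    declare const client: { query(sql: string, params: unknown[]): Promise<unknown> };
    const rows: Array<[string, number]> = [["a", 1], ["b", 2], ["c", 3]];

    async function slow() {
      // Anti-pattern: one network round-trip per row.
      for (const [name, value] of rows) {
        await client.query("INSERT INTO items (name, value) VALUES ($1, $2)", [name, value]);
      }
    }

    async function fast() {
      // One round-trip for the whole batch (Postgres-style unnest).
      await client.query(
        "INSERT INTO items (name, value) SELECT * FROM unnest($1::text[], $2::int[])",
        [rows.map((r) => r[0]), rows.map((r) => r[1])],
      );
    }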
I have struggled against this long enough that I don’t think there is an easy fix. My current company is the first I’ve been at that is taking it seriously, and that’s only because we had a spate of SEV0s. It’s still not easy, because (a) I and the other technically-minded people have to find the problems, then figure out how to explain them, and (b) at its heart, it’s a culture war. Properly normalizing your data model is harder than chucking everything into JSON, even if the former will save you headaches months down the road. Learning how to profile code (and fix the problems) may not be exactly hard, but it’s certainly harder than just adding more pods to your deployment.
Use of underpowered databases and abstractions that don't eliminate round-trips is a big one. The hardware is fast but apps take seconds to load because on the backend there's a lot of round-trips to the DB and back, and the query mix is unoptimized because there are no DBAs anymore.
It's the sort of thing that can be handled via better libraries, if people use them. Instead of Hibernate use a mapper like Micronaut Data. Turn on roundtrip diagnostics in your JDBC driver, look for places where they can be eliminated by using stored procedures. Have someone whose job is to look out for slow queries and optimize them, or pay for a commercial DB that can do that by itself. Also: use a database that lets you pipeline queries on a connection and receive the results asynchronously, along with server languages that make it easy to exploit that for additional latency wins.
> on countless layers of abstractions
Even worse, our bottom most abstraction layers pretend that we are running on a single core system from the 80s. Even Rust got hit by that when it pulled getenv from C instead of creating a modern and safe replacement.
And text that is not a pixely or blurry mess. And Unicode.
Unicode has worked since Plan 9. And antialiasing is from the early '90s.