Comment by Aurornis
1 day ago
> Most software houses spend so much time focusing on how expensive engineering time is that they neglect user time. Software houses optimize for feature delivery and not user interaction time.
I don’t know what you mean by software houses, but every consumer-facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric.
This has been common wisdom for decades. I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.
There was a thread here earlier this month,
> Helldivers 2 devs slash install size from 154GB to 23GB
https://news.ycombinator.com/item?id=46134178
A section of the top comment says,
> It seems bizarre to me that they'd have accepted such a high cost (150GB+ installation size!) without entirely verifying that it was necessary!
and the reply to it has,
> They’re not the ones bearing the cost. Customers are.
There was also the GTA wasting minutes to load/parse JSON files at startup. https://nee.lv/2021/02/28/How-I-cut-GTA-Online-loading-times...
And Skylines rendering teeth on models miles away https://www.reddit.com/r/CitiesSkylines/comments/17gfq13/the...
Sometimes the performance is really ignored.
Wasn't there a website with a formula for how much time things like the GTA bug cost humanity as a whole? Something like 5 minutes × users × sessions per day, accumulated?
It cost several human lifetimes, if I remember correctly. Still not as bad as Windows Update, which, if you multiply the time by wages, sets the GDP of a small nation on fire every year..
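The formula alluded to above is easy to sketch. A back-of-the-envelope version (all numbers are illustrative placeholders, not real GTA Online figures):

```python
# "Collective time wasted" estimate: recurring delay, scaled up to
# the whole player base, expressed in human lifetimes.
SECONDS_PER_LIFETIME = 79 * 365.25 * 24 * 3600  # assume a ~79-year lifespan

def lifetimes_wasted(minutes_per_session, users, sessions_per_day, days):
    """Total human lifetimes burned by a recurring delay."""
    total_seconds = minutes_per_session * 60 * users * sessions_per_day * days
    return total_seconds / SECONDS_PER_LIFETIME

# e.g. 5 wasted minutes, 100k daily players, one session a day, for a year:
waste = lifetimes_wasted(5, 100_000, 1, 365)  # a bit over 4 lifetimes
```

Even with modest placeholder numbers the result lands in whole lifetimes, which is why these estimates get quoted so often.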
I met a seasoned game dev who complained to me that he was only ever hired at the end of projects to speed up code written by a bunch of mid/junior-level game devs the company had used to actually make the game. Basically, he said there was only so much time he'd be given, so he'd have to go for the low-hanging fruit and might miss stuff.
We've only got a couple of game dev shops in my city, so not sure how common that is.
1 reply →
That's not how it works. The demand for engineering hours is an order of magnitude higher than the supply for any given game; you have to pick and choose your battles because there's always much, much more to do. It's not bizarre that nobody verified texture storage was being done in an optimal way at launch, without sacrificing load times at the altar of visual fidelity, particularly given the state the rest of the game was in. Who the hell has time to do that when crashes abound and the network stack has to be rewritten at a moment's notice?
Gamedev is very different from other domains, being in the 90th percentile for complexity and codebase size, and the 99th percentile for structural instability. It's a foregone conclusion that you will rewrite huge chunks of your massive codebase many, many times within a single year to accommodate changing design choices or, if you're lucky, to improve an abstraction. Not every team gets so lucky on every project. Launch deadlines are hit with a huge backlog of additional work still to do, sitting atop a mountain of cut features.
> It's not bizarre that nobody verified texture storage was being done in an optimal way at launch
The inverse, however, is bizarre: that they spent potentially quite a bit of engineering effort implementing the (extremely non-optimal) system that duplicates all the assets half a dozen times to potentially save precious seconds on spinning rust, all without validating that it was worth implementing in the first place.
3 replies →
Gamedev engineering hours are also in endless oversupply thanks to myDreamCream brain.
> It's a foregone conclusion that you will rewrite huge chunks of your massive codebase many, many times within a single year
Tell me you don't work on game engines without telling me..
----
Modern engines are the cumulative result of hundreds of thousands of engine-programmer hours. You're not rewriting Unreal in several years, let alone multiple times in one year. Get a grip dude.
1 reply →
I don't think it's quite that simple. The reason they had such a large install size in the first place was concern about load times for players using HDDs instead of SSDs; duplicating the data was intended to avoid making some players load into levels much more slowly than others (which, in an online multiplayer game, would have repercussions for other players as well). The link you give mentions that this was based on flawed data (although it's somewhat light on the details), but that means the actual cause was a combination of a technical mistake and genuine care for user experience; they just chose not to favor the majority at the expense of a smaller but not insignificant minority.

There's certainly room for argument about whether this was the correct judgement call, or whether they should have been better at recognizing their data was flawed, but it doesn't really fit the trend of devs not giving a shit about user experience. If making perfect judgement calls and never having flawed data is the bar for proving you care about users, we might as well give up on the idea that any company will ever reach it.
How about GitHub Actions' safe-sleep logic, where it took over a year to accept a trivial PR fixing a bug that caused actions to hang forever, because someone forgot that you need <= instead of == in a counter check...
Though in this case GitHub wasn't bearing the cost; it was gaining a profit...
https://github.com/actions/runner/pull/3157
https://github.com/actions/runner/issues/3792
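The bug class above is easy to reproduce. A minimal, hypothetical sketch (not the actual runner code): a wait loop that ends only on exact equality hangs forever if the counter ever skips past the limit.

```python
MAX_WAIT_TICKS = 5  # illustrative limit, not the runner's real value

def keep_waiting_buggy(ticks):
    # Exits the wait only when ticks lands exactly on the limit; if the
    # counter ever jumps from 4 to 6, this stays True forever.
    return ticks != MAX_WAIT_TICKS

def keep_waiting_fixed(ticks):
    # Robust comparison: any tick count at or past the limit ends the wait.
    return ticks < MAX_WAIT_TICKS

# A counter that increments by 2 skips the exact value 5:
# keep_waiting_buggy(6) still says "wait", keep_waiting_fixed(6) does not.
```

Inequality comparisons against counters are the defensive default for exactly this reason: they stay correct even when the counter doesn't advance one step at a time.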
> They’re not the ones bearing the cost. Customers are.
I think this is uncharitably erasing the context here.
AFAICT, the reason that Helldivers 2 was larger on disk is because they were following the standard industry practice of deliberately duplicating data in such a way as to improve locality and thereby reduce load times. In other words, this seems to have been a deliberate attempt to improve player experience, not something done out of sheer developer laziness. The fact that this attempt at optimization is obsolete these days just didn't filter down to whatever particular decision-maker was at the reins on the day this decision was made.
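For concreteness, the locality tradeoff can be sketched like this (numbers are made-up illustrations, not Arrowhead's real asset figures): duplicating shared assets into each level's pack makes every level load one long sequential read on an HDD, at the cost of N copies on disk.

```python
SHARED_ASSETS_GB = 20  # hypothetical pool of assets reused by every level
LEVEL_COUNT = 7        # hypothetical number of level packs

def install_size_gb(duplicate_into_each_level):
    # Duplicating shared data into every level pack means one contiguous
    # sequential read per level load (good for HDD seek times), but it
    # multiplies the on-disk footprint by the number of packs.
    if duplicate_into_each_level:
        return SHARED_ASSETS_GB * LEVEL_COUNT
    # De-duplicated layout: one copy on disk, but scattered seeks on
    # spinning rust when a level pulls in shared assets.
    return SHARED_ASSETS_GB
```

With these toy numbers the duplicated layout is 7x larger, which is roughly the ratio between the reported 154GB and 23GB install sizes; on SSDs the seek-time benefit that justified it mostly disappears.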
I worked in e-commerce SaaS around 2011, and this was true then, but I find it less true these days.
Are you sure that you’re not the driving force behind those metrics, or that you’re not self-selecting for like-minded individuals?
I find it really difficult to convince myself that even large players (Discord) are measuring startup time. Every time I start the thing I’m greeted by a 25s wait and a `RAND()%9` number of updates that each take about 5-10s.
I have plenty of responses to an angry comment I made several months ago that support your point.
I took a jab at Word for taking something like 10 seconds to start, and some people came back saying it only takes 2, as if that still isn't 2s too long.
Then again, look at how Microsoft is handling slow File Explorer speeds...
https://news.ycombinator.com/item?id=44944352
I never said that 2s wasn’t too long. I just said your environment was broken if it took 10.
Discord’s user base is 99% people who leave it running 100% of the time; it’s not a typical situation.
I think that they make the startup so horrible that people are more likely to leave it running.
3 replies →
I have the same experience on Windows. On the other hand, starting up Discord on my CachyOS install is virtually instant. So maybe there is also a difference between the platform the developers use and the one their users use.
Yep, indeed. Which is the main reason I don’t run Discord.
I strongly doubt that. The main reason you don’t run it is likely because you don’t have strong motivation to do so, or you’d push through the odd start up time.
3 replies →
On the contrary: every consumer-facing product I've worked on had no performance metrics tracked at all. And for enterprise software it was even worse, as the end user is not the one who decides to buy and use the software.
> what you mean by software houses
How about Microsoft? Start menu is a slow electron app.
The Start menu is not an Electron app. Don't believe everything you read on the internet.
That makes the usability and performance of the windows start menu even more embarrassing.
The decline of Windows as a user-facing product is amazing, especially as they are really good at developing the things they care about. The “back of house” guts of Windows have improved a lot, for example. They should just have a cartoon Bill Gates pop up like Clippy and flip you the bird at this point.
1 reply →
The Start menu is React Native, but Outlook is now an Electron app.
React Native, not Electron. Though it is slower than it was
People believing it says something about the Start menu.
3 replies →
> How about Microsoft? Start menu is a slow electron app.
If your users are trapped due to a lack of competition then this can definitely happen.
If only community actually gathered around the true Linux distribution instead of endless forks.
> I don’t know how many times I’ve heard the repeated quote about how Amazon loses $X million for every Y milliseconds of page loading time, as an example.
This is true for sites that are trying to make sales. You can quantify how much a delay affects closing a sale.
For other apps, it’s less clear. During its high-growth years, MS Office had an abysmally long startup time.
Maybe this was due to MS having a locked-in base of enterprise users. But given that OpenOffice and LibreOffice effectively duplicated long startup times, I don’t think it’s just that.
You also see the Adobe suite (and also tools like GIMP) with some excruciatingly long startup times.
I think it’s very likely that startup times of office apps have very little impact on whether users will buy the software.
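Circling back to the sales case above, the quantification is simple enough to sketch. A toy linear model with placeholder numbers (not Amazon's real data):

```python
def annual_revenue_loss(annual_revenue, added_latency_ms, conv_drop_per_100ms):
    """Toy linear model: every 100 ms of extra latency cuts conversions
    (and thus revenue) by conv_drop_per_100ms, expressed as a fraction."""
    return annual_revenue * conv_drop_per_100ms * (added_latency_ms / 100)

# e.g. a $100M/yr store losing 1% of conversions per 100 ms, made
# 100 ms slower: annual_revenue_loss(100e6, 100, 0.01) -> $1,000,000/yr
```

For a productivity app there is no equivalent per-millisecond line to draw on a dashboard, which is the asymmetry the comment above is pointing at.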
They even made it render the window while still unusable, to make it look like it was already running.
Clearly Amazon doesn't care about that sentiment across the board. Plenty of their products are absurdly slow because of their poor engineering.
The issue here is not tracking, but developing. Like, how do you explain the fact that whole classes of software have gotten worse on those "key metrics"? (And that includes sales-oriented webpages.)
The exception that proves the rule.
Then why do many software houses favor cloud software over on-premise?
Cloud apps often have a noticeable delay responding to user input compared to local software.
> every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric
Are they evaluating the shape of that line with the same goal as the stonk score? Time spent by users is an "engagement" metric, right?
>I don’t know what you mean by software houses, but every consumer facing software product I’ve worked on has tracked things like startup time and latency for common operations as a key metric.
Then respectfully, uh, why is basically all proprietary software slow as ass?