Comment by maccard

16 days ago

I've been involved in decisions like this that seem stupid and obvious. There are a million different things that could/should be fixed, and unless you're proactively monitoring this one, you're unlikely to know it should be changed.

I'm not an Arrowhead employee, but my guess is that at some point in the past they benchmarked it, got a result, and went with it. And that's about all there is to it.

They admitted to testing nothing; they just [googled it].

To be fair, the massive install size was probably the least of the game's problems. Its performance has been atrocious, and when they released for Xbox, the update that came with it broke the game entirely for me; it was unplayable for a few weeks until they released another update.

In their defense, they seem to have been listening to players and have been slowly but steadily improving things.

Playing Helldivers 2 is a social thing for me: I get together online with some close friends and family a few times a month to play and have a chat. Aside from that period where I couldn't play because it was broken, it's been a pretty good experience on Linux, and even better since I switched from Nvidia to AMD just over a week ago.

I'm glad they reduced the install size and saved me ~130GB, and I only had to download about another 20GB to do it.

Performance profiling should be built into the engine and turned on at all times. Then this telemetry could be streamed into a system that tracks it across all builds, down to a specific scene. It should be possible to click a link on the telemetry server and start the game at that exact point.
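
A minimal sketch of what such an always-on hook could look like; everything here (Telemetry, buildId, sceneId, the deep-link idea in the comments) is invented for illustration, not any engine's actual system:

  // Hypothetical always-on frame profiler; all names are illustrative.
  #include <chrono>
  #include <cstdio>
  #include <string>

  struct FrameSample {
      std::string buildId;  // CI build identifier, e.g. a git SHA
      std::string sceneId;  // enough state to relaunch the game here
      double frameMs;       // measured frame time
  };

  struct Telemetry {
      void record(const FrameSample& s) {
          // A real engine would batch these and ship them to a collector;
          // the server could then expose a deep link per sceneId that
          // launches the game at the exact point being measured.
          std::printf("{\"build\":\"%s\",\"scene\":\"%s\",\"ms\":%.3f}\n",
                      s.buildId.c_str(), s.sceneId.c_str(), s.frameMs);
      }
  };

  // Called once per frame from the main loop.
  void onFrame(Telemetry& t, const std::string& build,
               const std::string& scene) {
      using clock = std::chrono::steady_clock;
      static auto last = clock::now();
      const auto now = clock::now();
      double ms = std::chrono::duration<double, std::milli>(now - last).count();
      last = now;
      t.record({build, scene, ms});
  }

The interesting bit is the sceneId: if it encodes enough to recreate the moment (map seed, position, mission state), every regression on the telemetry server becomes a one-click repro.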

>These loading time projections were based on industry data - comparing the loading times between SSD and HDD users where data duplication was and was not used. In the worst cases, a 5x difference was reported between instances that used duplication and those that did not. We were being very conservative and doubled that projection again to account for unknown unknowns.

>We now know that, contrary to most games, the majority of the loading time in HELLDIVERS 2 is due to level-generation rather than asset loading. This level generation happens in parallel with loading assets from the disk and so is the main determining factor of the loading time. We now know that this is true even for users with mechanical HDDs.

They did absolutely zero benchmarking beforehand, just went with industry hearsay, and decided to double it just in case.
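
It's worth spelling out what the quoted explanation implies: with asset streaming and level generation running concurrently, the total load time is roughly the slower of the two phases. A toy illustration, with invented durations:

  #include <algorithm>
  #include <cstdio>

  int main() {
      // Invented numbers, purely to show the shape of the problem.
      double assetLoadSec = 8.0;   // disk-bound: what duplication affects
      double levelGenSec  = 25.0;  // CPU-bound: procedural generation
      // The phases overlap, so the slower one sets the load time. Even a
      // 5x slower disk (40s) would only matter once it exceeds levelGen.
      std::printf("load ~= %.0fs\n", std::max(assetLoadSec, levelGenSec));
  }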

  • Nowhere in that does it say “we did zero benchmarking and just went with hearsay”. Basing things on industry data is solid - looking at the Steam hardware surveys is a good way to figure out the variety of hardware in use without commissioning your own reports. Tech choices are no different.

    Do you benchmark every single decision you make, on every system, on every project you work on? Do you check that that Redis operation is actually O(1), or do you rely on hearsay? Do you benchmark every single SQL query, every DTO, the overhead of the DI framework, the connection pooler, the JSON serializer, the log formatter? Do you ever rely on your own knowledge without verifying the assumptions? Of course you do - you’re human, and we have to make some baseline assumptions, and sometimes they’re wrong.
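
    For the rare assumption that is worth checking, a spot check can be as small as this sketch; the measured operation is a stand-in for whatever call you actually doubt:

      // Generic spot-check harness: measure the median cost of an
      // operation instead of trusting its documented complexity.
      #include <algorithm>
      #include <chrono>
      #include <cstdio>
      #include <vector>

      template <typename F>
      double medianMicros(F&& op, int iters = 5000) {
          std::vector<double> us(iters);
          for (int i = 0; i < iters; ++i) {
              auto t0 = std::chrono::steady_clock::now();
              op();
              auto t1 = std::chrono::steady_clock::now();
              us[i] = std::chrono::duration<double, std::micro>(t1 - t0).count();
          }
          std::nth_element(us.begin(), us.begin() + iters / 2, us.end());
          return us[iters / 2];
      }

      int main() {
          volatile long sink = 0;  // stand-in workload
          double m = medianMicros(
              [&] { for (int i = 0; i < 100; ++i) sink = sink + i; });
          std::printf("median: %.2f us\n", m);
      }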

  • They made a decision based on existing data. This isn't as unreasonable as you are pretending, especially as PC hardware can be quite diverse.

    You would be surprised what some people are playing games on. For example, I know people who still use Windows 7 on an AMD Bulldozer rig. Atypical for sure, but not unheard of.

    • I believe it. Hell, I've been in F500 companies, and virtually all of them had some legacy XP / Server 2000 / ancient Solaris box in there.

      Old stuff is common, and doubly so for a lot of the world, which ain't rich and ain't rockin' new hardware.

  • >They did absolutely zero benchmarking beforehand, just went with industry hearsay, a

    https://en.wikipedia.org/wiki/Wikipedia:Chesterton%27s_fence

    It was a real issue in the past with hard drives and small media assets, and it's still a real issue even with SSDs: random IOPS are still way slower than contiguous reads when you're dealing with a massive number of files.

    At the end of the day it requires testing, which takes time at a stage of development when you don't have a lot of it.
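
    A rough sketch of the kind of test that means; paths and counts here are placeholders, and on a cold page cache the gap between the two numbers is exactly what asset duplication targets:

      // Time reading N small files vs one contiguous file of the same
      // total size. Placeholder paths; run with a cold cache.
      #include <chrono>
      #include <cstdio>
      #include <fstream>
      #include <string>
      #include <vector>

      double readAllMs(const std::vector<std::string>& paths) {
          const auto t0 = std::chrono::steady_clock::now();
          std::vector<char> buf(1 << 20);  // 1 MiB scratch buffer
          for (const auto& p : paths) {
              std::ifstream f(p, std::ios::binary);
              while (f.read(buf.data(), buf.size()) || f.gcount() > 0) {}
          }
          const auto t1 = std::chrono::steady_clock::now();
          return std::chrono::duration<double, std::milli>(t1 - t0).count();
      }

      int main() {
          std::vector<std::string> small;  // e.g. 10,000 x 64 KiB chunks
          for (int i = 0; i < 10000; ++i)
              small.push_back("assets/chunk_" + std::to_string(i) + ".bin");
          std::printf("scattered:  %.0f ms\n", readAllMs(small));
          std::printf("contiguous: %.0f ms\n", readAllMs({"assets/packed.bin"}));
      }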

    • This is not a good invocation of Chesterton's Fence.

      The Fence is a parable about understanding something that already exists before asking to remove it. If you cannot explain why it exists, you shouldn't ask to remove it.

      In this case, it wasn't something that already existed in their game. It was something that they read, then followed (without truly understanding whether it applied to their game), and upon re-testing some time later, realized it wasn't needed and caused detrimental side-effects. So it's not Chesterton's Fence.

      You could argue they followed a videogame industry practice to make a new product, which is reasonable. They just didn't question or test their assumptions that they were within the parameters of said industry practice.

      I don't think it's a terrible sin, mind you. We all take shortcuts sometimes.

    • It's not an issue with asynchronous filesystem IO. Again, async file IO should be the default for game engines. It doesn't take a genius to gather a list of assets to load and then wait for the whole list to finish rather than blocking on every tiny file.
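
      A minimal sketch of that batched pattern, with illustrative file names; std::async stands in for a real engine's IO queue:

        #include <fstream>
        #include <future>
        #include <iterator>
        #include <string>
        #include <vector>

        std::vector<char> readFile(const std::string& path) {
            std::ifstream f(path, std::ios::binary);
            return {std::istreambuf_iterator<char>(f),
                    std::istreambuf_iterator<char>()};
        }

        // Issue every read up front, then wait once on the whole batch,
        // instead of blocking on each tiny file in turn.
        std::vector<std::vector<char>> loadAll(
                const std::vector<std::string>& paths) {
            std::vector<std::future<std::vector<char>>> pending;
            pending.reserve(paths.size());
            for (const auto& p : paths)
                pending.push_back(std::async(std::launch::async, readFile, p));
            std::vector<std::vector<char>> out;
            out.reserve(pending.size());
            for (auto& f : pending)  // single join point
                out.push_back(f.get());
            return out;
        }

      A real engine would cap concurrency with a thread pool or an io_uring-style queue rather than a thread per file, but the shape is the same: one list, one wait.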
