Comment by jy14898

24 days ago

The post stated that it was believed duplication improved loading times on computers with HDDs rather than SSDs

Which is true. It’s an old technique going back to CD game consoles, used to avoid seeks.

  • Is it really possible to control file locations on an HDD via the Windows NTFS API?

    • No, not at all. But by putting every asset a level (for example) needs in the same file, you can pretty much guarantee you can read it all sequentially without additional seeks.

      That does force you to duplicate some assets a lot, and it matters more the slower your seeks are. This technique is perfect for disc media, since the disc has a fixed physical size (so wasting space on it is irrelevant) and slow seeks. (A rough sketch of this bundling approach follows the thread below.)


    • Not really. But when you write a large file at once (like with an installer), you'll tend to get a good amount of sequential allocation (unless your free space is highly fragmented). If you load that large file sequentially, you benefit from both drive and OS read-ahead: when the file is fragmented, the OS will automatically issue speculative reads for the next fragment and hide some of the latency.

      If you break it up into smaller files, those are likely to be allocated all over the disk, plus you'll have delays when opening them because Windows Defender makes opening files slow. And if you instead put all resources into one single large file, then even if that file is mostly sequential it will contain sections you don't need, and read-ahead can work against you by pulling in data you never use.
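
To make the bundling idea in this subthread concrete, here is a minimal build-time sketch of it. It is illustrative only: the file layout, function name, and asset names are made up for the example, and are not taken from any particular engine or from the game being discussed.

```python
# Rough sketch of per-level asset bundling: every asset a level needs is
# copied into one contiguous bundle file, so loading that level is a single
# sequential read, at the cost of duplicating assets shared between levels.
import json
import struct
from pathlib import Path


def build_level_bundle(level_name: str, asset_paths: list[str], out_dir: Path) -> Path:
    """Write all of a level's assets back-to-back into one bundle file.

    The bundle starts with a small JSON index (name -> offset, size), so the
    game can read the whole file front to back and slice assets out of memory.
    """
    blobs = []
    index = {}
    offset = 0
    for path in asset_paths:
        data = Path(path).read_bytes()  # duplicated into every bundle that needs it
        index[Path(path).name] = {"offset": offset, "size": len(data)}
        blobs.append(data)
        offset += len(data)

    header = json.dumps(index).encode("utf-8")
    bundle_path = out_dir / f"{level_name}.bundle"
    with bundle_path.open("wb") as f:
        f.write(struct.pack("<I", len(header)))  # 4-byte little-endian header length
        f.write(header)
        for blob in blobs:                       # asset payloads laid out sequentially
            f.write(blob)
    return bundle_path


# Hypothetical usage: "rock.tex" ends up copied into both bundles, trading
# disk space for seek-free sequential loads of each level.
# build_level_bundle("level01", ["rock.tex", "tree.mesh", "boss01.anim"], Path("out"))
# build_level_bundle("level02", ["rock.tex", "tree.mesh", "boss02.anim"], Path("out"))
```
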

The key word is "believed". It doesn't sound like they actually benchmarked it.

  • There is nothing to believe. Random 4K reads on an HDD are slow.

    • I assume asset reads nowadays are much larger than 4 kB though, especially if assets meant to be loaded together are bundled into one file. So games now should be spending less time seeking relative to their total read size. Combined with HDD caches and parallel reads, this practice of duplicating over 100 GB of data across bundles is most likely a cargo cult by now (some rough numbers below).

      Which makes me think: have there been any advances in disk scheduling in the last decade?
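
To put rough numbers on the seek-versus-transfer point above, here is a small back-of-the-envelope calculation. The 10 ms average seek and 150 MB/s sequential throughput are generic HDD ballpark assumptions, not measurements of any particular drive or game.

```python
# Illustrative figures only: ~10 ms average seek, ~150 MB/s sequential throughput.
SEEK_MS = 10.0
THROUGHPUT_MB_PER_S = 150.0


def read_time_ms(read_size_kb: float) -> tuple[float, float]:
    """Return (total ms for one seek plus read, fraction of that time spent seeking)."""
    transfer_ms = (read_size_kb / 1024) / THROUGHPUT_MB_PER_S * 1000
    total_ms = SEEK_MS + transfer_ms
    return total_ms, SEEK_MS / total_ms


for size_kb in (4, 256, 4 * 1024):
    total, seek_share = read_time_ms(size_kb)
    print(f"{size_kb:>6} KB read: {total:6.1f} ms, {seek_share:4.0%} of it seeking")

# A 4 KB read is almost pure seek time, while a multi-megabyte bundled read
# amortises the seek down to a modest fraction, which is the commenter's point
# about modern asset sizes.
```
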

Who cares? I've installed every graphically intensive game on SSDs since the original OCZ Vertex was released.

  • Their concern was that one person in a squad loading from an HDD could slow down level loading for everyone else in the squad, even those on SSDs, so they used a very normal, time-tested optimisation technique to prevent that.

    • Their technique makes it so that a normal person with a base ~512 GB SSD can't reasonably install the game. Heck of a job, Brownie.
