They duplicate files to reduce load times. Here's how Arrowhead Game Studios themselves tell it:
https://www.arrowheadgamestudios.com/2025/10/helldivers-2-te...
I don't think this is the real explanation. If they gave the filesystem a list of files to fetch in parallel (async file IO), the concept of "seek time" would become almost meaningless. That optimization would make fetching faster on both HDDs and SSDs. They would be going out of their way to make their product worse for no reason.
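A minimal sketch of that approach, using a thread pool to stand in for true async file IO (the asset paths and pool size here are hypothetical):

    import concurrent.futures

    def read_file(path):
        # each worker issues an independent read, so the OS and the
        # drive's command queue can reorder them to minimise seeking
        with open(path, "rb") as f:
            return f.read()

    # hypothetical list of 3000 small asset files
    paths = ["assets/tex_%04d.bin" % i for i in range(3000)]
    with concurrent.futures.ThreadPoolExecutor(max_workers=32) as pool:
        blobs = list(pool.map(read_file, paths))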
Solid state drives tend to respond well to parallel reads, so it's not so clear. If you're reading one at a time, sequential access is going to be better though.
But for a mechanical drive, you'll get much better throughput on sequential reads than random reads, even with command queuing. I think earlier discussion showed it wasn't very effective in this case, and taking 6x the space for a marginal benefit for the small % of users with mechanical drives isn't worthwhile...
If they fill your hard drive, you're less likely to install other games. If you see a huge install size, you're less likely to uninstall with plans to reinstall later, because that'd take a long time.
>If they gave the filesystem a list of files to fetch in parallel (async file IO)
This does not work if you're doing tons of small IO and you want something fast.
Let's say we're on an HDD with 200 IOPS and we need to read 3000 small files scattered randomly across the drive.
Well, at minimum this is going to take 15 seconds (3000 ÷ 200), plus any additional seek time.
Now, let's say we zip those files up into a solid archive. You'll read it in half a second. The problem comes in when different levels each need a different set of 3000 files; then you end up duplicating a bunch of stuff.
Now, where this typically falls apart for modern game assets is that they are getting very large, which largely negates seek time as a factor.
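Back-of-envelope version of that comparison (the 16 KB average file size and 100 MB/s sequential rate are assumptions picked to match the half-second figure):

    files = 3000
    hdd_iops = 200          # random reads per second on a typical HDD
    avg_size = 16 * 1024    # assumed average small-file size, in bytes
    seq_rate = 100e6        # assumed sequential throughput, bytes/s

    random_secs = files / hdd_iops               # = 15.0 s
    archive_secs = files * avg_size / seq_rate   # ~0.5 s for ~47 MB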
The technique has the most impact on games running off physical disc.
It's a well-known technique, but it happened not to be useful for their use case.
"97% of the time: premature optimization is the root of all evil."
The idea is to duplicate assets so loading a "level" is just sequential reading from the file system. It's required on optical media and can be very useful on spinning disks too. On SSDs it's insane. The logic should've been the other way around: do a speed test on startup and offer to "optimise for spinning media" if the performance metrics look like it would help.
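A rough sketch of what that startup check could look like (entirely hypothetical: time a few scattered 4K reads on a large existing file and treat high per-read latency as a sign of spinning media; a real test would also need to defeat the OS page cache):

    import os, random, time

    def looks_like_hdd(path, samples=20):
        # assumes 'path' points at a large file, e.g. one of the game's archives
        size = os.path.getsize(path)
        fd = os.open(path, os.O_RDONLY)
        t0 = time.perf_counter()
        for _ in range(samples):
            os.lseek(fd, random.randrange(0, size - 4096), os.SEEK_SET)
            os.read(fd, 4096)
        os.close(fd)
        avg = (time.perf_counter() - t0) / samples
        return avg > 0.002  # >2 ms per random read suggests a seek penalty (threshold is a guess)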
If the game was ~20GB instead of ~150GB, almost no player with the required CPU+GPU+RAM combination would be forced to put it on an HDD instead of an SSD.
This idea of one continuous block per level dates back to the PS1 days.
Hard drives are much, much faster than optical media - on the order of 80 seeks per second and 300 MB/s sequential versus, like, 4 seeks per second and 60 MB/s sequential (for DVD-ROM).
You still want to load sequential blocks as much as possible, but you can afford to have a few. (Assuming a traditional engine design, no megatextures etc) you probably don't want to load each texture from a separate file, but you can certainly afford to load a block of grass textures, a block of snow textures, etc. Also throughput is 1000x higher than a PS1 (300 kB/s), so you can presumably afford to skip parts of your sequential runs.
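Plugging those figures into a simple load-time model (the block count and level size below are made up for illustration):

    seeks_per_s = {"hdd": 80, "dvd": 4}
    mb_per_s    = {"hdd": 300, "dvd": 60}

    def load_secs(device, blocks, total_mb):
        # one seek per sequential block, then pure streaming
        return blocks / seeks_per_s[device] + total_mb / mb_per_s[device]

    # e.g. a 600 MB level split into 50 asset blocks (grass, snow, ...)
    print(load_secs("hdd", 50, 600))  # ~2.6 s
    print(load_secs("dvd", 50, 600))  # ~22.5 s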
The post stated that the duplication was believed to improve loading times on computers with HDDs rather than SSDs.
Which is true. It's an old technique going back to CD-based game consoles, to avoid seeks.
Is it really possible to control file locations on an HDD via the Windows NTFS API?
Key word is "believed". It doesn't sound like they actually benchmarked.
There is nothing to believe. Random 4K reads on an HDD are slow.
Who cares? I've installed every graphically intensive game on SSDs since the original OCZ Vertex was released.
Their concern was that one person loading from an HDD could slow down level loading for the whole squad, even for the players on SSDs, so they used a very normal, time-tested optimisation technique to prevent that.