Comment by usrusr

2 months ago

When I see numbers like that I always wonder about the probability of the engines truly running the same feature set: not because I doubt that an alternative implementation can be faster (I don't), and certainly not because I'd want to accuse anyone of trickery, but because those software stacks are just so very, very complex and adaptive. Minor (but perhaps costly in fps) adaptations of the engine's parametrization to the runtime environment can be very difficult to spot by looking at the output. I'd imagine the relationship between the code and what's actually happening must be almost as wild and unpredictable as what we see in epigenetics.

How would the engines be running differently? It’s quite literally the same Windows build of the game running on Wine/Proton.

The simpler explanation is that Windows is poorly optimized and runs a ton of inefficient crap in the background that takes cycles away from the CPU and maybe even the GPU.

  • I can imagine the feature set the native Windows driver advertises is different from the feature set DXVK reports. If DXVK doesn't support or implement certain features, and the engine then doesn't request them (or the API call translates to a no-op), you could quite easily end up in a scenario where graphical fidelity is lowered, accidentally boosting battery life and performance. There were bugs where certain games wouldn't render certain shadows, for instance, which might go unnoticed but still offer an accidental performance uplift (sketch 1 at the end of this thread shows the kind of feature query involved).

    • I don’t see how something like that would happen consistently across multiple titles.

  • My understanding is that there are two performance differences here: one is Vulkan outperforming DirectX, and the other is reduced overhead on Linux vs. Windows.

    The Vulkan difference can actually be achieved on Windows as well, by running DXVK on Windows.

    The overhead difference shows up most in the lighter games, like Dead Cells and Hades: a fixed slice of per-frame overhead eats a much larger share of a short frame than of a long one. It's why we see such massive increases in battery life for those games (sketch 2 at the end of this thread puts rough numbers on this).

  • I think one of the answers here may be nVidia-related. CP2077 is one of those games with a ton of "optimized for/by nVidia" magic, with the nVidia driver injecting a lot of its own GPU/CPU code adjustments into the game. Given that the Linux nVidia driver is a fraction of the size of the Windows driver, I've long assumed there's a lot less "optimized" code being sent to the Linux side than to the Windows side. It's interesting to wonder whether that's because Linux needs it less, or because Windows is still the flagship for "most features"/"best graphics" at nVidia, so DirectX gets all the attention.

    • This device has AMD for both CPU and GPU, so nVidia-optimized stuff would not be a factor.

  • Even on the same builds, I could see some calls being made where the translation, while functionally equivalent, is not quite computationally equivalent.

    It doesn't even have to be worse, just different (sketch 3 at the end of this thread illustrates the idea).

That would be plausible if the effect were only seen in one game, or a small handful. Instead it's happening across the board: almost all games tested show at least some gain, and many show gains comparable to CP2077.

At that point it has to be a platform/stack difference.
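Sketch 1 (the feature-set point above): a minimal C++ sketch of how an engine asks a D3D11 implementation what it supports, assuming a Windows SDK build environment. The particular query (D3D11_FEATURE_THREADING) is an arbitrary illustrative pick, not something any specific game is known to branch on; the point is that DXVK answers these queries from its own d3d11.dll, so the answers can legitimately differ from the native driver's.

    #include <d3d11.h>
    #include <cstdio>
    #pragma comment(lib, "d3d11.lib")

    int main() {
        // Create a device against whatever D3D11 implementation is loaded:
        // the native Windows driver stack, or DXVK's d3d11.dll if present.
        ID3D11Device* dev = nullptr;
        D3D_FEATURE_LEVEL fl = {};
        if (FAILED(D3D11CreateDevice(nullptr, D3D_DRIVER_TYPE_HARDWARE,
                                     nullptr, 0, nullptr, 0,
                                     D3D11_SDK_VERSION, &dev, &fl, nullptr)))
            return 1;

        // Ask what the implementation claims to support.
        D3D11_FEATURE_DATA_THREADING threading = {};
        dev->CheckFeatureSupport(D3D11_FEATURE_THREADING,
                                 &threading, sizeof(threading));

        std::printf("feature level: 0x%04x\n", fl);
        std::printf("driver command lists: %s\n",
                    threading.DriverCommandLists ? "yes" : "no");

        // An engine that branches on answers like these takes a different
        // code path under DXVK than under the native driver, with no
        // visible error, just different per-frame work.
        dev->Release();
        return 0;
    }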
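Sketch 2 (why fixed overhead hits light games hardest): a little arithmetic, with the 1 ms of per-frame background cost being an assumed figure for illustration, not a measurement from any of the benchmarks discussed here.

    #include <cstdio>

    int main() {
        // Hypothetical fixed per-frame cost of platform background work.
        const double overhead_ms = 1.0;

        // A heavy game (~30 fps) and a light game (~240 fps), by frame time.
        const double frame_ms[] = {33.3, 4.2};

        for (double f : frame_ms) {
            double fps_clean  = 1000.0 / f;             // no overhead
            double fps_loaded = 1000.0 / (f + overhead_ms);
            double loss_pct   = 100.0 * (1.0 - fps_loaded / fps_clean);
            std::printf("%6.1f fps -> %6.1f fps (%4.1f%% lost)\n",
                        fps_clean, fps_loaded, loss_pct);
        }
        return 0;
    }

The same millisecond costs the heavy game under 3% of its frame rate but the light game almost 20%, which lines up with the outsized battery-life gains on Dead Cells and Hades.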
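Sketch 3 (functionally vs. computationally equivalent): plain CPU code standing in for what a translation layer does with GPU commands. Neither function is real DXVK code; the staging-buffer pattern is just the general shape of the mismatch.

    #include <cstring>
    #include <vector>

    // "Native" path: one direct copy from source to destination.
    // Both functions assume dst is at least as large as src.
    void upload_direct(std::vector<float>& dst,
                       const std::vector<float>& src) {
        std::memcpy(dst.data(), src.data(), src.size() * sizeof(float));
    }

    // "Translated" path: identical final contents, but routed through a
    // staging buffer because the target API has no direct equivalent of
    // the original upload call. Twice the memory traffic, same result.
    void upload_via_staging(std::vector<float>& dst,
                            const std::vector<float>& src) {
        std::vector<float> staging(src);                  // extra copy
        std::memcpy(dst.data(), staging.data(),
                    staging.size() * sizeof(float));      // final copy
    }

    int main() {
        std::vector<float> src(1024, 1.0f), a(1024), b(1024);
        upload_direct(a, src);
        upload_via_staging(b, src);
        return a == b ? 0 : 1;  // identical results, different cost
    }

Multiply that kind of small per-call mismatch across the thousands of calls a frame makes, and the total can land either above or below the native path.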