Comment by pcwalton
9 months ago
Here's a trace of a Bevy demo: https://i.imgur.com/oXUxC2h.png
You can see that all the CPUs are being maxed out. This actually does result in significant FPS increases. Does it matter for every game? No. But it does result in better performance!
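For the record, here's the mechanism at work, as a minimal illustrative sketch (not the demo above; written against roughly Bevy 0.13-era APIs, so names like delta_seconds may differ in newer releases). The two systems touch disjoint data, so Bevy's multi-threaded executor is free to run them on different cores in the same frame, with no annotations from the user:

```rust
use bevy::prelude::*;

#[derive(Component)]
struct Enemy { speed: f32 }

#[derive(Component)]
struct SpriteTimer(Timer);

// Reads Time, writes Transform.
fn move_enemies(time: Res<Time>, mut q: Query<(&Enemy, &mut Transform)>) {
    for (enemy, mut transform) in &mut q {
        transform.translation.x += enemy.speed * time.delta_seconds();
    }
}

// Reads Time, writes SpriteTimer. No data overlap with move_enemies,
// so the scheduler may run both systems concurrently.
fn tick_sprites(time: Res<Time>, mut q: Query<&mut SpriteTimer>) {
    for mut timer in &mut q {
        timer.0.tick(time.delta());
    }
}

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        .add_systems(Update, (move_enemies, tick_sprites))
        .run();
}
```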
>> but the ability of engines like Bevy to analyze system dependencies and automatically scale to multiple CPUs is a big deal
>> Is it? The article addresses that, and basically calls it a pointless feature
> You can see that all the CPUs are being maxed out.
You're missing the forest for the trees - the poster above basically said "seeing all the CPUs being maxed out is a pointless feature" and you reply with "but see, all the CPUs are being maxed out".
You're literally ignoring the complaint and replying with marketing.
No, the original article said that you don't get parallelism from Bevy in practice:
> Unfortunately, after all the work that one has to put into ordering their systems it's not like there is going to be much left to parallelize. And in practice, what little one might gain from this will amount to parallelizing a purely data driven system that could've been done trivially with data parallelism using rayon.
It's not saying "yes, you get parallelism, but I don't need the performance"; it's claiming that in practice you don't get (system-level) parallelism at all. That's at odds with my experience.
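For reference, this is the kind of "trivial data parallelism with rayon" the article is pointing to; a minimal sketch with made-up types:

```rust
use rayon::prelude::*;

#[derive(Clone)]
struct Particle {
    pos: [f32; 3],
    vel: [f32; 3],
}

// A purely data-driven update: every element is independent, so rayon
// can split the slice across its thread pool with one method call.
fn integrate(particles: &mut [Particle], dt: f32) {
    particles.par_iter_mut().for_each(|p| {
        for i in 0..3 {
            p.pos[i] += p.vel[i] * dt;
        }
    });
}

fn main() {
    let mut particles = vec![
        Particle { pos: [0.0; 3], vel: [1.0, 2.0, 3.0] };
        1_000
    ];
    integrate(&mut particles, 1.0 / 60.0);
    println!("{:?}", particles[0].pos);
}
```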
The article is not saying that Bevy does not parallelize, but that the unpredictability of parallel execution (in both ordering and timing) forces the developer to add so many dependency constraints that there is not much left to parallelize.
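That argument looks like this in code; a sketch with hypothetical system names. Once you reach for .chain() (or .after()) to make ordering deterministic, each system must finish before the next starts, and the executor has nothing left to overlap:

```rust
use bevy::prelude::*;

// Hypothetical gameplay systems; the point is the ordering constraint.
fn read_input() {}
fn apply_movement() {}
fn resolve_collisions() {}
fn update_camera() {}

fn main() {
    App::new()
        // .chain() serializes the tuple: a strict run-before dependency
        // between each pair of neighbors, so nothing here runs in parallel.
        .add_systems(
            Update,
            (read_input, apply_movement, resolve_collisions, update_camera).chain(),
        )
        .run();
}
```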
To be fair, you've posted a toy example. Real games are often chains of dependent systems, and as complexity increases, clean threading opportunities decrease.
So while it's nice in theory, in practice it often doesn't add as much performance as you'd expect.
The problem is that most gameplay code is inherently linear, and people have already gotten good at splitting the parallelizable work across threads. Serious physics engines (see Jolt) are already designed to run on a separate thread and distribute their work across multiple cores. With OpenGL or Vulkan, the bulk of the graphics driver runs on another thread, and the API you call just passes data to it. Rust's parallelism hasn't proven to be faster than C/C++'s, let alone less annoying to achieve.
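The hand-rolled offload described above might look like this; a minimal Rust sketch with made-up types, not any particular engine's API:

```rust
use std::sync::mpsc;
use std::thread;

// Per-frame data the game loop hands to the physics thread.
struct FrameInput {
    dt: f32,
}

fn main() {
    let (tx, rx) = mpsc::channel::<FrameInput>();

    // Physics runs on its own thread; the game loop just sends it data
    // each frame and keeps going.
    let physics = thread::spawn(move || {
        while let Ok(input) = rx.recv() {
            // ... step the physics world by input.dt ...
            let _ = input.dt;
        }
    });

    for _ in 0..3 {
        tx.send(FrameInput { dt: 1.0 / 60.0 }).unwrap();
    }

    drop(tx); // closing the channel lets the worker loop end
    physics.join().unwrap();
}
```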
I can confidently say that, among those who have tried both, the idea that parallelism in C/C++ is as easy to achieve as in Rust is very much a minority view. There's a reason nobody tried to parallelize CSS styling in a production browser before Stylo came along.
I'm talking about games specifically. I don't know much about the needs of web browsers.
The context of this article, and of my comment, is game development, not game performance or engine optimization, which are related but smaller aspects of the overall topic.
The way I interpret the claims is that Bevy is putting far too much focus on performance and multi-threading, when by far the most important thing for game development is letting the actual game developers iterate rapidly.
Bevy might be very fast and performant, but if that seems to have come at the cost of (or been prioritized over) features that make it easier to use in the ways game developers need, then the criticism may have merit. Whether that's true I don't know, but hopefully it explains why a response about how it can use lots of threads and make good use of many cores isn't seen as a good rebuttal to the criticisms being leveled.
That makes no difference if the game is boring.
You could say that about any game engine. Are you suggesting Bevy shouldn't try to optimise performance because perf and fun aren't correlated?
I'm saying that making it easy to experiment with different gameplay mechanics is far more important than making it as efficient as possible. Even more so in the case of small studios.