Comment by pcwalton
9 months ago
No, the original article said that you don't get parallelism from Bevy in practice:
> Unfortunately, after all the work that one has to put into ordering their systems it's not like there is going to be much left to parallelize. And in practice, what little one might gain from this will amount to parallelizing a purely data driven system that could've been done trivially with data parallelism using rayon.
It's not saying "yes, you get parallelism, but I don't need the performance"; it's claiming that in practice you don't get (system-level) parallelism at all. That's at odds with my experience.
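For reference, the rayon-style data parallelism the article is contrasting against looks roughly like this (a minimal sketch; the component types and update step are invented for illustration):

```rust
use rayon::prelude::*;

// Hypothetical component data stored in contiguous slices.
struct Position { x: f32, y: f32 }
struct Velocity { x: f32, y: f32 }

fn update_positions(positions: &mut [Position], velocities: &[Velocity], dt: f32) {
    // rayon splits the zipped slices across its worker threads;
    // this parallelizes *within* one "system" rather than across systems.
    positions
        .par_iter_mut()
        .zip(velocities.par_iter())
        .for_each(|(pos, vel)| {
            pos.x += vel.x * dt;
            pos.y += vel.y * dt;
        });
}
```

That only buys you parallelism inside a single system, though; Bevy's executor additionally runs independent systems concurrently, which is the part the article claims you lose in practice.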
The article is not saying that Bevy does not parallelize, but that the unpredictability of parallelism (both in ordering and in timing) forces the developer to add enough dependency constraints that there is not much left to parallelize.
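Concretely: once you have pinned the order down to make behavior deterministic, the schedule degenerates into a chain (a minimal sketch against a recent Bevy schedule API; the systems here are empty placeholders):

```rust
use bevy::prelude::*;

fn read_input() { /* ... */ }
fn step_physics() { /* ... */ }
fn resolve_collisions() { /* ... */ }
fn update_animations() { /* ... */ }

fn main() {
    App::new()
        .add_plugins(DefaultPlugins)
        // Each explicit ordering edge removes an opportunity for the
        // executor to run systems concurrently; a fully chained schedule
        // is effectively serial no matter how many cores you have.
        .add_systems(
            Update,
            (read_input, step_physics, resolve_collisions, update_animations).chain(),
        )
        .run();
}
```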
The fact that 100% of the CPU is being used, and multiple systems are executing in parallel, shows otherwise.
Both the author and I agree with that, but that was not the point:
> the unpredictability of parallelism (both in ordering and in timing) forces the developer to add [...] dependency constraints
The fact that it is possible to write a benchmark that avoids this problem proves nothing about whether larger games can avoid it too.
Hitting 100% CPU means nothing unless those cores are doing things you actually want them to do.
To be fair, you've posted a toy example. Real games are often chains of dependent systems, and as complexity increases, clean threading opportunities decrease.
So while it's nice in theory, in practice it often doesn't add as much performance as you'd expect.