Comment by Animats

3 months ago

That's a good comment.

The difference is that I'm writing a metaverse client, not a game. A metaverse client is a rare beast about halfway between an MMO client and a web browser. It has to do most of the graphical things a 3D MMO client does. But it gets all its assets and gameplay instructions from a server.

From a dev perspective, this means you're not making changes to gameplay by recompiling the client. You make changes to objects in the live world while you're connected to the server. So client compile times (I'm currently at about 1 minute 20 seconds for a recompile in release mode) aren't a big issue.

Most of the level and content building machinery of Bevy or Unity or Unreal Engine is thus irrelevant. The important parts needed for performance are down at the graphics level. Those all exist for Rust, but they're all at the My First Renderer level. They don't utilize the concurrency of Vulkan or multiple CPUs. When you get to a non-trivial world, you need that. Tiny Glade is nice, but it works because it's tiny.

What does matter is high performance and reliability while content is coming in at a high rate and changing. Anything can change at any time, but usually doesn't. So cache type optimizations are important, as is multithreading to handle the content flood. Content is constantly coming in, being displayed, and then discarded as the user moves around the big world. All that dynamism requires more complex data structures than a game that loads everything at startup.

Rust's "fearless concurrency" is a huge win for performance. I have about 20 threads running, and many are doing quite different things. That would be a horror to debug in C++. In Rust, it's not hard.

(There's a school of thought that says that fast, general purpose renderers are impossible. Each game should have its own renderer. Or you go all the way to a full game engine and integrate gameplay control and the scene graph with the renderer. Once the scene graph gets big enough that (lights x objects) becomes too large to do by brute force, the renderer level needs to cull based on position and size, which means at least a minimal scene graph with a spatial data structure. So now there's an abstraction layering problem - the rendering level needs to see the scene graph. No one in Rust land has solved this problem efficiently. Thus, none of the four available low-level renderers scale well.

I don't think it's impossible, just moderately difficult. I'm currently looking at how to do this efficiently, with some combination of lambdas which access the scene graph passed into the renderer, and caches. I really wish someone else had solved this generic problem, though. I'm a user of renderers, not a rendering expert.)

Meta blew $40 billion on this problem and produced a dud virtual world, but some nice headsets. Improbable blew upwards of $400 million and produced a limited, expensive-to-run system. Metaverses are hard, but not that hard. If you blow some of the basic architectural decisions, though, you never recover.