Comment by Animats

3 months ago

Another failed game project in Rust. This is sad.

I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.

Ecosystem problems:

The Rust 3D game dev user base is tiny.

Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.

The lower levels are buggy and have a lot of churn

The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard-to-find bugs. There just aren't enough users to wring out the bugs.

Also, too many different crates want to own the event loop.

These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.

Language problems:

Back-references are difficult

A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.
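
Roughly, the Rc/Weak version of the pattern looks like this (a minimal sketch; all names are made up):

    use std::cell::RefCell;
    use std::rc::{Rc, Weak};

    struct Parent {
        children: RefCell<Vec<Rc<Child>>>,
    }

    struct Child {
        parent: Weak<Parent>, // weak back-reference, so no Rc cycle leaks
    }

    fn main() {
        let a = Rc::new(Parent { children: RefCell::new(Vec::new()) });
        let b = Rc::new(Child { parent: Rc::downgrade(&a) });
        a.children.borrow_mut().push(b.clone());

        // B finds A again at runtime; upgrade() can fail if A is gone.
        if let Some(a_again) = b.parent.upgrade() {
            println!("A has {} child(ren)", a_again.children.borrow().len());
        }
    }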

There are three common workarounds:

- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.

- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround. (A sketch of this pattern follows the list.)

- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.

Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they would require considerably more compile-time checking machinery and further design work. Basic concept: works like "Rc::Weak" and "upgrade", with compile-time checking of overlapping upgrade scopes to ensure no "upgrade" ever fails.

"Is-a" relationships are difficult

Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.
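
A small illustration of the point (hypothetical types): a trait can declare behavior, but any per-object data must be re-declared in every implementing struct and exposed through accessor methods.

    // Traits can only declare methods; the `name` data itself must be
    // stored (and duplicated) in every implementing type.
    trait Entity {
        fn name(&self) -> &str;
    }

    struct Player { name: String }
    struct Monster { name: String }

    impl Entity for Player {
        fn name(&self) -> &str { &self.name }
    }

    impl Entity for Monster {
        fn name(&self) -> &str { &self.name }
    }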

[1] https://www.animats.com/sharpview/index.html

I should caveat my remarks by noting that although I have studied the Rust specification, I have not written a line of Rust code.

I was quite intrigued by the borrow checker, and set about learning about it. While D cannot be retrofitted to require a borrow checker everywhere, it can be enhanced with an opt-in one. A borrow checker has nothing tying it to Rust's syntax, so it should work.

So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.

Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.

In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.

It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.

I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.

I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/

I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.
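
As a rough illustration of #1 and #3 in Rust terms (D's defaults behave similarly; #2 shows up as a compile-time guarantee in both languages):

    fn main() {
        let xs = [10, 20, 30];

        // #1: bounds are checked; xs[3] would panic rather than corrupt
        // memory, and .get() turns the check into an Option.
        assert!(xs.get(3).is_none());

        // #3: iterate over the slice instead of doing pointer arithmetic.
        let sum: i32 = xs.iter().sum();
        println!("sum = {sum}");
    }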

  • As discussed multiple times, I see automatic resource management (written this way on purpose), coupled with effects/linear/affine/dependent types for low-level coding, as the way to go.

    At least until we get AI driven systems good enough to generate straight binaries.

    Rust is to be celebrated for bringing affine types into the mainstream, but it doesn't need to be the only way; productivity and performance can be combined in the same language.

    The way Ada, D, Swift, Chapel, Linear Haskell, OCaml effects and modes, are being improved, already show the way forward.

    Then there is the whole world of formal verification and dependently typed languages, but that goes even beyond Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.

  • So in D, is it now natural to mix borrow checking and garbage collection? I think some kind of "gradual memory management" is the holy grail, but like gradual typing, there are technical problems

    The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness

    ---

    So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?

    Compared with GC, borrow checking affects every function signature

    Compared with manual memory management, GC also affects every function signature.

    IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there

    The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.

    Likewise, type safety is a global property of a program

    ---

    (good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)

    • > So in D, is it now natural to mix borrow checking and garbage collection?

      I think "natural" is a bit loaded, there is native support in the frontend for doing both. You have to go out of your way to annotate functions with @live and it is still experimental(https://dlang.org/spec/ob.html). The garbage collection is natural and happens if you do nothing, but you can turn it off with proper annotations like @nogc(https://dlang.org/spec/function.html#nogc-functions) or using betterC(https://dlang.org/spec/betterc.html). There is also @safe, @system and @trusted(https://dlang.org/spec/memory-safe-d.html).

      So natural is a stretch at the moment, but you can use all kinds of different techniques; what is needed is more community and library standardization around some solutions.

    • > is it now natural to mix borrow checking and garbage collection?

      D is as memory safe as Rust is, when you use the garbage collector to allocate/free memory. If you don't use the GC in D, then there's a risk from:

          * double frees
          * memory leaks
          * not pairing the allocation with free'ing
      

      Those three are what the borrow checker handles.

      In other words, with D, there is no point to using the borrow checker if one is using D's GC for memory management.

      You can mix and match using the GC or manual memory allocation however it makes sense for your program. It is normal for D programmers to use both.

      1 reply →

    • > "gradual memory management" is the holy grail

      I don't think gradual types are as much of a holy grail as you make them out to be. In gradual typing, if I recall correctly, there was a large overhead when communicating between typed and untyped parts.

      But let's say gradual memory management is perfect; you still have to keep in mind the costs of having GC + borrow checking.

      First thing, rather than focusing on perfecting GC or borrow checking, you divert your focus.

      Second, you introduce an ecosystem split, with some libraries supporting GC and others supporting non-GC. E.g., you make games in C# and want to be careful about avoiding the GC: good luck finding fast enough non-GC libraries.

      1 reply →

  • I agree with you.

    For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.

    And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.

  • > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

    #4: safer unions/enums. I do hope D gets tagged unions/pattern matching sometime in the future. I know about std.sumtype, but that's nowhere close to what Rust offers.

  • > So I implemented a borrow checker for D...

    D's implementation of a borrow checker is very intriguing, in terms of possibilities and of putting it back into the context of a tool rather than the "be-all, end-all".

    > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

    This speaks volumes from such an experienced and accomplished programmer.

  • Hey, thank you for spreading the joy of the borrow checker beyond Rust; awesome stuff, sounds very interesting, challenging, and useful!

    One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?

    Cheers!

    • Personally, I frankly do not need the borrow checker. I have been writing manual memory management code for so long I have simply internalized how to avoid having problems with it. I've been called arrogant for saying this, but it's true.

      But I still like the borrow checker style of programming because it makes the code easier to understand.

      I find it convenient in the D compiler implementation to use the GC for the AST memory management, as the algorithms that manipulate it are easier if they needn't concern themselves with memory management. A borrow checker approach doesn't fit it comfortably, either.

      Many of the data structures persist to the end of the program, as a compiler is a batch program. No memory management strategy is even necessary for those.

      1 reply →

  • > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

    I think these are generally considered table stakes in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.

I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees Rust, or even C++ smart pointers, provide.

  • > It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees Rust, or even C++ smart pointers, provide.

    I've gone back and forth on this, myself.

    I wrote a custom b-tree implementation in Rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C: each internal node / leaf is a raw allocation on the heap.

    Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, Rust has really bad syntax for interacting with raw pointers.

    Recently I rewrote the b-tree to simply use a vec of internal nodes and a vec of leaves. References became array indexes (integers). The resulting code is completely safe Rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)

    I think this is indeed peak Rust.

    It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.

    Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!
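
    A rough sketch of that layout (field names invented for illustration; the real tree is more involved):

        // Two flat Vecs instead of per-node heap allocations; all "pointers"
        // are plain indices (u32 keeps them small and cache-friendly).
        struct Internal {
            parent: Option<u32>, // back-reference up the tree
            children: Vec<u32>,  // indices of child nodes
        }

        struct Leaf {
            parent: u32, // index into `internals`
            items: Vec<u64>,
        }

        struct BTree {
            internals: Vec<Internal>,
            leaves: Vec<Leaf>,
        }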

    • > Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is better cache locality.)

      Cache locality matters, but so does having less allocator pressure. Use 32-bit unsigned ints as indices, and you get improvements on that as well.

      > The original version of my b-tree works just like how you'd implement it in C: each internal node / leaf is a raw allocation on the heap.

      I'd always try to avoid that type of allocation pattern in C++, FWIW :-).

    • > Recently I rewrote the b-tree to simply use a vec of internal nodes

      Doesn't this also require you to correctly and efficiently implement (equivalents of C's) malloc() and free()? IIUC your requirements are more constrained, in that malloc() will only ever be called with a single block size, meaning you could just maintain a stack of free indices -- though if tree nodes are comparable in size to integers this increases memory usage by a significant fraction.

      (I just checked and Rust has unions, but they require unsafe. So, on pain of unsafe, you could implement a "traditional" freelist-based allocator that stores the index of the next free block in-place inside the node.)
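
      (For the single-block-size case, an enum can also stand in for that union without unsafe, at the cost of a tag. A condensed sketch with made-up names:)

          // Each slot is either a live node or a link in the freelist.
          enum Slot<T> {
              Occupied(T),
              Free { next: Option<usize> }, // index of the next free slot
          }

          struct Slab<T> {
              slots: Vec<Slot<T>>,
              free_head: Option<usize>,
          }

          impl<T> Slab<T> {
              fn alloc(&mut self, value: T) -> usize {
                  match self.free_head {
                      Some(i) => {
                          if let Slot::Free { next } = &self.slots[i] {
                              self.free_head = *next; // pop the freelist
                          }
                          self.slots[i] = Slot::Occupied(value);
                          i
                      }
                      None => {
                          self.slots.push(Slot::Occupied(value));
                          self.slots.len() - 1
                      }
                  }
              }

              fn free(&mut self, i: usize) {
                  self.slots[i] = Slot::Free { next: self.free_head };
                  self.free_head = Some(i);
              }
          }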

      2 replies →

    • GC languages like C# don't need these tricks, because C# is feature-rich enough to do C++-style low-level programming, and it has value types.

    • Having gone full-in on this approach before, with some good success, it still feels wrong to me today. Contiguous storage may work for reasonable numbers of elements, but it potentially blocks a huge contiguous chunk of address space, especially for large numbers of elements.

      I probably say this because I still have to maintain 32-bit binaries (only 2G of address space), but it can potentially be problematic even on 64-bit machines (typically 256 TB of address space), especially if the data structure should be a reusable container with an unknown number of instances. If you don't know a reasonable upper bound on the number of elements beforehand, you have to reallocate later, or drastically over-reserve from the start. The former removes any pointer-stability guarantee; the latter is uneconomical - it may even be uneconomical on 64-bit, depending on how many instances of the data structure you plan to have. And having to reallocate when overflowing the preallocated space makes operations less deterministic with regard to execution time.

      1 reply →

    • > What it doesn't protect you from is use-after-free bugs.

      Yes. I've found that problem in index-allocated code.

      Also, when you do this, you need an allocator for the indexes. I've found bugs in those.

    • One can also use this array-index approach in C++, utilize the `at` methods, and have "memory safety guarantees", no?

    • > What it doesn't protect you from is use-after-free bugs.

      How about using hash maps/hash tables/dictionaries/however it's called in Rust? You could generate unique IDs for the elements rather than using vector indices.
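
      Presumably something like this (an illustrative sketch); a stale ID then misses cleanly instead of aliasing a reused slot:

          use std::collections::HashMap;

          struct Store {
              next_id: u64,
              entities: HashMap<u64, String>,
          }

          impl Store {
              fn insert(&mut self, e: String) -> u64 {
                  let id = self.next_id; // IDs are never reused
                  self.next_id += 1;
                  self.entities.insert(id, e);
                  id
              }
          }

          fn main() {
              let mut store = Store { next_id: 0, entities: HashMap::new() };
              let id = store.insert("player".to_string());
              store.entities.remove(&id);
              // The stale ID simply fails the lookup.
              assert!(store.entities.get(&id).is_none());
          }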

  • But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).

    • But in Rust you have to fight the borrow checker a lot, and sometimes concede, with complex referential stuff. I say this as someone who writes a good bit of Rust and enjoys doing so.

      40 replies →

    • You can't do possibly-erroneous pointer math on a C# object reference. You don't need to deal with the game life cycle AND the memory life cycle with a GC. In Unity they free the native memory when a game object calls Destroy() but the C# data is handled by the GC. Same with any plain C# objects.

      To say it's the same as using array indices is just not true.

      4 replies →

    • Yes, but regarding use of uninitialized/freed memory, neither GC nor memory safety really helps. Both "only" help with totally incidental, unintentional, small-scale violations.

> The lower levels are buggy and have a lot of churn

> The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan

The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.

The barrier to contributing to toolkits is usually pretty high too: most of them focus on supporting a variety of open source and proprietary platforms. If you want to improve something that requires an API change, you need to understand the details of all the other platforms — you can't just make a change for a single one.

Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.

> Back-references are difficult

> A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.

When I code Rust, I'm always hesitant to use an Arc because it adds overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit, so we forget about it.

We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.

  • Arc is a very slow and primitive tool compared to a GC. If you are writing Arc everywhere, you would probably have better performance switching to a JVM language, C#, or Go.

    • This is incorrect if you are using Rc exclusively for back references. Since the back reference is weak, the reference count is only incremented once, when you are creating the datatype. The problem isn't that it's slow; it's that it consumes extra memory for bookkeeping.

    • I warned that one extreme (being afraid to use Arc when it's necessary) is bad.

      I agree with you: the other extreme (using Arc everywhere) is also bad.

      There's a sweet middle spot of using it just when strictly necessary.

  • Objects are cheaper than Arc<T>. Otherwise using GC would suck a lot more than it does today (for certain types of data structures like trees accessed concurrently it is also a massive optimization).

    Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.

  • The "if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc" is not accurate. Rust Arc involves atomic operation and its preformance can greatly degrade when the reference count is being mutated by many threads. See https://pkolaczk.github.io/server-slower-than-a-laptop/

    Java, C# and Go don't use atomic reference counting and don't have such overhead.

We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.

> Migration - Bevy is young and changes quickly.

We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.

> The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.

This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in Javascript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.

  • > We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.

    I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.

> These crates also get "refactored" every few months, with breaking API changes

I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.

  • I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).

    It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.

    So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.

    • I'm finding most of the modern React ecosystem to be made of liabilities.

      The constant update cycles of some libraries (hello Router) are problematic in themselves, but there are too many fashionable things that sound very good in theory but end up being a huge problem when used in fast-moving projects, like headless UI libraries.

    • "I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use"

      Well, I always thought it is the key in every kind of development, JS or else.

  • I wish for ecosystems that would let maintainers ship deprecations with auto-fixing lint rules.

    • Yeah, not only is the structure of business workflows often resistant to mature software dev workflows, developers themselves increasingly lack the discipline, skills or interest in backwards compatibility or good initial designs anyway. Add to this the trend that fast changing software is actually a decent strategy to keep LLMs befuddled, and it’s probably going to become an unofficial standard to maintain support contracts.

      On that subject, ironically, code gen by AI for AI-related work is often the least reliable due to fast churn. Langchain is a good example of this and also kind of funny: they suggest / integrate gritql for deterministic code transforms rather than using AI directly: https://python.langchain.com/docs/versions/v0_3/.

      Overall, mastering things like gritql, ast-grep, and CST tools for code transforms still pays off. For large code bases, no matter how good AI gets, it is probably better to get them to use formal/deterministic tools like these rather than trust them with code transformations more directly and just hope for the best.

      1 reply →

    • Modelica, which is a DSL for modelling DAE systems, has a facility for automated conversions. You can provide a script that automatically modifies users' code when they upgrade to a newer version of your lib, or prints a message if automatic migration is not possible.

      It is very strange that more mainstream languages do not have such features (and I am not talking about 3rd party tools; in Modelica conversions are part of the language spec).

    • Kotlin has some limited support for that:

          @Deprecated("use the new API", ReplaceWith("new expression"))
      

      Only works for simple cases but better than nothing. For more, there's OpenRewrite.

  • I’ve found such changes can actually be a draw at first. “Hey look, progress and activity!” Doubly so as a primarily C++ dev frustrated with legacy choices in the STL. But as you and others point out, living with these changes is a huge pain.

One thing that struck me was the lavish praise heaped on the ECS of the game engine being migrated away from; this is extremely common.

I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).

People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.

And some critical Rust issues for games are not dealt with: on Tiny Glade, the devs hit a libgcc issue on the native ELF/Linux build, and we discovered that the Rust toolchain for ELF/Linux targets does not support static linking of libgcc (which is mandatory for games and any closed-source binary). The issue has been open on Rust's GitHub since 2015...

But the real issue is that game devs do not know that the GNU toolchain (and the LLVM-based one) defaults to building open-source software for ELF/Linux targets, and that there is more ABI-related work to do for game binaries on those platforms.

Not a game dev, but based on what I do know of it, some of this sounds to me like it's just a severe mismatch between Rust's memory model and the needs of games.

Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.

But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dog-eared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.

Great write-up. I do the array indexing too, and get runtime errors from misindexing more often than I'd like to admit!

I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.

  • I've always thought about this. In my mind there are two ways a language can guarantee memory safety:

    * Simply check all array accesses and pointer dereferences at runtime, and panic/throw an exception/etc. if we are out of bounds or doing something wrong.

    * Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.

    Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.

    The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.

    • Rust has plenty of constructs that do runtime checks in part to get around the fact that not everything can be expressed in a manner that the borrow checker can understand at compile time. IMO Rust should treat the array/index case in the same manner as these and provide a standard interface that prevents "use after free" and so on.
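
      One common shape for such an interface is a generational index, roughly what crates like slotmap and generational-arena already provide. A condensed sketch:

          #[derive(Clone, Copy, PartialEq)]
          struct Handle { index: usize, generation: u64 }

          struct Arena<T> {
              slots: Vec<(u64, Option<T>)>, // (generation, value)
          }

          impl<T> Arena<T> {
              fn insert(&mut self, value: T) -> Handle {
                  // Simplified: always appends; a real arena reuses free slots.
                  self.slots.push((0, Some(value)));
                  Handle { index: self.slots.len() - 1, generation: 0 }
              }

              fn remove(&mut self, h: Handle) {
                  if let Some(slot) = self.slots.get_mut(h.index) {
                      if slot.0 == h.generation {
                          slot.0 += 1; // invalidate outstanding handles
                          slot.1 = None;
                      }
                  }
              }

              fn get(&self, h: Handle) -> Option<&T> {
                  // A stale handle fails the generation check: "use after
                  // free" becomes a recoverable None instead of wrong data.
                  let slot = self.slots.get(h.index)?;
                  if slot.0 == h.generation { slot.1.as_ref() } else { None }
              }
          }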

  • > I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.

    Yes.

    Three months ago, when the Rust graphics stack achieved sync, I wrote a congratulatory note.[1]

        Everybody is in sync!
    
            wgpu 24
            egui 0.31
            winit 0.30
    
        all play well together using the crates.io versions. No patch overrides! Thanks, everybody.
    

    Wgpu 25 is now out, but the others are not in sync yet. Maybe this summer.

    [1] https://www.reddit.com/r/rust_gamedev/comments/1iiu3mr/every...

> These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.

This was a problem with early versions of Scala as well, exacerbated by the language and core libs shifting all the time. It got so difficult to keep things up to date with all the cross compatibility issues that the services written in it ended up stuck on archaic versions of old libraries. It was a hard lesson in if you're doing a non-hobby project, avoid languages and communities that behave like this until they've finally stabilized.

This is probably brought up whenever an article mentions “wasted time”, but I wonder what percentage of side and “main” software projects “fail”. We have to define side vs. main and what it means to fail (I would imagine failure looks different for each), but anecdotally, none of my side projects have made money, yet at least one I would call “done”, so… success?

A fear I have with larger side projects is the notion that it could all be for nought, though I suppose that’s easily mitigated by simply keeping side projects small, and iterative if necessary. Start with an appropriately sized MVP, and so on.

> Nobody has really pushed the performance issues.

This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.

[1]: https://bevyengine.org/news/bevy-0-16/

  • I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.

  • Wonderful work!

    ...although the fact that a 3x speed improvement was available kind of proves their point, even if it may be slightly out of date.

    • Most game engines other than the latest in-house AAA engines are leaving comparable levels of performance on the table on scenes that really benefit from GPU-driven rendering (that's not to say all scenes, of course). A Google search for [Unity drawcall optimization] will show how important it is. GPU-driven rendering allows developers to avoid having to do all that optimization manually, which is a huge benefit.

      1 reply →

Why is this sad? He's realized that the best language is C# and the best platform for games is Unity! This is progress, and that's good.

> A owns B, and B can find A

I think you should think less like Java/C# and more like a database.

If you have a Comment object that has a parent object, you need to store the parent as a 'reference', because you can't embed the entire parent.

So I'll probably use Box here to refer to the parent

  • ?? the whole point of Box<T> is to be an owning reference; you can’t have multiple children refer to the same parent object if you use a Box

  • If you use Box to refer to the parent, then the parent cannot own the child (unless using things like Arc<Mutex<>>).

> Someone else set out to do something similar in C#/Unity and had something going in less than two years.

But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.

  • For a while now Unity has had an incremental garbage collector, where you pay a small amount of time per frame instead of introducing large pauses every time the GC kicks in.

    Even without the incremental GC it's manageable, and it's just part of optimising the game. It depends on the game, but you can often get down to 0 allocations per frame by making use of pooling and the no-alloc APIs in the engine.

    You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.

    Obviously it's more work than not having to deal with these issues, but for game developers it's probably a more familiar topic than working with the borrow checker, and critically it allows for quicker iteration and prototyping.

    Finding the fun and time to market are top priority for games development.

    • At this point I really wonder why anyone would use Rust for anything other than low-level system tools/libraries or kernel development ...

      Anything with a graphical shell is probably better written in a GC'd language, but I'd love to hear some counter-arguments.

      3 replies →