Migrating away from Rust

3 months ago (deadmoney.gg)

Another failed game project in Rust. This is sad.

I've been writing a metaverse client in Rust for almost five years now, which is too long.[1] Someone else set out to do something similar in C#/Unity and had something going in less than two years. This is discouraging.

Ecosystem problems:

The Rust 3D game dev user base is tiny.

Nobody ever wrote an AAA title in Rust. Nobody has really pushed the performance issues. I find myself having to break too much new ground, trying to get things to work that others doing first-person shooters should have solved years ago.

The lower levels are buggy and have a lot of churn

The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan. Except for Vulkan, they've all had hard to find bugs. There just aren't enough users to wring out the bugs.

Also, too many different crates want to own the event loop.

These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.

Language problems:

Back-references are difficult

A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.

There are three common workarounds:

- Architect the data structures so that you don't need back-references. This is a clean solution but is hard. Sometimes it won't work at all.

- Put everything in a Vec and use indices as references. This has most of the problems of raw pointers, except that you can't get memory corruption outside the Vec. You lose most of Rust's safety. When I've had to chase down difficult bugs in crates written by others, three times it's been due to errors in this workaround.

- Use "unsafe". Usually bad. On the two occasions I've had to use a debugger on Rust code, it's been because someone used "unsafe" and botched it.
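As a sketch of the second workaround's failure mode (names are mine, not from any particular crate): a stale index stays inside the Vec's bounds checks, so it either panics or quietly reads the wrong element, but never corrupts memory.

```rust
// "Indices as references": handles are plain usize values into a Vec.
struct Node {
    value: i32,
}

fn main() {
    let mut nodes = vec![Node { value: 1 }, Node { value: 2 }];
    let handle = 1; // our "pointer" to the second node

    // Someone removes node 0; every index after it silently shifts down.
    nodes.remove(0);

    // Bounds checking keeps this memory-safe, but the handle is now stale:
    // here it falls off the end; in a bigger Vec it would alias another node.
    match nodes.get(handle) {
        Some(n) => println!("value: {}", n.value),
        None => println!("stale handle"), // what happens in this example
    }
}
```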

Rust needs a coherent way to do single ownership with back references. I've made some proposals on this, but they require much more checking machinery at compile time and better design work. Basic concept: it works like "Rc::Weak" and "upgrade", with compile-time checking of overlapping upgrade scopes to ensure that no "upgrade" ever fails.
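For concreteness, the Rc/Weak version of the pattern looks roughly like this (a minimal sketch; the struct and field names are mine):

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

// A owns B through a strong Rc; B finds A again through Weak + upgrade.
struct Parent {
    children: RefCell<Vec<Rc<Child>>>,
}

struct Child {
    parent: RefCell<Weak<Parent>>,
}

fn main() {
    let a = Rc::new(Parent { children: RefCell::new(Vec::new()) });
    let b = Rc::new(Child { parent: RefCell::new(Weak::new()) });

    // Wire up both directions; the back-reference is weak, so no cycle leaks.
    *b.parent.borrow_mut() = Rc::downgrade(&a);
    a.children.borrow_mut().push(Rc::clone(&b));

    // upgrade() succeeds while the owner is alive...
    assert!(b.parent.borrow().upgrade().is_some());

    // ...and returns None once the owner is gone. That run-time failure
    // case is what compile-time checking of upgrade scopes would rule out.
    drop(a);
    assert!(b.parent.borrow().upgrade().is_none());
}
```

The RefCell layer is part of the "unwieldy to set up" cost: both directions need interior mutability just to wire up the links.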

"Is-a" relationships are difficult

Rust traits are not objects. Traits cannot have associated data. Nor are they a good mechanism for constructing object hierarchies. People keep trying to do that, though, and the results are ugly.

[1] https://www.animats.com/sharpview/index.html

  • I'll caveat my remarks with this: although I have studied the Rust specification, I have not written a line of Rust code.

    I was quite intrigued with the borrow checker, and set about learning about it. While D cannot be retrofitted with a mandatory borrow checker, it can be enhanced with an opt-in one. A borrow checker has nothing tying it to the Rust syntax, so it should work.

    So I implemented a borrow checker for D, and it is enabled by adding the `@live` annotation for a function, which turns on the borrow checker for that function. There are no syntax or semantic changes to the language, other than laying on a borrow checker.

    Yes, it does data flow analysis, has semantic scopes, yup. It issues errors in the right places, although the error messages are rather basic.

    In my personal coding style, I have gravitated towards following the borrow checker rules. I like it. But it doesn't work for everything.

    It reminds me of OOP. OOP was sold as the answer to every programming problem. Many OOP languages appeared. But, eventually, things died down and OOP became just another tool in the toolbox. D and C++ support OOP, too.

    I predict that over time the borrow checker will become just another tool in the toolbox, and it'll be used for algorithms and data structures where it makes sense, and other methods will be used where it doesn't.

    I've been around to see a lot of fashions in programming, which is most likely why D is a bit of a polyglot language :-/

    I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

    The language can nail that down for you (D does). What's left are memory allocation errors. Garbage collection fixes that.

    • As discussed multiple times, I see automatic resource management (written this way on purpose), coupled with effects/linear/affine/dependent types for low-level coding, as the way to go.

      At least until we get AI driven systems good enough to generate straight binaries.

      Rust is to be celebrated for bringing affine types into mainstream, but it doesn't need to be the only way, productivity and performance can be made into the same language.

      The way Ada, D, Swift, Chapel, Linear Haskell, OCaml effects and modes, are being improved, already show the way forward.

      Then there are the formal verification and dependent-type languages, but those go beyond even Rust in what most mainstream developers are willing to learn, and the development experience is still quite rough.

    • So in D, is it now natural to mix borrow checking and garbage collection? I think some kind of "gradual memory management" is the holy grail, but like gradual typing, there are technical problems

      The issue is the boundary between the 2 styles/idioms -- e.g. between typed code and untyped code, you have either expensive runtime checks, or you have unsoundness

      ---

      So I wonder if these styles of D are more like separate languages for different programs? Or are they integrated somehow?

      Compared with GC, borrow checking affects every function signature

      Compared with manual memory management, GC also affects every function signature.

      IIRC the boundary between the standard library and programs was an issue -- i.e. does your stdlib use GC, and does your program use GC? There are 4 different combinations there

      The problem is that GC is a global algorithm, i.e. heap integrity is a global property of a program, not a local one.

      Likewise, type safety is a global property of a program

      ---

      (good discussion of what programs are good for the borrow checking style -- stateless straight-line code seems to benefit most -- https://news.ycombinator.com/item?id=34410187)

    • I agree with you.

      For me Rust was amazing for writing things like concurrency code. But it slowed me down significantly in tasks I would do in, say, C# or even C++. It feels like the perfect language for game engines, compilers, low-level libraries... but I wasn't too happy writing more complex game code in it using Bevy.

      And you make a good point, it's the same for OOP, which is amazing for e.g. writing plugins but when shoehorned into things it's not good at, it also kills my joy.

    • > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

      #4: safer unions/enums. I do hope D gets tagged unions and pattern matching sometime in the future. I know about std.sumtype, but that's nowhere close to what Rust offers.

    • > So I implemented a borrow checker for D...

      D's implementation of a borrow checker is very intriguing, in terms of possibilities and of putting it back into the context of a tool, not the "be all, end all".

      > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

      This speaks volumes from such an experienced and accomplished programmer.

    • Hey, thank you for spreading the joy of the borrow checker beyond Rust; awesome stuff, sounds very interesting, challenging, and useful!

      One question that came to mind as a single-track-Rust-mind kind of person: in D generally or in your experience specifically, when you find that the borrow checker doesn't work for a data structure, what is the alternative memory management strategy that you choose usually? Is it garbage collection, or manual memory management without a borrow checker?

      Cheers!

    • > I can also say confidently that the #1 method to combat memory safety errors is array bounds checking. The #2 method is guaranteed initialization of variables. The #3 is stop doing pointer arithmetic (use arrays and ref's instead).

      I think these are generally considered table stakes in a modern programming language? That's why people are/were excited by the borrow checker, as data races are the next prominent source of memory corruption, and one that is especially annoying to debug.

  • I saw a good talk, though I don't remember the name, that went over the array-index approach. It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.

    • > It correctly pointed out that by then, you're basically recreating your own pointers without any of the guarantees rust, or even C++ smart pointers, provide.

      I've gone back and forth on this, myself.

      I wrote a custom b-tree implementation in rust for a project I've been working on. I use my own implementation because I need it to be an order-statistic tree, and I need internal run-length encoding. The original version of my b-tree works just like how you'd implement it in C. Each internal node / leaf is a raw allocation on the heap.

      Because leaves need to point back up the tree, there's unsafe everywhere, and a lot of raw pointers. I ended up with separate Cursor and CursorMut structs which held different kinds of references to the tree itself. Trying to avoid duplicating code for those two cursor types added a lot of complex types and trait magic. The implementation works, and it's fast. But it's horrible to work with, and it never passed MIRI's strict checks. Also, rust has really bad syntax for interacting with raw pointers.

      Recently I rewrote the b-tree to simply use a vec of internal nodes, and a vec of leaves. References became array indexes (integers). The resulting code is completely safe rust. It's significantly simpler to read and work with - there's way less abstraction going on. I think it's about 40% less code. Benchmarks show it's about 25% faster than the raw pointer version. (I don't know why - but I suspect the reason is due to better cache locality.)

      I think this is indeed peak rust.

      It doesn't feel like it, but using an array-index style still preserves many of rust's memory safety guarantees because all array lookups are bounds checked. What it doesn't protect you from is use-after-free bugs.

      Interestingly, I think this style would also be significantly more performant in GC languages like javascript and C#, because a single array-of-objects is much simpler for the garbage collector to keep track of than a graph of nodes & leaves which all reference one another. Food for thought!

    • But Unity game objects are the same way: you allocate them when they spawn into the scene, and you deallocate them when they despawn. Accessing them after you destroyed them throws an exception. This is exactly the same as entity IDs! The GC doesn't buy you much, other than memory safety, which you can get in other ways (e.g. generational indices, like Bevy does).
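      A generational index is only a little extra code on top of the plain Vec approach. A hypothetical sketch (Bevy's actual Entity allocator is more elaborate):

      ```rust
      // Generational-index arena: a handle carries a generation, so a slot
      // that is freed and reused can't be reached through a stale handle.
      #[derive(Clone, Copy)]
      struct Handle {
          index: usize,
          generation: u32,
      }

      struct Arena<T> {
          slots: Vec<(u32, Option<T>)>, // (generation, value)
      }

      impl<T> Arena<T> {
          fn new() -> Self {
              Arena { slots: Vec::new() }
          }

          fn insert(&mut self, value: T) -> Handle {
              // Reuse the first free slot, bumping its generation.
              for (i, slot) in self.slots.iter_mut().enumerate() {
                  if slot.1.is_none() {
                      slot.0 += 1;
                      slot.1 = Some(value);
                      return Handle { index: i, generation: slot.0 };
                  }
              }
              self.slots.push((0, Some(value)));
              Handle { index: self.slots.len() - 1, generation: 0 }
          }

          fn remove(&mut self, h: Handle) -> Option<T> {
              let slot = self.slots.get_mut(h.index)?;
              if slot.0 != h.generation {
                  return None; // stale handle: slot was reused
              }
              slot.1.take()
          }

          fn get(&self, h: Handle) -> Option<&T> {
              let slot = self.slots.get(h.index)?;
              if slot.0 != h.generation {
                  return None;
              }
              slot.1.as_ref()
          }
      }

      fn main() {
          let mut arena: Arena<&str> = Arena::new();
          let goblin = arena.insert("goblin");
          arena.remove(goblin);
          let orc = arena.insert("orc"); // reuses the slot, new generation

          // The stale handle is rejected instead of aliasing the new occupant.
          assert!(arena.get(goblin).is_none());
          assert_eq!(arena.get(orc), Some(&"orc"));
      }
      ```

      Accessing a despawned entity turns into an Option::None (or an error) rather than silently reading whatever now occupies the slot - the same behavior as Unity's destroyed-object exception.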

  • > The lower levels are buggy and have a lot of churn
    >
    > The stack I use is Rend3/Egui/Winit/Wgpu/Vulkan

    The same is true if you try to make GUI applications in Rust. All the toolkits have lots of quirky bugs and broken features.

    The barrier to contributing to toolkits is usually also pretty high too: most of them focus on supporting a variety of open source and proprietary platforms. If you want to improve on something which requires some API change, you need to understand the details of all the other platforms — you can't just make a change for a single one.

    Ultimately, cross-platform toolkits always offer a lowest common denominator (or "the worst of all worlds"), so I think that this common focus in the Rust ecosystem of "make everything run everywhere" ends up being a burden for the ecosystem.

    > Back-references are difficult
    >
    > A owns B, and B can find A, is a frequently needed pattern, and one that's hard to do in Rust. It can be done with Rc and Arc, but it's a bit unwieldy to set up and adds run-time overhead.

    When I code Rust, I'm always hesitant to use an Arc because it adds overhead. But if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc. It's just implicit, so we forget about it.

    We really need to be more liberal in our usage of Arc and stop seeing it as "it has overhead". Any higher level language has the same overhead, it's just not declared explicitly.

    • Arc is a very slow and primitive tool compared to a GC. If you are writing Arc everywhere, you would probably have better performance switching to a JVM language, C#, or Go.

    • Objects are cheaper than Arc<T>. Otherwise using GC would suck a lot more than it does today (for certain types of data structures like trees accessed concurrently it is also a massive optimization).

      Python also has incomparably worse performance than Java or C#, both of which can do many object-based optimizations and optimize away their allocation.

      The "if I then go and code in Python, Java or C#, pretty much all objects have the overhead of an Arc" claim is not accurate. Rust's Arc involves atomic operations, and its performance can greatly degrade when the reference count is being mutated by many threads. See https://pkolaczk.github.io/server-slower-than-a-laptop/

      Java, C# and Go don't use atomic reference counting and don't have such overhead.

  • We've got another one on our end. It's much more to do with Bevy than Rust, though. And I wonder if we would have felt the same if we had chosen Fyrox.

    > Migration - Bevy is young and changes quickly.

    We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice. And the issues we had to deal with were runtime failures, not build time failures. It broke the large libraries we were using, like space_editor, until point releases and bug fixes could land. We ultimately decided to migrate to Three.js.

    > The team decided to invest in an experiment. I would pick three core features and see how difficult they would be to implement in Unity.

    This is exactly what we did! We feared a total migration, but we decided to see if we could implement the features in JavaScript within three weeks. Turns out Three.js got us significantly farther than Bevy, much more rapidly.

    • > We were writing an animation system in Bevy and were hit by the painful upgrade cycle twice.

      I definitely sympathize with the frustration around the churn--I feel it too and regularly complain upstream--but I should mention that Bevy didn't really have anything production-quality for animation until I landed the animation graph in Bevy 0.15. So sticking with a compatible API wasn't really an option: if you don't have arbitrary blending between animations and opt-in additive blending then you can't really ship most 3D games.

  • > These crates also get "refactored" every few months, with breaking API changes

    I am dealing with similar issues in npm now, as someone who is touching Node dev again. The number of deprecations drives me nuts. Seems like I’m on a treadmill of updating APIs just to have the same functionality as before.

    • I’ve found the key to the JS ecosystem is to be very picky about what dependencies you use. I’ve got a number of vanilla Bun projects that only depend on TypeScript (and that is only a dev dependency).

      It’s not always possible to be so minimal, but I view every dependency as lugging around a huge lurking liability, so the benefit it brings had better far outweigh that big liability.

      So far, I’ve only had one painful dependency upgrade in 5 years, and that was Tailwind 3-4. It wasn’t too painful, but it was painful enough to make me glad it’s not a regular occurrence.

    • I’ve found such changes can actually be a draw at first. “Hey look, progress and activity!”. Doubly so as a primarily C++ dev frustrated with legacy choices in stl. But as you and others point out, living with these changes is a huge pain.

  • One thing that struck me was the lavish praise heaped on the ECS of the game engine being migrated away from; this is extremely common.

    I think when it comes to game dev, people fixate on the engine having an ECS and maybe don't pay enough attention to the other aspects of it being good for gamedev, like... being a very high level language that lets you express all the game logic (C# with coroutines is great at this, and remains a core strength of Unity; Lua is great at this; Rust is ... a low level systems language, lol).

    People need to realise that having ECS architecture isn't the only thing you need to build games effectively. It's a nice way to work with your data but it's not the be-all and end-all.

  • And some critical Rust issues for games are not dealt with: on Tiny Glade the devs hit a libgcc issue on the native ELF/Linux build, and we discovered that the Rust toolchain for ELF/Linux targets does not support static linking of libgcc (which is mandatory for games, or any closed-source binary). The issue has been open on the Rust GitHub since 2015...

    But the real issue is that game devs do not know that the GNU (and LLVM-based) toolchains default to building open-source software for ELF/Linux targets, and that there is extra ABI-related work to do for game binaries on those platforms.

  • Not a game dev, but based on what I do know of it, some of this sounds to me like it's just a severe mismatch between Rust's memory model and the needs of games.

    Individually managing the lifetime of every single item you allocate on the heap and fine-grained tracking of ownership of everything on both the heap and the stack makes a lot of sense to me for more typical "line of business" tools that have kind of random and unpredictable workloads that may or may not involve generating arbitrarily complex reference graphs.

    But everything I've seen & read of best practices for game development, going all the way back to when I kept a heavily dogeared copy of Michael Abrash's Black Book close at hand while I made games for fun back in the days when you basically had to write your own 3D engine, tells me that's not what a game engine wants. What a game engine wants, if anything, is something more like an arena allocator. Because fine-grained per-item lifetime management is not where you want to be spending your innovation tokens when the reality is that you're juggling 500 megabyte lumps of data that all have functionally the same lifetime.
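    A minimal sketch of that arena idea, assuming nothing beyond std: everything allocated during a frame shares one lifetime and is reclaimed wholesale, with no per-item bookkeeping.

    ```rust
    // Bump/arena allocator sketch: one big buffer, a cursor, and an O(1) reset.
    struct FrameArena {
        buf: Vec<u8>,
        used: usize,
    }

    impl FrameArena {
        fn new(capacity: usize) -> Self {
            FrameArena { buf: vec![0; capacity], used: 0 }
        }

        // Hand out a slice of the frame's memory; there is no per-item free.
        fn alloc(&mut self, size: usize) -> Option<&mut [u8]> {
            if self.used + size > self.buf.len() {
                return None; // frame budget exhausted
            }
            let start = self.used;
            self.used += size;
            Some(&mut self.buf[start..start + size])
        }

        // End of frame: every allocation above dies at once.
        fn reset(&mut self) {
            self.used = 0;
        }
    }

    fn main() {
        let mut arena = FrameArena::new(1024);

        // During the frame, allocation is just a pointer bump.
        let chunk = arena.alloc(256).expect("frame arena exhausted");
        chunk[0] = 42;

        // Wholesale reclamation, no per-item lifetime tracking.
        arena.reset();
    }
    ```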

  • Great write-up. I do the array indexing too, and I get runtime errors from misindexing more often than I'd like to admit!

    I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.

    • I've always thought about this. In my mind there are two ways a language can guarantee memory safety:

      * Check all array accesses and pointer dereferences at run time, and panic/throw an exception/etc. if we are out of bounds or doing something wrong.

      * Guarantee at compile-time that we are always accessing valid memory, to prevent even those panics.

      Rust makes a lot of effort to reach the second goal, but, since it gives you integers and arrays, it makes the problem fundamentally insoluble.

      The memory it wants so hard to regulate access to is just an array, and a pointer is just an index.
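      Safe Rust shows both strategies on the same Vec: plain indexing is the run-time check (it panics out of bounds), while get turns the same failure into an ordinary value:

      ```rust
      fn main() {
          let v = vec![10, 20, 30];

          // Strategy 1, run-time checking: `v[7]` would panic with
          // "index out of bounds" rather than read arbitrary memory.

          // The checked accessor surfaces the same failure as a value:
          assert_eq!(v.get(1), Some(&20));
          assert_eq!(v.get(7), None);

          // Once a "pointer" is just an integer, the compile-time
          // machinery can no longer see what it refers to:
          let fake_pointer: usize = 2;
          assert_eq!(v[fake_pointer], 30);
      }
      ```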

    • > I also hear you on the winit/wgpu/egui breaking changes. I appreciate that the ecosystem is evolving, but keeping up is a pain. Especially when making them work together across versions.

      Yes.

      Three months ago, when the Rust graphics stack achieved sync, I wrote a congratulatory note.[1]

          Everybody is in sync!
      
              wgpu 24
              egui 0.31
              winit 0.30
      
          all play well together using the crates.io versions. No patch overrides! Thanks, everybody.
      

      Wgpu 25 is now out, but the others are not in sync yet. Maybe this summer.

      [1] https://www.reddit.com/r/rust_gamedev/comments/1iiu3mr/every...

  • > These crates also get "refactored" every few months, with breaking API changes, which breaks the stack for months at a time until everyone gets back in sync.

    This was a problem with early versions of Scala as well, exacerbated by the language and core libs shifting all the time. It got so difficult to keep things up to date with all the cross compatibility issues that the services written in it ended up stuck on archaic versions of old libraries. It was a hard lesson in if you're doing a non-hobby project, avoid languages and communities that behave like this until they've finally stabilized.

  • This is probably brought up whenever an article mentions “wasted time”, but I wonder what percentage of side and “main” software projects “fail”- so we have to define side vs main and what it means to fail (I would imagine failure looks different for each), but anecdotally, none of my side projects have made money, but at least one I would call “done”, so… success?

    A fear I have with larger side projects is the notion that it could all be for nought, though I suppose that’s easily-mitigated by simply keeping side projects small, iterative if necessary. Start with an appropriate-sized MVP, et al.

  • > Nobody has really pushed the performance issues.

    This is clearly false. The Bevy performance improvements that I and the rest of the team landed in 0.16 speak for themselves [1]: 3x faster rendering on our test scenes and excellent performance compared to other popular engines. It may be true that little work is being done on rend3, but please don't claim that there isn't work being done in other parts of the ecosystem.

    [1]: https://bevyengine.org/news/bevy-0-16/

    • I read the original post as saying that no one has pushed the engine to the extent a completed AAA game would in order to uncover performance issues, not that performance is bad or that Bevy devs haven’t worked hard on it.

  • Why is this sad? He's realized that the best language is C# and the best platform for games is Unity! This is progress, and that's good.

  • > A owns B, and B can find A

    I think you should think less like Java/C# and more like database.

    If you have a Comment object that has a parent, you need to store the parent as a 'reference', because you can't embed the entire parent object.

    So I'll probably use Box here to refer to the parent

    • ?? The whole point of Box<T> is to be an owning reference; you can't have multiple children refer to the same parent object if you use a Box.

    • If you use a Box to refer to the parent, then the parent cannot own the child (unless you use things like Arc<Mutex<>>).

  • > Someone else set out to do something similar in C#/Unity and had something going in less than two years.

    But in that case doesn't the garbage collector ruin the experience for the user? Because that's the argument I always hear in favor of Rust.

    • For a while now, Unity has had an incremental garbage collector, where you pay a small amount of time per frame instead of introducing large pauses every time the GC kicks in.

      Even without the incremental GC it's manageable, and it's just part of optimising the game. It depends on the game, but you can often get down to 0 allocations per frame by making use of pooling and no-alloc APIs in the engine.

      You also have the tools to pause GC so if you're down to a low amount of allocation you can just disable the GC during latency sensitive gameplay and re-enable and collect on loading/pause or other blocking screens.

      Obviously it's more work than not having to deal with these issues, but for game developers it's probably a more familiar topic than working with the borrow checker, and critically it allows for quicker iteration and prototyping.

      Finding the fun and time to market are top priorities for game development.

More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.

That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

  • > More than anything else, this sounds like a good lesson in why commercial game engines have taken over most of game dev. There are so many things you have to do to make a game, but they're mostly quite common and have lots of off-the-shelf solutions.

    > That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

    But using Bevy isn't writing your own game engine. Bevy is 400k lines of code that does quite a lot. Using Bevy right now is more like taking a game engine and filling in some missing bits. While this is significantly more effort than using Unity, it's an order of magnitude less work than writing your own game engine from scratch.

  • > That is, any sufficiently mature indie game project will end up implementing an informally specified, ad hoc, bug-ridden implementation of Unity (... or just use the informally specified, ad hoc and bug-ridden game engine called "Unity")

    For the 4 people on HN not aware of it, this is a riff on Greenspun's tenth rule:

    > Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.

  • I think this has less to do with Rust and commercial game engines being better and more of a fetish that game programmers seem to have for entity component systems. One does not have to look far to see similar projects repeated in C++ years prior.

    • ECS is basically the realization that relational databases are a pretty damn good model.

      I’m suspicious though that you could probably get away with literally just using like an in-memory duckdb to store your game state and get most of the performance/modeling value while also getting a more powerful/robust query engine — especially for like turn-based games. I’m also not sure that bevy’s encoding of queries into the type system is all that sane — as opposed to something like query building with LINQ, but I think it’s how they get to resolve the system dependency graph for parallelization

  • And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.

    • If anything, making your own game engine makes the process more frustrating and time-consuming, and leads to burnout quicker than ever, especially when your initial goal was just to make a game but instead you're stuck figuring out your own render pipeline or reinventing some other wheel. I get a headache just from thinking that at some point in engine development a person has to spend literal weeks figuring out export to Android with proper signing and all, when, again, all they wanted was to just make a game.

    • I would bet that if you want to build a game engine and not the game, the game itself is probably not that compelling. Could still break out, like Minecraft, but if someone has an amazing game idea I would think they would want to ship it as fast as possible.

    • Making an actual indie game can take from 6 months (tiny) to 4-5 years. If you multiply that by 10x, the upper bound would be 40-50 years. Of course that's not how it would actually go, but one has to consider whether the goal is to build a game engine OR a game; doing both at the same time is almost guaranteed failure (statistically speaking).

    • > And yet, if making your own game engine makes it intellectually stimulating enough to actually make and ship a game, usually for near free, going 10x slower is still better than going at a speed of zero.

      Generally, I've seen the exact opposite. People who code their own engines tend to get sucked into the engine and forget that they're supposed to be shipping a game. (I say this as someone who has coded their own engine, multiple times, and ended up not shipping a game--though I had a lot of fun working on the engine.)

      The problem is that the fun, cool parts about building your own game engine are vastly outnumbered by the boring parts: supporting level and save data loading/storage, content pipelines, supporting multiple input devices and things like someone plugging in an XBox controller while the game is running and switching all the input symbols to the new input device in real time, supporting various display resolutions and supporting people plugging in new displays while the game is running, and writing something that works on PC/mobile/Switch(2)/XBox/Playstation... all solved problems, none of which are particularly intellectually stimulating to solve correctly.

      If someone's finances depend on shipping a game that makes money, there's really no question that you should use Unity or Unreal. Maybe Godot but even that's a stretch. There's a small handful of indie custom game engine success stories, including some of my favorites like The Witness and Axiom Verge, but those are exceptions rather than the rule. And Axiom Verge notably had to be deeply reworked to get a Switch release, because it's built on MonoGame.

    • Being intellectually stimulating doesn't translate into sales, gameplay might.

    • My experience is the opposite. Plenty of intellectual stimulation comes from actually making the game. Designing and refining gameplay mechanics, level design, writing shaders, etc.

      What really drags you down in games is iteration speed. It can be fun making your own game engine at first but after awhile you just want the damn thing to work so you can try out new ideas.

I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year. When reasonable, nowadays I always use Rust instead of C++.

But for the vast majority of projects, I believe that C++ is not the right language, meaning that Rust isn't, either.

I feel like many people choose Rust because it sounds more efficient, a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not) or for C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project).

It's a bit like choosing Gentoo "because it's faster" (or worse, because it "sounds cool"). If that's the only reason, it's probably a bad choice (disclaimer: I use and love Gentoo).

  • I have a personal-use app that has a hot loop that (after extensive optimization) runs for about a minute on a low-powered VPS to compute a result. I started in Java and then optimized the heck out of it with the JVM's (and IntelliJ's) excellent profiling tools. It took one day to eliminate all excess allocations. When I was confident I couldn't optimize the algorithm any further on the JVM I realized that what I'd boiled it down to looked an awful lot like Rust code, so I thought why not, let's rewrite it in Rust. I took another day to rewrite it all.

    The result was not statistically different in performance from my Java implementation. Each took the same amount of time to complete. This surprised me, so I made triply sure that I was using the right optimization settings.

    Lesson learned: Java is easy to get started with out of the box, memory safe, battle tested, and the powerful JIT means that if warmup times are a negligible factor in your usage patterns your Java code can later be optimized to be equivalent in performance to a Rust implementation.

    • I wrote a few benchmarks a few years ago comparing JS vs C++ compiled to WASM vs C++ compiled to x64 with -O3.

      I was surprised that the heaviest one (a lot of float math) ran at about the same speed in JS vs C++ -> x64. The code was several nested for loops manipulating a buffer and using only local-scoped variables and built-in Math library functions (like sqrt), with no JS objects/arrays besides the buffer. So the code of both implementations was actually very similar.

      The C++ -> WASM version of that one benchmark was actually significantly slower than both the JS and C++ -> x64 version (again, a few years ago, I imagine it got better now).

      Most compilers are really good at optimizing code if you don't use the weird "productivity features" of your higher-level language. The main difference with lower-level languages is that not being able to use those productivity features prevents you from accidentally tanking performance without noticing.

      I still hope to see the day when a language has multiple "running modes", where you can make an individual module/function compile with a different feature set to guarantee higher performance. The closest thing we have to this today is Zig using custom allocators (where opting out of receiving an allocator means no heap allocations are guaranteed for the rest of the call stack) and @setRuntimeSafety(false), which disables runtime safety checks (when using the ReleaseSafe compilation target) for a single scope.

      2 replies →

    • >I realized that what I'd boiled it down to looked an awful lot like Rust code

      you're no longer writing idiomatic java at this point - probably with zero object oriented programming. so might as well write it in Rust from the get-go.

      4 replies →

  • I write a lot of Rust, but as you say, it's basically a vastly improved version of C++. C++ is not always the right move!

    For all my personal projects, I use a mix of Haskell and Rust, which I find covers 99% of the product domains I work in.

    Ultra-low level (FPGA gateware): Haskell. The Clash compiler backend lets you compile (non-recursive) Haskell code directly to FPGA. I use this for audio codecs, IO expanders, and other gateware stuff.

    Very low-level (MMUless microcontroller hard-realtime) to medium-level (graphics code, audio code): Rust dominates here

    High-level (have an MMU, OS, and desktop levels of RAM, not sensitive to ~0.1ms GC pauses): Haskell becomes a lot easier to productively crank out "business logic" without worrying about memory management. If you need to specify high-level logic, implement a web server, etc. it's more productive than Rust for that type of thing.

    Both languages have a lot of conceptual overlap (ADTs, constrained parametric types, etc.), so being familiar with one provides some degree of cross-training for the other.

    • What do you mean by 'a mix of Haskell and Rust'? Is that a per-project choice or do you use both in a single project? I'm interested in the latter. If so, could you point me to an example?

      Another question is about Clash. Your description sounds like the HLS (high-level synthesis) approach. But I thought that Clash used a Haskell-based DSL, making it a true HDL. Could you clarify this? Thanks!

      1 reply →

  • > "I really like Rust as a replacement for C++, especially given that C++ seems to become crazier every year."

    I don't understand this argument, which I've also seen used against C# quite frequently. When a language offers new features, you're not forced to use them. You generally don't even need to learn them if you don't want to. I do think some restrictions in languages can be highly beneficial, like strong typing, but the difference is that in a weakly typed language that 'feature' is forced upon you, whereas a random new feature in C++ or C# is nearly always backwards compatible and opt-in only.

    For instance, to take a dated example, consider move semantics in C++. If you never used it anywhere at all, you'd have 0 problems. But once you do, you get lots of neat things for free. And for features of this sort, I see no reason to ever oppose their endless introduction unless they start to imperil the integrity/performance of the compiler, which is clearly not happening.

    • You can't avoid a lot of this stuff, once libraries start using it or colleagues add it to your codebase then you need to know it. I'd argue you need to know it well before you decide to exclude it.

    • Then you had better be quite picky about which libraries you choose, because that is the thing: while we may not use those features ourselves, the libraries might impose them on us.

      The same applies to dealing with old features that have been replaced by modern ways: old codebases don't get magically rewritten, and someone has to understand both the modern and the old ways.

      Likewise, I am not a big fan of C and Go, as is visible in my comment history, yet I know them well enough: in theory I am not forced to use them, but in practice there are business contexts where I do have to.

    • My experience with C++ is that it fundamentally "looks worse" and has worse tooling than more modern languages. And it feels like they keep adding new features that make it all even worse every year.

      Sure, you don't have to use them, but you have to understand them when used in libraries you depend on. And in my experience in an environment of C++ developers, many times you end up having some colleagues who are very vocal about how you should love the language and use all the new features. Not that this wouldn't happen in Java or Kotlin, but the fact is that new features in those languages actually improve the experience with the language.

      1 reply →

  • >> a bit as if people went for C++ instead of a JVM language "because the JVM is slow" (spoiler: it is not)

    The OP is doing game development. It’s possible to write a performant game in Java but you end up fighting the garbage collector the whole way and can’t use much library code because it’s just not written for predictable performance.

    • I didn't mean that the OP should use Java. BTW the OP does not use C++, but Rust.

      This said, they moved to Unity, which is C#, which is garbage collected, right?

      7 replies →

  • VPS/Cloud providers skimp on RAM. The JVM sucks for any low RAM workload, where you want the smallest possible single server instance. The startup times of JVM based applications are also horrendous. How many gigabytes of RAM does Digital Ocean give you with your smallest instance? They don't. They give you 512MiB. Suddenly using Java is no longer an option, because you will be wasting your day carefully tuning literally everything to fit in that amount.

    • You can get decent startup times if you have fewer dependencies. The JVM itself starts fairly quickly (<200 ms); the problem is all the class loading. If your "app" is a bloated multi-gigabyte monstrosity... good luck!

  • I think the choice of C++ vs JVM depends on your project. If you're not using the benefits of "unsafe" languages then it probably doesn't matter.

    But if you are after performance, how do you do the following in Java?

    - Build an AoS so that memory access is linear with respect to the cache.
    - Prefetch.
    - Use things like _mm_stream_ps() to tell the CPU the cache line you're writing to doesn't need to be fetched.
    - Share a buffer of memory between processes by atomically incrementing a head pointer.

    I'm pretty sure you could build an indie game without low-level C++, but there is a reason that commercial gamedev is typically C++.

    • While there are many technical reasons to use C++ over Java in game development, many commercial games could be easily done in Java, as they are A or AA level at most.

      Had Notch thought too much about which language to use, maybe he would still be trying to launch a game today.

      4 replies →

    • > but there is a reason that commercial gamedev is typically C++.

      Sure, and that's kind of my point. There are a few use-cases where C++ is actually needed, and for those cases, Rust (the language) is a good alternative if it's possible to use it.

      But even for gamedev, the article here says that they moved to Unity. The core of Unity is apparently C++, but users of Unity code in C#. Which kind of proves my point: outside of that core that actually needs C++, it doesn't matter much. And the vast majority of software development is done outside of those core use-cases, meaning that the vast majority of developers do not need Rust.

      1 reply →

  • Rust is very easy when you want to do easy things. You can actually just completely avoid the borrow-checker altogether if you want to. Just .clone(), or Arc/Mutex. It's what all the other languages (like Go or Java) are doing anyway.

    But if you want to do a difficult and complicated thing, then Rust is going to raise the guard rails. Your program won't even compile if it's unsafe. It won't let you make a buggy app. So now you need to back up and decide if you want it to be easy, or you want it to be correct.

    Yes, Rust is hard. But it doesn't have to be if you don't want it to be.
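    As a minimal sketch of that Arc/Mutex escape hatch (the function and values here are illustrative, not from any real codebase): shared state goes behind `Arc<Mutex<_>>`, and each thread gets its own cloned handle instead of a borrow.

```rust
use std::sync::{Arc, Mutex};
use std::thread;

// Fill a shared vector from several threads via the Arc/Mutex "escape
// hatch": clone the Arc handle per thread instead of threading
// lifetimes and borrows through the type system.
fn parallel_fill(n: usize) -> Vec<u32> {
    let data = Arc::new(Mutex::new(vec![0u32; n]));
    let handles: Vec<_> = (0..n)
        .map(|i| {
            let data = Arc::clone(&data); // cheap refcount bump, not a deep copy
            thread::spawn(move || {
                data.lock().unwrap()[i] = (i as u32 + 1) * 10;
            })
        })
        .collect();
    for h in handles {
        h.join().unwrap();
    }
    // All threads are joined, so this Arc is the sole owner again.
    Arc::try_unwrap(data).unwrap().into_inner().unwrap()
}

fn main() {
    assert_eq!(parallel_fill(3), vec![10, 20, 30]);
}
```

    The trade-off is exactly the one described above: you pay a little runtime overhead (atomic refcounts, lock acquisition) in exchange for not having to design around the borrow checker, which is roughly the deal Go and Java give you by default.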

    • This argument goes only so far. Would you consider querying a database hard? Most developers would say no. But it's actually a pretty hard problem if you want to do it safely. In Rust, that difficulty leaks into the crates. I have a project that uses Diesel, and making even a single composable query is a tangle of uppercase Type soup.

      This just isn’t a problem in other languages I’ve used, which granted aren’t as safe.

      I love Rust. But saying it’s only hard if you are doing hard things is an oversimplification.

      8 replies →

    • If you use Rust with `.clone()` and Arc/Mutex, why not just use one of the myriad other modern and memory-safe languages, like Go, Scala/Kotlin/Java, C#, or Swift?

      The whole point of Rust is to bring memory safety with zero cost abstraction. It's essentially bringing memory safety to the use-cases that require C/C++. If you don't require that, then a whole world of modern languages becomes available :-).

      6 replies →

  • This couldn't be any more accurate even if you compiled with CFLAGS='-march=native' and RUSTFLAGS='-C can't remember insert here'

  • > C instead of C++ because "it's faster" (spoiler: it probably doesn't matter for your project)

    If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.

    • > C++ has been faster than C for a long time.

      What is your basis for this claim? C and C++ are both built on essentially the same memory and execution model. There is a significant set of programs that are valid C and C++ both -- surely you're not suggesting that merely compiling them as C++ will make them faster?

      There's basically no performance technique available in C++ that is not also available in C. I don't think it's meaningful to call one faster than the other.

      30 replies →

    • > If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time.

      In certain cases, sure - inlining potential is far greater in C++ than in C.

      For idiomatic C++ code that doesn't do any special inlining, probably not.

      IOW, you can rework fairly readable C++ code to be much faster by making an unreadable mess of it. You can do that for any language (C included).

      But what we are usually talking about when comparing runtime performance in production code is the idiomatic code, because that's how we wrote it. We didn't write our code to resemble the programs from the language benchmark game.

    • I doubt that, because C++ encourages heavy use of dynamic memory allocations and data structures with external nodes. C encourages intrusive data structures, which eliminate many of the dynamic memory allocations done in C++. You can do intrusive data structures in C++ too, but it clashes with the object-oriented idea of encapsulation, since an intrusive data structure touches fields of the objects inside it. I have never heard of someone modifying a class definition just to add objects of that class to a linked list, for example, yet that is what is needed if you want to use intrusive data structures.

      While I do not doubt some C++ code uses intrusive data structures, I doubt very much of it does. Meanwhile, C code using <sys/queue.h> uses intrusive lists as if they were second nature. C code using <sys/tree.h> from libbsd uses intrusive trees as if they were second nature. There is also the intrusive AVL trees from libuutil on systems that use ZFS and there are plenty of other options for such trees, as they are the default way of doing things in C. In any case, you see these intrusive data structures used all over C code and every time one is used, it is a performance win over the idiomatic C++ way of doing things, since it skips an allocation that C++ would otherwise do.

      The use of intrusive data structures also can speed up operations on data structures in ways that are simply not possible with idiomatic C++. If you place the node and key in the same cache line, you can get two memory fetches for the price of one when sorting and searching. You might even see decent performance even if they are not in the same cache line, since the hardware prefetcher can predict the second memory access when the key and node are in the same object, while the extra memory access to access a key in a C++ STL data structure is unpredictable because it goes to an entirely different place in memory.

      You could say if you have the C++ STL allocate the objects, you can avoid this, but you can only do that for 1 data structure. If you want the object to be in multiple data structures (which is extremely common in C code that I have seen), you are back to inefficient search/traversal. Your object lifetime also becomes tied to that data structure, so you must be certain in advance that you will never want to use it outside of that data structure or else you must do at a minimum, another memory allocation and some copies, that are completely unnecessary in C.

      Exception handling in C++ also can silently kill performance if you have many exceptions thrown and the code handles it without saying a thing. By not having exception handling, C code avoids this pitfall.

      3 replies →

    • > If your C is faster than your C++ then something has gone horribly wrong. C++ has been faster than C for a long time. C++ is about as fast as it gets for a systems language.

      That's interesting, did ChatGPT tell you this?

  • I agree with you except for the JVM bit - but everyone's application varies

    • My point is that there are situations where C++ (or Rust) is required because the JVM wouldn't work, but those are niche.

      In my experience, most people who don't want a JVM language "because it is slow" tend to take this as a principle, and when you ask why their first answer is "because it's interpreted". I would say they are stuck in the 90s, but probably they just don't know and repeat something they have heard.

      Similar to someone who would say "I use Gentoo because Ubuntu sucks: it is super slow". I have many reasons to like Gentoo better than Ubuntu as my main distro, but speed isn't one in almost all cases.

      2 replies →

  • Rust is actually quite suitable for a number of domains where it was never intended to excel.

    Writing web service backends is one domain where Rust absolutely kicks ass. I would choose Rust/(Actix or Axum) over Go or Flask any day. The database story is a little rough around the edges, but it's getting better and SQLx is good enough for me.

    edit: The downvoters are missing out.

    • To me, web dev really sounds like the one place where everything works and it's more a question of what is in fashion. Java, Ruby, Python, PHP, C, C++, Go, Rust, Scala, Kotlin, probably even Swift? And of course NodeJS was made for that, right?

      I am absolutely convinced I can find success story of web backends built with all those languages.

      29 replies →

The fact that people love the language is an unexpected downside. In my experience the rust ecosystem has an insanely high churn rate. Crates are often abandoned seemingly for no reason, often before even hitting 1.0. My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game. Once all the fun parts are solved, they leave it for dead.

Conversely and ironically, this is why I love Go. The language itself is so boring and often ugly, but it just gets out of the way and has the best in class tooling. The worst part is having seen the promised land of eg Rust enums, and not having them in other langs.

  • This.

    Feeling passionate about a programming language is generally bad for the products made with that language.

  • > Crates are often abandoned seemingly for no reason, often before even hitting 1.0.

    That is the one of the first things my colleagues told me after trying Rust for a few weeks: a laaaarge number of crates under 1.0, and so many abandoned crates, still published in crates.io. Some of those have even reported CVEs due to heavy `unsafe` usage for... nothing.

    I love Rust, but I have the feeling that the language (and its community) lost the point since the release of the 2018 edition.

  • I find it interesting how the software industry has done everything it can to ignore F#. This is me just lamenting how I always come back to it as the best general purpose language.

    • Probably the intersection of people who (a) want an advanced ML-style language and (b) are interested in a CLR-based language is very small. But also, doesn't it do some weird thing where it matters in what order the files are included in the compilation? I remember being interested in F# but being turned off by that, and maybe some other weird details.

      2 replies →

    • I don’t want to use a language with unknown ecosystem. If I need a library to do X, I’m confident I can find it for Go, Java, Python etc. But I don’t know about F#.

      I also don’t want to use a language with questionable hireability.

      2 replies →

    • To within a rounding error of zero, I don't think anyone outside Windows devs truly expects Microsoft to maintain .NET on other platforms, so it's not really an option in many (most?) fields.

      They've effectively dropped it a couple times in the past, and while they're currently putting effort in, the company as a whole does not seem to care about stuff like this beyond brief bursts of attention to try to win back developer mindshare, before going back to abandonment. It's what Microsoft is rather well known for.

      8 replies →

    • Huh? Usually languages that are "ignored" turn out to be ignored for boring reasons such as poor or proprietary tooling. As an ignorant bystander, how are things like:

      Cross compilation, package manager and associated infrastructure, async IO (epoll, io_uring, etc.), platform support, runtime requirements, FFI support, language server, etc.?

      Are a majority of these available with first-party (or best-in-class) integrated tooling that is trivial to set up on all big three desktop platforms?

      For instance, can I compile an F# lib to an iOS framework, ideally with automatically generated bindings for C, C++ or Objective-C? Can I use private repo (i.e. GitHub) URLs with automatic overrides while pulling deps?

      Generally, the answer to these questions for – let's call them "niche"* – languages is "there is a GitHub project with 15 stars, last updated 3 years ago, that maybe solves that problem".

      There are tons of amazing languages (or at the very least, underappreciated language features) that didn’t ”make it” because of these boring reasons.

      My entire point is that the older and grumpier I get, the less the language itself matters. Sure, I hate it when my favorite elegant feature is missing, but at the end of the day it’s easy to work around. IMO the navel gazing and bikeshedding around languages is vastly overhyped in software engineering.

      4 replies →

  • > My theory is this is because people want to use rust primarily, the domain problem is just a challenge, like a level in a game.

    So you mean, Rust is more of an intellectual playground, than an actual workbench? I'm curious how high the churn rate of packages in other languages is, like python or ruby (let's not talk about javascript). Could this be the result of rust being still rather young and moving fast?

    > Conversely and ironically, this is why I love Go.

    Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?

    • > So you mean, Rust is more of an intellectual playground, than an actual workbench?

      Both. IMO rust shines as a C++ replacement, ie low level high performance. But officially its general purpose and nobody will admit that it’s a bad tool for high level jobs, it’s awful for prototyping. I say this as someone who loves many aspects of rust.

      > Could this be the result of rust being still rather young and moving fast?

      My hunch says it’s something different. Look at the people and their motivations. I’ve never seen such a distinct fan club in programming before. It comes with a lot of passion and extreme talent, but I don’t think it’s a coincidence that governance has been a shitshow and that there are 4 different crates solving the same problem where maintainers couldn’t agree on something minor. It makes sense, if aesthetics is a big factor.

      > Is Go still forcing hard wired paths in $HOME for compiling, or what was it again?

      Nothing I’ve noticed. Are you talking about GOPATH hell, from back in the day?

  • Can you speak more of this best in class tooling?

    • The official `go` command does dep management, (cross) compilation, testing (including benchmarks and coverage reports), race detection, profiling reports, code generation (metaprogramming alternative), doc generation etc. Build times are insanely fast too.

      The only tooling I use personally outside of the main CLI is building iOS/Android static libraries (gomobile). It’s still first party, but not in the go command.

      3 replies →

I think this is a problem of using the right abstractions.

Rust gamedev is the Wild West, and frontier development incurs the frontier tax. You have to put a lot of work into making an abstraction, even before you know if it’s the right fit.

Other “platforms” have the benefit of decades more work sunk into finding and maintaining the right abstractions. Add to that the fact that Rust is an ML in sheep’s clothing, and that games and UI in FP has never been a solved problem (or had much investment even), it’s no wonder Rust isn’t ready. We haven’t even agreed on the best solutions to many of these problems in FP, let alone Rust specifically!

Anyway, long story short, it takes a very special person to work on that frontier, and shipping isn’t their main concern.

I love Rust, but this lines up with my experience roughly. Especially the rapid iteration. Tried things out with Bevy, but I went back to Godot.

There are so many QoL things which would make Rust better for gamedev without revamping the language. Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev. But that's a really hard sell (and might be harder to implement than I imagine.)

  • I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!

    GHC has an -fdefer-type-errors option that lets you compile and run this code:

        a :: Int
        a = 'a'
        main = print "b"

    Which obviously doesn't typecheck since 'a' is not an Int, but will run just fine since the value of `a` is not observed by this program. (If it were observed, -fdefer-type-errors guarantees that you get a runtime panic when it happens.) This basically gives you the no-types Python experience when iterating, then you clean it all up when you're done.

    This would be even better in cases where it can be automatically fixed. Just like how `cargo clippy --fix` will automatically fix lint errors whenever it can, there's no reason it couldn't also add explicit coercions of numeric types for you.

    • > I wish more languages would lean into having a really permissive compiler that emits a lot of warnings. I have CI so I'm never going to actually merge anything that makes warnings. But when testing, just let me do whatever I want!

      I’d go even further and say I wish my whole development stack had a switch I can use to say “I’m not done iterating on this idea yet, cool it with the warnings.”

      Unused imports, I’m looking at you… stop bitching that I’m not using this import line simply because I commented out the line that uses it in order to test something.

      Stop complaining about dead code just because I haven’t finished wiring it up yet, I just want to unit test it before I go that far.

      Stop complaining about unreachable code because I put a quick early return line in this function so that I could mock it to chase down this other bug. I’ll get around to fixing it later, I’m trying to think!

      In rust I can go to lib.rs somewhere and #![allow(unused_imports,dead_code,etc)] and then remember to drop it by the time I get the branch ready for review, but that’s more cumbersome than it ought to be. My whole IDE/build/other tooling should have a universal understanding of “this is a work in progress please let me express my thoughts with minimal obstructions” mode.
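      In case it's useful, here's roughly what that crate-level switch looks like in practice (a sketch to drop before review; the function below is a made-up placeholder):

```rust
// At the top of lib.rs or main.rs while iterating; the inner attribute
// silences these warning classes crate-wide.
#![allow(dead_code, unused_imports, unused_variables)]

use std::collections::HashMap; // currently unused; no warning emitted

// Not wired up to anything yet, but compiles without "dead code" noise.
fn not_wired_up_yet(map: HashMap<String, u32>) -> usize {
    map.len()
}

fn main() {
    println!("prototype builds clean");
}
```

      The same list can also go in `Cargo.toml` under `[lints.rust]`, or be passed per-build via `RUSTFLAGS`, but either way you still have to remember to take it out, which is the complaint.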

      2 replies →

    • Yeah this is my absolute dream language. Something that lets you prototype as easily as Python but then compile as efficiently and safely as Rust. I thought Rust might actually fit the bill here and it is quite good but it's still far from easy to prototype in - lots of sharp edges with say modifying arrays while iterating, complex types, concurrency. Maybe Rust can be something like this with enough unsafe but I haven't tried. I've also been meaning to try more Typescript for this kind of thing.

      7 replies →

  • Yeah, I tinkered for around a year with a Bevy competitor, Amethyst, until that project shut down. By now, I just don't think Rust is good for client-side or desktop game development.

    In my book, Rust is good at moving runtime-risk to compile-time pain and effort. For the space of C-Code running nuclear reactors, robots and missiles, that's a good tradeoff.

    For the space of making an enemy move in the other direction from the player in 80% of the cases, except for that story choice, and also inverted, and spawning impossible enemies a dozen times if you killed that cute enemy over yonder, and.... and the worst case being a crash of the game and a revert to a save at level start.... less so.

    And these are very regular requirements in a game, tbh.

    And a lot of _very_silly_physics_exploits_ are safely typed float interactions going entirely nuts, btw. Type safety doesn't help there.

  • > Just a mode to automatically coerce between numeric types would make Rust so much more ergonomic for gamedev.

    C# is stricter about float vs. double for literals than Rust is, and the default in C# (double) is the opposite of the one you want for gamedev. That hasn't stopped Unity from gaining enormous market share. I don't think this is remotely near the top issue.

    • I have written a lot of C# and I would very much not want to use it for gamedev either. I can only speak for my own personal preference.

  • I used to hate the language, but statically typed GDScript feels like the perfect weight for indie development.

  • What numeric types typically need conversions?

    • What I mean is, I want to be able to use i32/i64/u32/u64/f32/f64s interchangeably, including (and especially!) in libraries I don't own.

      I'm usually working with positive values, and almost always with values within the range of integers f32 can safely represent (+- 16777216.0).

      I want to be able to write `draw(x, y)` instead of `draw(x as u32, y as u32)`. I want to write "3" instead of "3.0". I want to stop writing "as".

      It sounds silly, but it's enough to kill that gamedev flow loop. I'd love if the Rust compiler could (optionally) do that work for me.
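      One partial workaround today is a generic shim over `Into` (the `draw` below is hypothetical, not a real API): it accepts any argument with a lossless conversion to the target type, so mixed integer/float call sites need no `as`.

```rust
// Hypothetical draw call that takes anything losslessly convertible to
// f64, so callers can pass i32, u32, f32, ... without writing `as`.
fn draw(x: impl Into<f64>, y: impl Into<f64>) -> (f64, f64) {
    (x.into(), y.into())
}

fn main() {
    assert_eq!(draw(3, 4u32), (3.0, 4.0)); // integer literals, no casts
    assert_eq!(draw(1.5f32, 2i16), (1.5, 2.0)); // mixed float/int args
}
```

      It only goes so far, though: there is no lossless `From<u64>` or `From<usize>` for `f64`, so those still need explicit casts, and it does nothing for libraries you don't own, which is exactly why people keep asking for a compiler mode.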

      3 replies →

One of the smartest devs I know built his game from scratch in C. Pretty complex game too - 3D open-world management game. It's now successful on steam.

Thing is, he didn't make the game in C. He built his game engine in C, and the game itself in Lua. The game engine is specific to this game, but there's a very clear separation where the engine ends and the game starts. This has also enabled amazing modding capabilities, since mods can do everything the game itself can do. Yes they need to use an embedded scripting language, but the whole game is built with that embedded scripting language so it has APIs to do anything you need.

For those who are curious - the game is 'Sapiens' on Steam: https://store.steampowered.com/app/1060230/Sapiens/

  • I agree that the game is amazing from a technical point of view, but look at the reviews and the pace of development. The updates are sparse and slow, and when there is an update, it's barely an improvement. This is one of the disadvantages of creating a game engine from scratch: more time is spent on the engine than the game itself, which may or may not be bad depending on which perspective you look at it from.

  • Do you know why he supports MacOS, but not Linux?

    • Most likely because they don't use Linux. Or because it's kind of a minefield to support, with bugs that occur on different distros. Even Unity has its own struggles with Linux support.

      They're distributing their game on Steam too so Linux support is next to free via Proton.

      3 replies →

    • It probably supports Linux via Proton. Done. Official Valve recommendation from a few years ago; not sure if it's still active.

I did the same for my project and moved to Go from Rust. My iteration is much faster, but the code is a bit more brittle, especially for concurrency. Tests have become more important.

Still, given the nature of what my project is (APIs and basic financial stuff), I think it was the right choice. I still plan to write about 5% of the project in Rust and call it from Go, if required, as there is a piece of code that simply cannot be fast enough, but I estimate for 95% of the project Go will be more than fast enough.

  • > but the code a bit more brittle, esp. for concurrency

    Obligatory ”remember to `go run -race`”, that thing is a life saver. I never run into difficult data races or deadlocks and I’m regularly doing things like starting multiple threads to race with cancelation signals, extending timeouts etc. It’s by far my favorite concurrency model.

    • Yep, I do use that, but after getting used to Rust's Send/Sync traits, it feels wild and crazy that there are now no guardrails on memory access between threads. More a feel thing than reality, but I just find I need to be a bit more careful.
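      Those guardrails can be made concrete with a tiny sketch (the helper name is illustrative): a `T: Send` bound is the compile-time check that the Go race detector can only approximate at runtime.

```rust
use std::rc::Rc;
use std::sync::Arc;

// Passes through only values that are safe to move to another thread.
fn assert_send<T: Send>(v: T) -> T {
    v
}

fn main() {
    // Arc uses atomic refcounts, so it is Send: this compiles and runs.
    assert_eq!(*assert_send(Arc::new(5)), 5);

    // Rc uses non-atomic refcounts, so it is not Send: uncommenting the
    // next line is a compile error, not a runtime data race.
    // assert_eq!(*assert_send(Rc::new(5)), 5);
    let _local_only = Rc::new(5); // fine, as long as it stays on one thread
}
```

      This is the difference in kind: `go run -race` reports races it happens to observe during a run, while a missing `Send`/`Sync` impl rejects the program before it exists.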

  • Is calling Rust from Go fast? Last time I checked the interface between C and Go is very slow

    • No, it is not all that fast after the CGo call marshaling (Rust would need to compile to the C ABI). I would essentially call in to Rust to start the code, run it in its own thread pool and then call into Rust again to stop it. The time to start and stop don't really matter as this is code that runs from minutes to hours and is embarrassingly parallel.

    • I have no experience with FFI between C and Go, could anyone shed some light on this? They are both natively compiled languages – why would calls between them be much slower than any old function call?

      3 replies →

  • > I still plan to write about 5% of the project in Rust and call it from Go, if required

    And chances are that it won't be required.

This seems like the right call. When it comes to projects like these, efficiency is almost everything. Speaking about my own experiences, when I hit a snag in productivity in a project like this, it's almost always a death-knell.

I too have a hobby-level interest in Rust, but doing things in Rust is, in my experience, almost always just harder. I mean no slight to the language, but this has universally been my experience.

  • The advantages of correctness, memory safety, and a rich type system are worth something, but I expect it's a lot less when you're up against the value of a whole game design ecosystem with tools, assets, modules, examples, documentation, and ChatGPT right there to tell you how it all fits together.

    Perhaps someday there will be a comparable game engine written in Rust, but it would probably take a major commercial sponsor to make it happen.

    • One of the challenges I never quite got over completely was that I was always fighting Rust fundamentals, which tells me I never fully assimilated into thinking like a Rustacean.

      This was more of a me-problem, but I was constantly having to change my strategy to avoid fighting the borrow-checker, manage references, etc. In any case, it was a productivity sink.

      4 replies →

  • It is a question of tradeoffs. Indie studios should be happy to trade off some performance in exchange for more developer productivity (since performance is usually good enough anyway in an indie game, which usually don't have millions of entities, meanwhile developer productivity is a common failure point).

I love Bevy, but Unity is a weapon when it comes to quickly iterating and making a game. I think the Bevy developers understand that they have a long way to go before they get there. The benefits of Bevy (code-first, Rust, open source) still make me prefer it over Unity, but Unity is ridiculously batteries-included.

Many of the negatives in the post are positives to me.

> Each update brought with it incredible features, but also a substantial amount of API thrash.

This is highly annoying, no doubt, but the API now is just so much better than it used to be. Keeping backwards compatibility is valuable once a product is mature, but like how you need to be able to iterate on your game, game engine developers need to be able to iterate on their engine. I admit that this is a debuff to the experience of using Bevy, but it also means that the API can actually get better (unlike Unity which is filled with historical baggage, like the Text component).

Not a game dev, but thought I'd mess around with Bevy and Rust to learn a bit more about both. I was surprised that my code crashed at runtime due to basics I expected the type system to catch. The fancy ECS system may be great for AAA games, but it breaks the basic connections between data and use that type systems rely on. I felt that Bevy was, unfortunately, the worst of both worlds: slow iteration without safety.

  • I've always liked the concept of ECS, but I agree with this, although I have very limited experience with Bevy. If I were to write a game in Rust, I would most likely not choose ECS and Bevy because of two reasons: 1. Bevy will have lots of breaking changes as pointed in the post, and 2. ECS is almost always not required -- you can make performant games without ECS, and if with your own engine then you retain full control over breaking changes and API design compromises.

    I think all posts I have seen regarding migrating away from writing a game in Rust were using Bevy, which is interesting. I do think Bevy is awesome and great, but it's a complex project.

This is a personal project that had the specific goal of the person's brother, who was not a coder, being able to contribute to the project. On top of that, they felt the need to continuously upgrade to the latest version of the underlying game engine instead of locking to a version.

I have worked as a professional dev at game studios many would recognize. Those studios which used Unity didn't even upgrade Unity versions often unless a specific breaking bug got fixed. Same for those studios which used DirectX. Often a game shipped with a version of the underlying tech that was hard locked to something several years old.

The other points in the article are all valid, but the two factors above held the greatest weight as to why the project needed to switch (and the article says so -- it was an API change in Bevy that was "the straw that broke the camel's back").

Love to have this comparison analysis. Huge LOC difference between Rust and C# (64k -> 17k!!!) though I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust.

  • > I am sure that is mostly access to additional external libraries that did things they wrote by hand in Rust

    This is the biggest reason I push for C#/.NET in "serious business" where concerns like auditing and compliance are non-negotiable aspects of the software engineering process. Virtually all of the batteries are included already.

    For example, which 3rd party vendors we use to build products is something that customers in sectors like banking care deeply about. No one is going to install your SaaS product inside their sacred walled garden if it depends on parties they don't already trust or can't easily vet themselves. Microsoft is a party that virtually everyone can get on board with in these contexts. No one has to jump through a bunch of hoops to explain why the bank should trust System or Microsoft namespaces. Having ~everything you need already included makes it an obvious choice if you are serious about approaching highly sensitive customers.

    • I worked in a regulated space at one time, and my understanding is that this is a big reason they chose .NET over Java. Java relies a lot more on third-party libraries, which makes getting things certified harder.

      Log4shell was a good example of a relative strength of .NET in this area. If a comparable bug had happened in .NET's standard logging tooling, we likely would have seen all of the first-party .NET framework patched fairly shortly after, in a single coordinated release that we could upgrade to with minimal fuss. Meanwhile, at my current job we've still got standing exceptions allowing vulnerable versions of log4j in certain services because they depend on some package that still has a hard dependency on a vulnerable version, which they in turn say they can't fix yet because they're waiting on one of their transitive dependencies to fix it, and so on. We can (and do) run periodic audits to confirm that the vulnerable parts of log4j aren't being used, but being able to put the whole thing in the past within a week or two would be vastly preferable to still having to actively worry about it 5 years later.

      The relative conciseness of C# code that the parent poster mentioned was also a factor. Just shooting from the hip, I'd guess that I can get the same job done in about 2/3 as much code when I'm using C# instead of Java. Assuming that's accurate, that means that with Java we'd have had 50% more code to certify, 50% more code to maintain, 50% more code to re-certify as part of maintenance...

      3 replies →

    • Hugely underrated aspect of .NET. If a CVE surfaces, there's a team at Microsoft that owns the code and is going to patch and ship a fix.

    • In sectors that are critical here in the EU, nobody allows C# and Microsoft due to long-term licensing woes. It's Java and FOSS all the way down. SaaS also is not a thing unless it runs on-prem.

      4 replies →

  • C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like it enhances it in many cases.

        > The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance.
    

    This is also a highly underrated aspect of C# in that its surface area has largely remained stable from v1 (few breaking changes (though there are some valid complaints that surface from this with regards to keyword bloat!)). So the historical volume of extremely well-written documentation is a boon for LLMs. While you may get out-dated patterns (e.g. not using latest language features for terseness), you will not likely get non-working code because of the large and stable set of first party dependencies (whereas outdated 3rd party dependencies in Node often leads to breaking incompatibilities with the latest packages on NPM).

        > It was also a huge boost to his confidence and contributed to a new feeling of momentum. I should point out that Blake had never written C# before.
    

    Often overlooked with C# is its killer feature: productivity. Yes, when you get a "batteries included" framework and those "batteries" are quite good, you can be productive. Having a centralized repository for first party documentation is also a huge boon for productivity. When you have an extremely broad, well-written, well-organized standard library and first party libraries, it's very easy to ramp up productivity versus finding different 3rd party packages to fill gaps. Entity Framework, for example, feels miles better to me than Prisma, TypeORM, Drizzle, or any option on Node.js. Having first party rate limiting libraries OOB for web APIs is great for productivity. Same for having first party OpenAPI schema generators.

    Less time wasted sifting through half-baked solutions.

        > Code size shrank substantially, massively improving maintainability. As far as I can tell, most of this savings was just in the elimination of ECS boilerplate.
    

    C# has three "super powers" to reduce code bloat: really rich runtime reflection, first-class expression trees, and Roslyn source generators that generate code on the fly. Used correctly, these can remove a lot of boilerplate and "templatey" code.

    ---

    I make the case that many teams that outgrow JS/TS on Node.js should look to C# because of its congruence to TS[0] before Go, Java, Kotlin, and certainly not Rust.

    [0] https://typescript-is-like-csharp.chrlschn.dev/

    • > C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way. C#'s terseness has not come at the cost of its legibility and in fact, I feel like it enhances it in many cases.

      C# and .NET are one of the most mature platforms for development of all kinds. It's just that online, it carries some sort of anti-Microsoft stigma...

      But a lot of AA or indie games are written in C# and they do fine. It's not just C++ or Rust in that industry.

      People tend to be influenced by opinions online, but often the real world is completely different. I've been using C# for a decade now and it's one of the most productive languages I have ever used: easy to set up, powerful toolchains... and yes, a lot of closed-source libs in the .NET ecosystem, but the open source community is large too by now.

      6 replies →

    • C# has aged better, but I feel like Java 8 is approaching ANSI C levels of solid tooling. If only Swing weren't so ugly. They should poach Raymond Chen to make Java 8 Remastered; I like his blog posts. There's probably a DOS joke in there. Also, they should just use the JavaFX namespace so I don't have to change my code, and I want the lawyer here to laugh too.

      5 replies →

    • > C# is a very highly underrated (and oft misunderstood) language that has become more terse as it has aged -- in a very good way

      One negative aspect is that if you haven't kept up, that terseness can be a bit of a brick wall. Many of the newer features, especially things where the .Net framework just takes over and solves your problem for you in a "convention over configuration" kinda way, are extremely terse. Modern C# can have a bit of a learning curve.

      C# is an underrated language for sure, and once you get going it is an absolute joy to work in. The .NET platform also gives you all the cross-platform and ease-of-deployment features of languages like Go. Ignoring C#/.NET because it's Microsoft is a bit of a mistake.

    • C# is a great language, but it's been hampered by slow transition towards AOT.

      My understanding (not having used it much, precisely because of this) is that AOT is still quite lacking; not very performant and not so seamless when it comes to cross-platform targeting. Do you know if things have gotten better recently?

      I think that if Microsoft had dropped the old .NET platform (CLR and so on) sooner and really nailed the AOT experience, they might have had a chance at competing with Go and even Rust and C++ for some things, but I suspect that ship has sailed, as it has for languages like D and Nim.

      2 replies →

  • The article says it's 64k -> 17k.

    • That's not unexpected; they went from Bevy, which is more of a game framework than a proper engine.

      I mean, by the same logic you could write about how we went from 1M lines of C# in our mostly custom engine to 10k lines of Unreal C++.

Related: https://news.ycombinator.com/item?id=40172033 - Leaving Rust gamedev after 3 years (982 comments) - 4/26/2024

  • https://loglog.games/blog/leaving-rust-gamedev/#hot-reloadin...

    Hot reloading! Iteration!

    A friend of mine wrote an article 25+ years ago about using C++-based scripting (it compiled to C++). My friend is a super smart engineer, but I don't think he was thinking of those poor scripters who would have to wait on iteration times. Granted, 25 years ago the teams were small, but nowadays the number of scripters you would have on an AAA game is probably a dozen, if not two or three dozen or even more!

    Imagine all of them waiting on compile... Or trying to deal with correctness, etc.

Good for them.

From a dev perspective, I think Rust and Bevy are the right direction, but after reading this account, Bevy probably isn't there yet.

For a long time, Unity games felt sluggish and bloated, but somehow they got that fixed. I played some games lately that run pretty smoothly on decade old hardware.

I love Rust and wanted to use it for gamedev but I just had to admit to myself that it wasn't a good fit. Rust is a very good choice for user space systems level programming (ie. compilers, proxies, databases etc.). For gamedev, all of the explicitness that Rust requires around ownership/borrowing and types tends to just get in the way and not provide a lot of value. Games should be built to be fast, but the programmer should be able to focus almost completely on game logic rather than low-level details.

  • Bevy solves the ownership/borrowing issues entirely with its ECS design though.

    I had two groups of students (complete Rust beginners) ship a basic FPS and a Tower Defense as learning projects using Bevy, and their feedback was that they didn't fight the language at all.

    The problem that remains is that as soon as you go from a toy game to an actual one, you realize that Bevy still has tons of work to do before it can be considered productive.

Unity is still probably the best game engine for smaller games with Unreal being better for AAA.

The problem is you make a deal with the devil. You end up shipping a binary full of phone-home spyware, and if you don't use Unity in the exact way the general license intends, they can and will try to force you into the more expensive industrial license.

However, the ease of actually shipping a game can't be matched.

Godot has a bunch of issues all over the place, a community more intent on self praise than actually building games. It's free and cool though.

I don't really enjoy Godot like I enjoy Unity , but I've been using Unity for over a decade. I might just need to get over it.

GC isn't a big problem for many types of apps/games, and most games don't care about memory safety. Rust's advantages aren't so important in this domain, while its complexity remains. No surprise he prefers C# for this.

  • Disagree on both points. Anyone who has shipped a game in unity has dealt with object pooling, flipping to structs instead of classes, string interpolation, and replacing idiomatic APIs with out parameters of reused collections.

    Similarly, anyone who has shipped a game in unreal will know that memory issues are absolutely rampant during development.

    But, the cure rust presents to solve these for games is worse than the disease it seems. I don’t have a magic bullet either..

    • This is a mostly Unity-specific issue. Unity unfortunately has a potato for a GC. This is not even an exaggeration - it uses Boehm GC. Unity does not support Mono's better GC (SGen). .NET has an even better GC (and JIT) that Unity can't take advantage of because they are built on Mono still.

      Other game engines exist which use C# with .NET or at least Mono's better GC. When using these engines a few allocations won't turn your game into a stuttery mess.

      Just wanted to make it clear that C# is not the issue - just the engine most people use, including the topic of this thread, is the main issue.

    • I'm shocked that Beat Saber is written in C# & Unity. That's probably the most timing sensitive game in the world, and they've somehow pulled it off.

      3 replies →

  • Not just GC -- performance in general is a total non-issue for a 2d tile-based game. You just don't need the low-level control that Rust or C++ gives you.

    • I wouldn't say it's a non-issue. I've played 2D tile-based, pixel art games where the framerate dropped noticeably with too many sprites on screen, even though it felt like a 3DS should have been able to run them, and my computer isn't super low-end, either. You have more leeway, but it's possible to make badly optimized 2D games to the point where performance becomes an issue again.

      1 reply →

> I failed to fairly evaluate my options at the start of the project.

The more projects I do, the more time I find that I dedicate to just planning things up front. Sometimes it's fun to just open a game engine and start playing with it (I too have an unfair bias in this area, but towards Godot [https://godotengine.org/]), but if I ever want to build something to release, I start with a spreadsheet.

  • Do you think you needed to have those times to play around in the engine? Can a beginner possibly even know what to plan for if they don't fully understand the game engine itself? I am older so I know the benefits of planning, but I sometimes find that I need to persuade myself to plan a little less, just to get myself more in tune with the idioms and behaviors of the tool I am working in.

    • I think even if you don't have much experience with tools, you can still plan effectively, especially now with LLMs that can give you an idea of what you're in for.

      But if you're doing something for fun, then you definitely don't need much planning, if any - the project will probably be abandoned halfway through anyways :)

Excellent write-up.

On the topic of rapid prototyping: most successful game engines I'm aware of hit this issue eventually. They solve it by dividing into infrastructure (implemented in your low-level language) and game logic / application logic / scripting (implemented in something far more flexible and, usually, interpreted; I've seen Lua used for this, Python, JavaScript, and I think Unity's C# also fits this category?).

For any engine that would have used C++ instead, I can't think of a good reason to not use Rust, but most games with an engine aren't written in 100% C++.

I love Rust, but I would not try to make a full fledged game with it without patience. This post is not so much a moving away from Rust as much as Bevy is not enjoyable in its current form.

Bevy is in its early stages. I'm sure more Rust game engines will come up and make it easier. That said, Godot was a great experience for me, but it doesn't run well on mobile for what I was making. I enjoy using Flutter Flame now (honestly, different game engines for different genres or preferences), but as Godot continues to get better, I personally would use Godot. Or try Unity or Unreal if I just want to focus on making a game and less on engine quirks and bugs.

Related: just tried to switch to Rust when starting a new project. The main motivation was the combination of fearless concurrency and exhaustive error handling - things that were very painful in the more mature endeavor.

Gave up after 3 days for 3 reasons:

1. Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling and a few astronomical units away from Visual Studio. Extract function barely works.

2. Crates with non-Rust dependencies are nearly impossible to debug as debuggers don't evaluate expressions. So, if you have a Rust wrapper for Ogg reader, you can't look at ogg_file.duration() in the debugger because that requires function evaluation.

3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.

With these roadblocks I would never have gotten the "mature" project to the point, where dealing with hard to debug concurrency issues and funky unforeseen errors became necessary.

  • > 3. In contrast to .NET and NuGet ecosystem, non-Rust dependencies typically don't ship with precompiled binaries, meaning you basically have to have fun getting the right C++ compilers, CMake, sometimes even external SDKs and manually setting up your environment variables to get them to build.

    Depending on your scenario, you may want either one or the other. Shipping pre-compiled binaries carries its own risks, and you are at the mercy of the library author making sure to include the one for your platform. I found wiring up MSBuild to be more painful than the way it is done in Rust with the cc crate; often I would prefer for the package to also build its other-language components for my specific platform, with extra optimization flags I passed in.

    But yes, in .NET it creates sort of an impedance mismatch since all the managed code assemblies you get from your dependencies are portable and debuggable, and if you want to publish an application for a specific new target, with those it just works, be it FreeBSD or WASM. At the same time, when it works - it's nicer than having to build everything from scratch.

    • The big advantage of precompiled is that hundreds of people who downloaded the package don't have to figure out building steps over and over again.

      Risks are real though.

  • > Refactoring and IDE tooling in general are still lightyears away from JetBrains tooling

    How long ago was this and did you try JetBrains RustRover? While not quite as mature as some other JetBrains tools, I've found the latest version really quite good.

    • About 15 hours ago. I was switching between RustRover and VS Code + Rust Analyzer. Not quite mature is an understatement. All said above applies to RustRover.

  • Curious what kind of project that was. Were you making a GUI by any chance?

    • No, the new project that I tried Rust for is a voice API (VAD, Whisper, etc). Got disappointed because, for example, the codec is just a wrapper around libopus. So it doesn't provide safety guarantees, and finding a crate that would build without issues was a challenge.

I worked on games for 20 years and was always interested in alternative languages to C and C++ for the purpose.

Java was my first hope. It was a bit safer than C++ but ultimately too verbose and the GC meant too much memory is wasted. Most games were very sensitive to memory use because consoles always had limited memory to keep costs down.

Next I spent years of side projects on Common Lisp based on Andy Gavin’s success there with Crash Bandicoot and more, showing it was possible to do. However, reports from the company were that it was hard to scale to more people and eventually a rewrite of the engine in C++ came.

I have explored Rust and Bevy. Bevy is bleeding edge and that’s okay, but Rust is not the right language. The focus on safety makes coding slow when you want it to be fast. The borrow checker frowns when you want to mutate things for speed.

In my opinion Zig is the most promising language for triple A game dev. If you are mid level stick to Godot and Unity, but if you want to build a fast, safe game engine then look at Zig first.

That's an excellent article - it's great when people share not only their victories, but mistakes, and what they learned from them.

That said regarding both rapid gameplay mechanic iteration and modding - would that not generally be solved via a scripting language on top of the core engine? Or is Rust + Bevy not supposed to be engine-level development, and actually supposed to solve the gameplay development use-case too? This is very much not my area of expertise, I'm just genuinely curious.

  • It does solve the gameplay development use case too. Bevy encourages using lots of small 'systems' to build out logic. These are functions that can spawn entities or query for entities in the game world and modify them and there's also a way to schedule when these systems should run.

    I don't think Bevy has a built-in way to integrate with other languages like Godot does, it's probably too early in the project's life for that to be on the roadmap.

Are scripting languages not a thing in gamedev anymore?

I feel most of the things mentioned (rapid prototyping, ease of use for new programmers, moddability) would be more easily accomplished by embedding a Lua interpreter in the Rust project.

Glad C# is working out for them though, but if anyone else finds themselves in this situation in Rust, or C, C++, Zig, whatever - embedding Lua might be something else to consider that requires less re-writing.

Congrats on the rewrite!

I think the worst issue was the lack of a ready-made solution. Those 67k lines in Rust contain a good chunk of a game engine.

The second worst issue was that you targeted an unstable framework - I would have focused on a single version and shipped the entire game with it, no matter how good the goodies in the new version.
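Pinning to a single version is a one-line change in Cargo. A sketch (the version number is just an example, not a recommendation):

```toml
# Cargo.toml: the `=` operator locks the dependency to one exact release,
# so a routine `cargo update` never pulls in a breaking engine upgrade.
[dependencies]
bevy = "=0.13.0"
```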

I know it's likely the last thing you want to do, but you might be in a great position to improve Bevy. I understand open sourcing it comes with IP challenges, but it would be good to find a champion with read access within Bevy to parse your code and come up with OSS packages (cleaned up with any specific game logic) based on the countless problems you must have solved in those extra 50k lines.

It sounds to me that it may have been better to limit performance-critical parts to Rust and write the actual game in something like Lua (embedded in Rust)?

That's the approach I've been taking with a side project game for the very reason alone that the other contributors are not system programmers. I.e. a similar situation as the author had with his brother.

Rust was simply not an option -- or I would be the only one writing code. :]

And yeah, as others mentioned: Fyrox over Bevy if you have few (or one) Rust dev(s). It just seems Fyrox is not on the radar of many Rust people even. Maybe because Bevy just gets a lot more (press) coverage/enthusiasm/has more contributors?

  • But why, when you can write performance-critical parts in low-level C# (with structs, stackalloc etc) and game logic in high-level object-oriented C#, and have seamless interop between the two with no effort at all?

>I wanted UI to be easy to build, fast to iterate, and moddable. This was an area where we learned a lot in Rust and again had a good mental model for comparison.

I feel like this harkens to the general principle of being a software developer and not an "<insert-language-here>" developer.

Choose tools that expose you to more patterns and help to further develop your taste. Don't fixate on a particular syntax.

Using Rust in a project felt less like implementing ideas and more like committing to learning the language in depth. Most projects involve messy iteration and frequent failure. Doing that in Rust is painful. Starting a greenfield project in it feels more like a struggle with the language than progress on the actual idea unless you're a Rust enthusiast.

To what extent was the implementation in C# benefiting from the clarified requirements (so the Rust experience could be seen more as prototyping mixed with production)? Was it actually in major part just a major refactor in a different language (admittedly with much more proven elements)?

  • I bet a C# to C# rewrite would also have been quick and led to a cleaner codebase. Especially from C#-as-written-by-beginners...

Nice write up! Nevertheless, these are very specific circumstances:

* They didn't select Rust as the best tool available to create a game, they decided to create a Rust project which happens to be a game

* When the objective and mental model of a solution is clear, the execution is trivial. I bet I could recreate a software which took me 3 months to develop in 3 days, if I just have to retype the solution instead of finding a solution. No matter which language

* They seem to struggle with the most trivial of tasks. Having to call out being able to utilize an A* library (an algorithm worth like 10 lines of code) or struggling with scripting (trivial with proven systems like lua) suggests a quite novice team

That being said, I'm glad for their journey:)

Rust is fine as a low-level systems programming language. It's a huge improvement over C and (because memory safety) a decent improvement over C++. However, most applications don't need a low-level systems programming language, and trying to shoehorn one where it doesn't belong just leads to sadness without commensurate benefit. Rust does not

* automatically make your program fast;

* eliminate memory leaks;

* eliminate deadlocks; or

* enforce your logical invariants for you.

Sometimes people mention that independent of performance and safety, Rust's pattern-matching and its traits system allow them to express logic in a clean way at least partially checked at compile time. And that's true! But other languages also have powerful type systems and expressive syntax, and these other languages don't pay the complexity penalty inherent in combining safety and manual memory management because they use automatic memory management instead --- and for the better, since the vast majority of programs out there don't need manual memory management.

I mean, sure, you can Arc<Box<Whatever>> many of your problems away, but at that point, your global reference counting just becomes a crude form of manual garbage collection. You'd be better off with a finely-tuned garbage collector instead --- one like Unity (via the CLR and Mono) has.

And you're not really giving anything up this way either. If you have some compute kernel that's a bottleneck, thanks to easy FFIs these high-level languages have, you can just write that one bit of code in a lower-level language without bringing systems consideration to your whole program.

  • I completely agree with you—Rust is not well-suited for application development. Application development requires rapid iteration, acceptable performance, and most importantly, a large developer community and a rich software ecosystem.

    Languages like Go, JavaScript, C#, or Java are much better choices for this purpose. Rust is still best suited for scenarios where traditional systems languages excel, such as embedded systems or infrastructure software that needs to run for extended periods.

Aren't there some scripting languages designed around seamless interop with Rust that could be used here for scripting/prototyping? Not that it would fix all the issues in that blog post, but maybe some of them.

> Bevy is young and changes quickly. Each update brought with it incredible features, but also a substantial amount of API thrash

> Bevy is still in the early stages of development. Important features are missing. Documentation is sparse. A new version of Bevy containing breaking changes to the API is released approximately once every 3 months.

I would choose Bevy if and only if I wanted to be heavily involved in the development of Bevy itself.

And never for anything that requires a steady foundation.

Programming language does not matter. Choose the right tool for job and be pragmatic.

API churn is so expensive, largely unnecessary, and rarely value-add. It's an anti-pattern that makes otherwise promising things unusable.

I completely understand, and it's not the first time I've heard of people switching from Bevy to Unity. btw Bevy 0.16 just came out in case you missed the discussion:

I've been working on Glicol (https://glicol.org) for 2 years, starting from embedded devices and switching to crates like Chumsky, and I feel the ecosystem has improved a lot compared to before.

So I still have 100% confidence in Rust.

Man, they seem kinda cracked. He migrated each of the subsystem experiments in about a day, having never used Unity before?

I've ported code between engines, and that makes my productivity feel very... leisurely.

Also, it's endearing that he builds things with his brother including that TF2 map that he linked from years ago.

For anyone considering Rust for gamedev check out the Fyrox engine

https://fyrox.rs/

here's a web demo

https://fyrox.rs/assets/demo/animation/index.html

Rust is not good for video game gameplay logic. The ownership model of Rust cannot represent the vast majority of allocations.

I love Rust. It’s not for shipping video games. No Tiny Glade doesn’t count.

Edit: don’t know why you’re downvoting. I love Rust. I use it at my job and look for ways to use it more. I’ve also shipped a lot of games. And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.

Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.

  • > No Tiny Glade doesn’t count.

    > And if you look at Steam there are simply zero Rust made games in the top 2000. Zero. None nada zilch.

    Well, sure, if you arbitrarily exclude the popular game written in Rust, then of course there are no popular games written in Rust :)

    > And maybe not Switch although I’m less certain.

    I have talked to Nintendo SDK engineers about this and been told Rust is fine. It's not an official part of their toolchain, but if you can make Rust work they don't care.

    • Yeah, in my haste I mixed up my rants. The bane of typing at work in between things.

      Tiny Glade is indeed a Rust game. So there is one! I am not aware of a second. But it's not really a Bevy game; it uses the ECS crate from Bevy.

      Egg on my face. Regrets.

      1 reply →

  • > The ownership model of Rust can not represent the vast majority of allocations.

    What allocations can you not do in Rust?

    • Gameplay code is a big bag of mutable data that lives for relatively unknown amounts of time. This is the antithesis of Rust.

      The Unity GameObject/Component model is pretty good. It's very simple. And clearly very successful. This architecture cannot be represented in Rust. There are a dozen ECS crates, but no one has replicated the world's most popular gameplay system architecture. Because they can't.
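For context, the workaround Rust projects typically reach for instead, whatever its merits, is the "store everything in one Vec, pass indices around" pattern that ECS crates formalize. A minimal sketch with hypothetical names:

```rust
// "References" between entities are plain indices into one Vec, not pointers.
// Stale indices are the price, which is why real ECS crates add generation
// counters to catch them.
struct Enemy {
    hp: i32,
    target: Option<usize>, // index into the `enemies` Vec
}

fn attack_target(enemies: &mut Vec<Enemy>, attacker: usize) {
    if let Some(t) = enemies[attacker].target {
        enemies[t].hp -= 1; // mutate a "referenced" entity with no borrow fights
    }
}

fn main() {
    let mut enemies = vec![
        Enemy { hp: 10, target: None },
        Enemy { hp: 7, target: Some(0) }, // enemy 1 targets enemy 0
    ];
    attack_target(&mut enemies, 1);
    assert_eq!(enemies[0].hp, 9);
}
```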

      3 replies →

  • You could probably write the core in Rust and use some sort of scripting for gameplay logic. Warframe's gameplay logic is written in Lua.

  • The headline is a bit sensational and should rather have been called "Migrating away from Bevy". This isn't (really) comparing C# to Rust (and Lua, but that one is missing), but rather comparing game engines, where the language is secondary. Obviously Unity is the leader here (with Unreal), despite all its flaws.

  • > No Tiny Glade doesn’t count.

    Tiny Glade is also the buggiest Steam game I've ever encountered (bugs from disappearing cursor to not launching at all). Incredibly poor performance as well for a low poly game, even if it has fancy lighting...

  • > Also you’re strictly forbidden from shipping Rust code on PlayStation. So if you have a breakout indie hit on Steam in Rust (which has never happened) you can’t ship it on PS5. And maybe not Switch although I’m less certain.

    What evidence do you have for this statement? It kind of doesn't make any sense on its face. Binaries are binaries, no matter what tools are used to compile them. Sure, you might need to use whatever platform-specific SDK stuff to sign the binary or whatever, but why would Rust in particular be singled out as being forbidden?

    Despite not yet being released publicly, Jai can compile code for PlayStation, Xbox, and Switch platforms (with platform-specific modules not included in the beta release, available upon request provided proof of platform SDK access).

    • Sony mandates you use their toolchain. You don’t get to ship whatever you want on their console. They have a very thorough TRC check you must pass before you get to ship.

      1 reply →

  • Isn't Veloren doing pretty good?

    • No. No one plays Veloren. It’s a toy project for programmers.

      No offense to the project. It’s cool and I’m glad it exists. But if you were to plot the top 2000 games on Steam by time played there are, I believe, precisely zero written in Rust.

  • > Rust can not represent the vast majority of allocations

    Do you mean cyclic types?

    Rust being low-level, nobody prevents one from implementing garbage-collected types, and I've been looking into this myself: https://github.com/Manishearth/rust-gc

    It's "Simple tracing (mark and sweep) garbage collector for Rust", which allows cyclic allocations with simple `Gc<Foo>` syntax. Can't vouch for that implementation, but something like this would be good for many cases.
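For comparison, the standard-library, non-GC way to express a back-reference is Rc plus Weak: the child holds a Weak pointer so the parent-child cycle doesn't keep itself alive, and the price is that the upgrade check happens at runtime. A minimal sketch:

```rust
use std::cell::RefCell;
use std::rc::{Rc, Weak};

struct Parent {
    children: RefCell<Vec<Rc<Child>>>,
}
struct Child {
    parent: RefCell<Weak<Parent>>, // back-reference that doesn't own the parent
}

// Returns whether the back-reference resolved (before, after) dropping the parent.
fn demo() -> (bool, bool) {
    let p = Rc::new(Parent { children: RefCell::new(Vec::new()) });
    let c = Rc::new(Child { parent: RefCell::new(Rc::downgrade(&p)) });
    p.children.borrow_mut().push(c.clone());

    let before = c.parent.borrow().upgrade().is_some(); // parent still reachable
    drop(p); // parent (and its ownership of the child list) goes away
    let after = c.parent.borrow().upgrade().is_some(); // back-ref safely dangles
    (before, after)
}

fn main() {
    assert_eq!(demo(), (true, false));
}
```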

Unity is predatory. I work in a small studio which is part of a larger company (only 5 of us use Unity) and they have suddenly decided to hold our accounts hostage until we upgrade to an Industry license because of the revenue our parent company makes, even though that's completely separate cash flow versus what our studio actually works with. An Industry license is $5000 PER SEAT PER YEAR. Absolute batshit crazy expensive for a single piece of software. We will never be able to afford that. So we are switching over to Unreal. It's really sad what Unity has become.

  • Definitely not cheap, but I assume the developer cost of migrating to Unreal is probably not cheap either. I'm not too familiar with either engine; are they similar enough that it's "cheaper" to migrate? I imagine that sets back release dates as well.

    Such a crappy thing for a company to do.

  • That's BS. Does your team of 5 work for free?

    Imagine you each cost 100k/year to employ by the larger company (since you all apparently don't make money).

    Then imagine you each now cost 105k a year to the parent company.

    It makes no difference.

This can be summarized in a simple way: UI is totally another world.

No language, no matter how good it is, has a chance of matching the most horrendous (the web!) but full-featured UI toolkit.

I bet, 1000%, that it is easier to build an OS, a database engine, etc. than to match Qt, Delphi, Unity, etc.

---

I made a decision that has become the most productive and problem-free approach to making UIs in my 30 years of doing this:

1- Use the de-facto UI toolkit as-is (HTML, SwiftUI, Jetpack Compose). Ignore any tool that promises cross-platform UI (so yes, that leaves HTML, but I mean: I don't try to do HTML in Swift, ok?).

2- Use the same idea as HTML: send plain data with the full fidelity of what you want to render: Label(text=.., size=..).

3- Render it directly from the native UI toolkit.

Yes, this is more or less htmx/tailwindcss (I got the inspiration from them).

This means my logic is all Rust: I pass serializable structs to the UI front-end and render directly from them. Critically, the UI toolkit is nearly devoid of any logic more complex than what you see in a mustache template language. It doesn't do localization, formatting, etc. Only UI composition.

I don't care that I need to code in different ways, with different APIs, different flows, and visually divergent UIs.

IT'S GREAT.

After the pain of boilerplate, doing the next screen/component/whatever is so ridiculously simple that it's like cheating.

So, the problem is not Rust. It's not F#, or Lisp. It's that UI is a kind of beast that is impervious to being improved by language alone.

  • > I bet, 1000%, that it is easier to build an OS, a database engine, etc. than to match Qt, Delphi, Unity, etc.

    I 100% agree. A modern mature UI toolkit is at least equivalent to a modern game engine in difficulty. GitHub is strewn with the corpses of abandoned FOSS UI toolkits that got 80% of the way there only to discover that the other 20% of the problem is actually 20000% of the work.

    The only way you have a chance developing a UI toolkit is to start in full self awareness of just how hard this is going to be. Saying "I am going to develop a modern UI toolkit" is like saying "I am going to develop a complete operating system."

    Even worse: a lot of the work that goes into a good UI toolkit is the kind of work programmers hate: endless fixing of nit-picky edge case bugs, implementation of standards, and catering to user needs that do not overlap with one's own preferences.

  • I disagree. The issue, which the article mentions, is iteration time. They were having issues iterating on gameplay, not UI. My own experiences with game dev and Rust (which are separate experiences, I should add) resonate with what the article is expressing. Iterating systems is common in gamedev and Rust is slow to iterate because its precision ossifies systems. This is GREAT for safety, it's crap for momentum and fluidity

    • This is why game engines embed scripting languages. Who gives a crap if the engine takes 12 hours to compile if 80% of the team are writing Lua in a hot-reload loop.
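A minimal sketch of such a hot-reload loop, using only the standard library: poll the script's mtime and re-read the source when it changes. A real engine would hand the reloaded source to its embedded interpreter (Lua etc.); this sketch just reloads the text.

```rust
use std::{fs, path::Path, time::SystemTime};

// Returns the script's new source when its modification time has advanced
// past `last_seen`, otherwise None. Polling-based for simplicity; a real
// engine might use filesystem notifications instead.
fn reload_if_changed(path: &Path, last_seen: &mut SystemTime) -> Option<String> {
    let modified = fs::metadata(path).ok()?.modified().ok()?;
    if modified > *last_seen {
        *last_seen = modified;
        return fs::read_to_string(path).ok();
    }
    None
}

fn main() {
    let path = std::env::temp_dir().join("gameplay_script.lua");
    fs::write(&path, "print('hello')").unwrap();
    let mut last = SystemTime::UNIX_EPOCH;
    assert!(reload_if_changed(&path, &mut last).is_some()); // first poll picks it up
    assert!(reload_if_changed(&path, &mut last).is_none()); // unchanged: no reload
}
```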

      2 replies →

  • Would you happen to have (sample) or open-source Rust code out there demonstrating this approach? I'm very curious to learn more.

    For example; if you have a progressbar that needs to be updated continuously, you do what? Upon every `tick` of your Rust engine you send a new struct with `ProgressBar(percentage=x)`? Or do the structs have unique identifiers so that the UI code can just update that one element and its properties instead of re-rendering the entire screen?
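Not the parent poster, but one plausible answer to the progress-bar question is exactly the second option: give each node a stable id and only forward nodes whose state changed since the last tick. A sketch with hypothetical types and names, not the parent's actual code:

```rust
// Hypothetical message type: the Rust core owns all logic and each tick emits
// only the nodes whose state changed, keyed by a stable id so the native
// toolkit updates one widget instead of re-rendering the whole screen.
#[derive(Clone, PartialEq, Debug)]
struct ProgressBar {
    id: u32,         // stable identity for diffing on the UI side
    percentage: f32, // what to render this tick
}

// Returns the node only if it differs from what the UI last saw.
fn changed(prev: &ProgressBar, next: &ProgressBar) -> Option<ProgressBar> {
    (prev != next).then(|| next.clone())
}

fn main() {
    let prev = ProgressBar { id: 7, percentage: 40.0 };
    let next = ProgressBar { id: 7, percentage: 45.0 };
    assert!(changed(&prev, &next).is_some()); // ship the delta
    assert!(changed(&next, &next).is_none()); // unchanged: send nothing
}
```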

Throwing someone who is new to coding straight into rust AND game dev is pretty ambitious

But yeah my first thought here was Lua too like others said

Professional high-performance C++ game engine dev here. At a glance, their game looks great. But, to be frank, it also looks like it could have been made in the DOS era with sufficient effort.

Going hard with Rust ECS was not the appropriate choice here. Even a 1000x speed hit would be preferable if it bought development speed. C# and Unity is a much smarter path for this particular game.

But, that’s not a knock on Rust. It’s just “Right tool for the job.”

For my going-on-5-year side game project, this is why I can only write in vanilla tools (Java, TypeScript) and with small libraries that are easy to replace. I would lose all motivation if I had to refactor my game and update the engine every time I came back to it. But also, I don't have the pressure of ever finishing the game...

Migrating away from Bevy is the main thrust.

Rust is a niche language, there is no evidence it is going to do well in the game space.

Unity and C# sound like a much better business choice for this. Choosing a system/language....

> My love of Rust and Bevy meant that I would be willing to bear some pain

....that is not a good business case.

Maybe one day there will be a Rust game engine that can compete with Unity, probably already are, in niches.

The best language for game logic is Lua; switching to C# probably isn't going to help any... IMHO.

  • What makes Lua the best for game logic? You don't even have types to help you out with Lua.

    • Yeah, I actually recently tried making a game in Lua using LOVE2D, and then making the same one in C with Raylib, and I didn't feel like Lua itself gave me all that much. I don't think Lua is best for game logic so much as it's the easiest language to embed in a game written in C or C++. That said, maybe some of its unique features, like its coroutines, or stuff relating to metatables, could be useful in defining game logic. I was writing very boring, procedural, occasionally somewhat object-oriented code either way.

      1 reply →

    • Stuff that hooked me:

      you integrate it tightly with the engine so it only does game logic, making files small and very quick and easy to read.

      platform independent, no compiling, so can modify it in place on a release build of the game.

      the "everything is a table" approach is very easy to grasp mentally, which means even very inexperienced coders can get up and running quickly.

      great exception handling, which makes most release time bugs very easy to diagnose and fix.

      All of which means you spend far more time building game logic and much, much less time fighting the programming language.

      Heres my example of a 744 flight data recorder (the rest of the 744 logic is in the containing folders)

      https://github.com/mSparks43/747-400/blob/master/plugins/xtl...

      All asynchronously multithreaded, 100% OS independent.

      2 replies →

I realize there were bigger problems, but this makes me very sad:

  Learning - Over the past year my workflow has changed immensely, and I regularly use AI to learn new technologies, discuss methods and techniques, review code, etc. The maturity and vast amount of stable historical data for C# and the Unity API mean that tools like Gemini consistently provide highly relevant guidance. While Bevy and Rust evolve rapidly - which is exciting and motivating - the pace means AI knowledge lags behind, reducing the efficiency gains I have come to expect from AI assisted development. This could change with the introduction of more modern tool-enabled models, but I found it to be a distraction and an unexpected additional cost.

In 2023 I wondered if LLM code generation would throttle progress in programming language design. I was particularly thinking about Idris and other dependently-typed languages which can do deterministically correct code generation. But it applies to any form of language innovation: why spend time learning a new programming language that 100% reliably abstracts boilerplate away, when an LLM can 95% reliably slop the boilerplate? Some people (me) will say that this is unacceptably lazy and programmers should spend time reading things, the other will point to the expected value of dev costs or whatever. Very depressing.

Using poor-quality AI suggestions as a reason not to use Rust is a super weird argument. Something is very wrong with such an idea. What's next, avoiding everything where AI performs poorly?

Scripting being flexible is a fair point, but that's not an argument against Rust either. Rather, it's an argument for more separation between the scripting machinery and the core engine.

For example Godot allows using Rust for game logic if you don't want to use GDScript, and it's not really messing up the design of their core engine. It's just more work to allow such flexibility of course.

The rest of the arguments are more in the familiarity / learning curve group, so nothing new in that sense (Rust is not the easiest language).

  • Yes, a lot of people are reasonably going to decide to work in environments that are more legible to LLMs. Why would that surprise you?

    The rest of your comment boils down to "skills issue". I mean, OK. But you can say that about any programming environment, including writing in raw assembly.

  • It could be a weird argument, but as a Rust newcomer, I have to say it's really something that jumps out at you. LLMs are practically useless for anything non-basic, and Rust contains a lot of non-basic things.

    • So, what are the chances that the pendulum swings to lower-level programming via LLM-generated C/C++ if LLM-generated Rust doesn't emerge? Note that this question is a context switch from gaming to something larger. For gaming, it could easily be that the engine and culture around it (frequent regressions, etc) are the bigger problems than the language.

      1 reply →

  • Developers often pick languages and libraries based on the strength of their developer tools. Having great dev tools was a major reason Ruby on Rails took off, for example.

    Why exclude AI dev tools from this decision making? If you don’t find such tools useful, then great, don’t use them. But not everybody feels the same way.

  • It's a weird idea now, but it won't be weird soon. As devs and organizations further buy into AI-first coding, anything not well-served by AI will be treated as second-class. Another thread here brought up the risk that AI will limit innovation by not being well-trained on new things.

    • I agree that such a trend exists, but it's extremely unhealthy, and developers, of all people, should have a clue about how bad it is.

I wonder why Godot wasn't picked. Did I miss the points in the article?

  • > We wrote extensive pros and cons, emphasizing how each option fared by the criteria above: Collaboration, Abstraction, Migration, Learning, and Modding.

    Would you really expect Godot to win out over Unity given those priorities? Godot is pretty awesome these days, but it's still going to be behind for those priorities vs. Unity or Unreal.

  • I also would have liked to have seen the pro/con lists for each of the potential choices.

    I've been toying with the idea of making a 2d game that I've had on my mind for awhile, but have no game development experience, and am having trouble deciding where to start (obviously wanting to avoid the author's predicament of choosing something and having to switch down the line).

    • The key is, you gotta be pretty cold in the analysis. It's probably more important to avoid what you hate than to lean in too hard to what you love, unless your terminal goal is to work in $FAVE_LANG. Too many people claim they want to make a game, but their actions show that their terminal goal was actually to work in their favorite language. I don't care if your goal is just to work in your favorite language, I just think you need to be brutally honest with yourself on that front.

      Probably the best thing in your case is: look at the top three engines you could consider, spend maybe four hours gathering what look like pros and cons, then just pick one and go. Don't overestimate your attachment to your first choice. You'll learn more just by finishing a tutorial for any of them than you can possibly learn with analysis in advance.

      4 replies →

  • I wondered the same - the separate C# build might be a bit of a hassle still though.

    But they also could have combined Rust parts and C# parts if they needed to keep some of what they had.

  • One of the complaints in the article was using a framework early in its dev cycle. I imagine they were just picking what is safe at that point and didn't want to get burned again.

I signed up for the mailing list. The game looks interesting, I hope there is a Mac version in the future.

Wow, every Rust topic has an uncountable number of comments; it's indeed a successful language.

Expect many more commits like #12. ;)

  • Awww that's not fair.

    C# actually has fairly good null-checking now. Older projects would have to migrate some code to take advantage of it, but new projects are pretty much using it by default.

    I'm not sure what the situation is with Unity though - aren't they usually a few versions behind the latest?

The "Learning" point drives home a concern my brother-in-law and I were talking about recently. As LLMs become more entrenched as a tool, they may inevitably become the crutch that actually holds back innovation. Individuals and teams may be hesitant to explore or adopt bleeding edge technologies specifically because LLMs don't know about them or don't know enough about them yet.

  • I see this quite a bit with Rust. I honestly cringe when people get up in arms about someone taking their project out of the Rust community.

    The same can be said of books as of programming languages:

    "Not every ___ deserves to be read/used"

    If the documentation or learning curve is so high and/or convoluted that it's disparaging to newcomers then perhaps it's just not a language that's fit for widespread adoption. That's actually fine.

    "Thanks for your work on the language, but this one just isn't for me" "Thanks for writing that awfully long book, but this one just isn't for me"

    There's no harm in saying either of those statements. You shouldn't be disparaged for saying that rust just didn't work out for your case. More power to the author.

    • Rust attracts a religious fervour that you'll almost never see associated with any other language. That's why posts like this make the front page and receive over 200 comments.

      If you switched from Java to C# or vice versa, nobody would care.

      1 reply →

  • How is that different from choosing not to adopt a technology because it’s not widely used therefore not widely documented? It’s the timeless mantra of “use boring tech” that seems to resurface every once in a while. It’s all about the goal: do you want to build a viable product, quickly, or do you want to learn and contribute to a specific tech stack? That’s the trade off most of the time.

    • It's a lot worse. A high quality project can have great documentation and guides that make it easy to use for a human, but an LLM won't until there's a lot of code and documents out there using it.

      And if it's not already popular, that won't happen.

      3 replies →

  • I was actually meaning to post this as an Ask HN question, but never found the time to word it well. Basically, what happens to new frameworks and technologies in the age of widespread LLM-assisted coding? Will users be reluctant to adopt bleeding-edge tools because the LLMs can't assist as well? Will companies behind the big frameworks put more resources toward documenting them in a way that makes it easy for LLMs to learn from?

    • Actually, here in my corner of the EU, only the prominent, big-tech-backed, well-documented, and battle-tested tools are the most marketable skills. So: React, 50 new jobs; but you worked with Svelte/SolidJS, what is that? Java/PHP/Python/Ruby/JS, adequate jobs. Go/Rust/Zig/Crystal/Nim, what are these? Go has gained some popularity in recent years, and I can spot Rust once in a blue moon. Anything requiring near-metal work is always C/C++.

      Availability of documentation and tooling, widespread adoption, and access to people already trained on someone else's dime are what's deemed safe for a hiring decision. Sometimes narrow tech is spotted in the wild, but mostly because some senior/staff engineer wanted to experiment with something that then became part of production because management saw no issue. That will sometimes open doors for practitioners of those stacks, but the probability is akin to getting hit by a lightning strike.

      2 replies →

    • Another way to look at it: working bleeding edge will become a competitive advantage and a signal to how competent the team is. „Do they consume it” vs „do they own it”.

      2 replies →

    • This already happens. Is your new framework popular on GitHub and on Stack Overflow is a metric people use. LLMs are currently mostly capable of just adapting documentation, blog posts, and answers on SO. So they add a thin veneer on top of those resources.

    • This is already happening.

      On one hand, yes, when it comes to picking tools for new projects, LLM awareness of them is now a consideration in large companies.

      And at the same time, those same companies are willing to spend time and effort to ensure that their own tooling is well-represented in the training sets for SOTA models. To the point where they work directly with the corresponding teams at OpenAI etc.

      And yes, it does mean that the barrier to entry for new competitors is that much higher, especially when they don't have the resources to do the same.

    • I expect it will wind up like search engines where you either submit urls for indexing/inclusion or wait for a crawl to pick your information up.

      Until the tech catches up it will have a stifling effect on progress toward and adoption of new things (which imo is pretty common of new/immature tech, eg how culture has more generally kind of stagnated since the early 2000s)

    • Hopefully, tools can adapt to integrate documentation better. I've already run into this with GitHub Copilot, trying to use Svelte 5 with it is a battle despite it being released most of a year ago.

    • There’s another future where reasoning models get better with larger context windows, and you can throw a new programming language or framework at it and it will do a pretty good job.

  • We already have quite a lot of that effect with tooling. A language can't really get much traction until it's got the build, packaging, and IDE support we expect; however productive the language is, it loses out in practice because it's hard to work with and doesn't just fit into our CI/CD systems.

  • What innovation? Languages with curly braces versus BEGIN/END? There is no innovation going on in computer languages. Rust is C with better ergonomics and rigorous memory management. This was made possible with better processors which made more elaborate compilers practical. It all gets compiled by LLVM down to the same object code. I think we are moving to an era of "read-only" languages. Languages that have horrible writing ergonomics yet are easy to understand when read. Humans won't write code. They will review code.

  • Doesn't this mean that new tech will have to demonstrate material advantages, such that outweigh the LLM inertia, in order to be adopted? This sounds good to me; so much framework churn seems to be code fashion rather than function. Now if someone releases a new framework, they need to demonstrate real value first. People that are smart enough to read the docs and absorb the material of a new, better, framework will now have a competitive advantage; this all seems good.

  • I think it's a good point and I experienced the same thing when playing with SDL3 the other day. So even established languages with new API's can be problematic.

    However, I had a different takeaway when playing with Rust+AI. Having a language that has strict compile-time checks gave me more confidence in the code the AI was producing.

    I did see Cursor get in an infinite loop where it couldn't solve a borrow checker problem and it eventually asked me for help. I prefer that to burying a bug.

    • I had the same issue a few months ago when I was trying to ask LLMs about Box2D 3.0. I kept getting answers that were either for Box2D 2.x, or some horrific mashup of 2.x and 3.0.

      Now Box2D 3.1 has been released and there's zero chance any of the LLMs are going to emit any useful answers that integrate the newly introduced features and changes.

      1 reply →

  • It's not even innovation. I had a new Laravel project that I was chopping around to play with some new library, and I couldn't get the dumbest stuff to work. Of course I went back to read the docs and, ah, Laravel 19 or whatever is using config/bootstrap.php again, and no matter what ChatGPT or I had figured, we couldn't understand why it wasn't working.

    Unfortunately, with a lot of libraries and services, I don't think ChatGPT understands the differences between versions, or it would be hard for it to. At least I have found that when writing scriptlets for RT, PHP tooling, etc. The web world moves fast enough (and RT moves hella slow) that it confuses libraries and interfaces across versions.

    It'd really need wider project context, where it can go look at how those includes, or functions, or whatever actually work, instead of relying on "built-in" knowledge:

    "Assume you know nothing; go look at this tool, API endpoint, or whatever, read the code, and tell me how to use it."

  • I have that worry as well, but it may not be as bad as I feared. I am currently developing a Python serialization/deserialization library based on advanced multiple dispatch, so it is fairly different from how existing libraries work. Nonetheless, if I ask LLMs (using Cursor) to write new functionality or plugins within my framework, they are surprisingly adept at it, even with limited guidance. I expect it'll only get better in the next few years. Perhaps a set of AI directives and examples for new technologies would suffice.

    In any case, there has always been a strong bias towards established technologies that have a lot of available help online. LLMs will remain better at using them, but as long as they are not completely useless on new technologies, they will also help enthusiasts and early adopters work with them and fill in the gaps.

  • I don't think we will have a lack of people who explore and know, beyond others, how to do things.

    LLMs will make people productive. But they will at the same time elevate those with real skill and a passion for creating good software. In the meantime there will be some market confusion, and some mediocre engineers might find themselves in demand like top-end engineers. But over time, companies and markets will realize this, and top dollar will go to those select engineers who know how to do things with and without LLMs.

    Lots of people are afraid of LLMs and think it is the end of the software engineer. It is and it is not. It's the end of the "CLI engineer" or the "front-end engineer" and all those specializations that were an attempt to require less skill and pay less. But the systems engineers who know how computers work, who can take all week describing what happens when you press enter on a keyboard at google.com, will only be pushed into higher demand. This is because the single-skill "engineer" won't really be a thing.

    tldr; LLMs won't kill software engineering; it's a reset. It will cull those who chose such a path only because it paid well.

  • I've noticed this effect even with well established tech but just in degrees of popularity. I've recently been working on a Swift/SwiftUI project and the experience with LLM's compared to something like web dev stuff with React, etc is noticeably different/worse which I mostly attribute to there probably being at least 20 times less Swift specific content on the web in comparison.

    • There are a ton of Swift/SwiftUI tutorials out there for every new technology.

      The problem is, they’re all blogspam rehashes of the same few WWDC talks. So they all have the same blindspots and limitations, usually very surface level.

  • Is that different from what is happening already? A lot of people won't adopt a language/technology unless it has a huge repository of answers on StackOverflow, mature tooling, and a decent hiring pool.

    I'm not saying you're definitely wrong, but if you think that LLMs are going to bring qualitative change rather than just another thing to consider, then I'm interested in why.

  • New languages / packages / frameworks may need to collaborate with LLM providers to provide good training material. LLM-able training material may be the next important documentation thing.

    Another potentially interesting avenue of research would be to explore allowing LLMs to use "self-play" to explore new things.

      • How can it compete with the vast number of codebases already trained on from GitHub? For LLMs, more data equals better results, so people will naturally be driven toward better completions with established frameworks and languages. It would be hard to produce organic data on all the ways your technology can be (ab)used.

      2 replies →

  • It’s the same now. I’ve spent arguably too much effort trying to avoid Python, and it has cost me a whole lot of time. You keep running into bugs and have to implement much more yourself if you go off the beaten path (see also [1]). I don’t regret it since I learned a lot, but it’s definitely not always the easiest path. To this day I wonder whether maybe I should have taken the simple route.

    [1]: https://huijzer.xyz/posts/killer-domain/

  • A showerthought I had recently was that newly-written software may have a perverse incentive to be intentionally buggy such that there will be more public complaints/solutions for said software, which gives LLMs more training data to work with.

The article title is half-true. It wasn't so much that they migrated away from Rust as that they migrated away from Bevy, which is an alpha-quality game engine.

I wouldn't have read the article if it'd been labeled that, so kudos to the blog writer, I guess.

  • What are some non-alpha quality Rust game engines? If the answer is "there are none", then I'd say the title is accurate.

  • The problem with Rust is that almost everything is still at an alpha stage. The vast majority of crates are at version 0.x and are eventually abandoned, replaced, or subject to constant breaking changes.

    While the language itself is great and stable, the ecosystem is not, and reverting to more conservative options is often the most reasonable choice, especially for long-term projects.

    • I really don’t think Rust is a good match for game dev, both because the borrow checker forces you to use handles instead of pointers and because compile times are just not great.

      But outside of games the situation looks very different. “Almost everything” is just not at all accurate. There are tons of very stable and productive ecosystems in Rust.

      10 replies →

    • I wouldn't say 'almost everything', but there are some areas, UI and game engines among them, where building a mature solution takes a huge amount of time and effort, and big gaps remain.

    • > The problem with Rust is that almost everything is still at an alpha stage.

      Replace Rust with Bevy and language with framework, you might have a point. Bevy is still in alpha, it's lacking plenty of things, mainly UI and an easy way to have mods.

      As for almost everything is at an alpha stage, yeah. Welcome to OSS + SemVer. Moving to 1.x makes a critical statement. It's ready for wider use, and now we take backwards compatibility seriously.

      But hurray! Commercial interest won again, and now you have to change engines again, once the Unity Overlords decide to go full Shittification on your poorly paying ass.

      6 replies →

    • I have to totally disagree here.

      I don't even look at crate versions, but the stuff works, and works very well. The resulting code is stable and robust, and the crates save an inordinate amount of development time. It's like Lego for high-end, high-performance code.

      With Rust and the crates you can build actual, useful stuff very quickly. Hit a bug in a crate, or find missing functionality? Contribute.

      Software is something that is almost always a work in progress and almost never perfect, and done. It's something you live with. Try any of this in C or C++.

      5 replies →

    • >”reverting to more conservative options”

      From what I’ve heard about the Rust community, you may have made an unintentionally witty pun.

  • They mentioned ABI and the ability to create mods, which are Rust issues rather than Bevy ones.

    Here's a thought experiment: Would Minecraft have been as popular if it had been written in Rust instead of Java?

    • I mean, we already have a sort-of answer: the "Bedrock Edition" of Minecraft is written in C++, and it is indeed less popular on PC (on console it's the only option, so overall it might win out) and lacks any real modding scene.

      3 replies →

It’s incredible how many projects and articles have been written around ECS, with very little to show for it.

Quake 1-3 uses a single array of structs, with sometimes unused properties. Is your game more complex than quake 3?

The “ECS” upgrade to that is having an array for each component type but just letting there be gaps:

    transform[eid].position += …
    physics[eid].velocity = …
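
A minimal Rust sketch of that "one array per component, with gaps" layout (the `Transform`/`Physics` types and `World::step` are illustrative, not any engine's API; an entity id is just an index, and a gap is `None`):

```rust
// Each component type lives in its own Vec, indexed by entity id.
// A missing component is simply None; no archetype machinery needed.

#[derive(Debug, Clone, Copy, PartialEq)]
struct Transform {
    position: [f32; 3],
}

#[derive(Debug, Clone, Copy, PartialEq)]
struct Physics {
    velocity: [f32; 3],
}

struct World {
    transforms: Vec<Option<Transform>>,
    physics: Vec<Option<Physics>>,
}

impl World {
    /// Integrate positions for every entity that has both components;
    /// entities with a gap in either array are skipped.
    fn step(&mut self, dt: f32) {
        for eid in 0..self.transforms.len() {
            if let (Some(t), Some(p)) = (&mut self.transforms[eid], &self.physics[eid]) {
                for i in 0..3 {
                    t.position[i] += p.velocity[i] * dt;
                }
            }
        }
    }
}
```

The tradeoff is the familiar one with index-based references: a stale `eid` still compiles and silently reads the wrong slot, so you trade some of the compiler's help for a very simple memory layout.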

This comment might not be liked by the usual commenters in these threads, but I think it is worth stressing:

First: I have experience with Bevy and other game engine frameworks; including Unreal. And I consider myself a seasoned Rust, C etc developer.

I could sympathize with what was stated by the author.

I think the issue here is (mainly) Bevy. It is just not even close to the standard yet (if ever). It is hard for any generic game engine to compete with Unity/Godot, never mind the de facto standard, Unreal.

But if you are a C# developer already using Unity, rather than a C++ developer in Unreal, moving to Bevy, a bloated framework with missing features, makes little sense. [There is also the minor issue that, if you are a C# developer, you honestly don't care about low-level code or about not having a garbage collector.]

Now if you are a C++ developer using Unreal, the only reason to move to Rust (which I would argue for, for the usual reasons) is if Unreal supports Rust. Otherwise, there is nothing that even compares to Unreal (custom-made engines aside).

  • The way I read about Bevy in online discussions obfuscates this. Someone who is new to game development could be confused into thinking Bevy is a fair competitor to the other engines you mentioned, and equate Bevy with Rust, or Bevy with Rust in game dev. I think stomping this out is critical to expectation management, and perhaps to Rust's future in game dev.

  • As someone who has used Bevy in the past, that was my reading as well. It is an incredible tool, but some of the things mentioned in the article like the gnarly function signature and constant migrations are known issues that stop a lot of people from using it. That's not even to mention the strict ECS requirement if your game doesn't work well around it. Here is a good reddit thread I remember reading about some more difficulties other people had with Bevy:

    https://old.reddit.com/r/rust_gamedev/comments/13wteyb/is_be...

    I wonder how something simpler in the rust world like macroquad[0] would have worked out for them (superpowers from Unity's maturity aside).

    [0] https://macroquad.rs/

  • >if you are a C# developer, honestly you don't care about low level code, or not having a garbage collector.

    You can go low level in C#**, just like Rust can avoid the borrow checker. It's just not a good tradeoff for most code in most games.

    ** value types/unsafe/pointers/stackalloc etc.

    • Structs in C# or F# are not low-level per se; they are simply a choice, and one used frequently in gamedev. So is stackalloc, because using it is just `var things = (stackalloc Thing[5])`, where the type of `things` is `Span<Thing>`. The keyword is a bit niche, but it's very normal to see it in code that cares about avoiding allocations.

      Note that going more hands-on with these is not the same as violating memory safety - C# even has ref and byreflike struct lifetime analysis specifically to ensure this is not an issue (https://em-tg.github.io/csborrow/).

      2 replies →

  • Imo the place for Rust in game dev isn't in games at all, but in base libraries and tools. Writing your procedural generation library in Rust as an isolated package you can call independently, or similar, is where it's useful.

    • I agree. [Unless Rust is fully adopted by a serious game engine, of course.] Rust's "superpower" is substituting for critical C++ code in place, with the goal of ensuring correctness and soundness, and increasing development velocity as a result.

Sounds like "Migrating away from Bevy towards Unity"; the Rust to C# transition is mostly a technical consequence.

Bevy: unstable, constantly regressing, with weird APIs here and there, in flux, so LLMs can't handle it well.

Unity: rock-solid, stable, well-known, featureful, LLMs know it well. You ought to choose it if you want to build the game, not hack on the engine, be its internal language C#, Haskell, or PHP. The language is downstream from the need to ship.

Don’t see any content on that article for some reason (from iPhone)

  • I experienced the same; I had to disable my adblocker to view it. The content seems to be inside a tag `<article class="social-sharing">`, but I am unsure whether that is what triggered my adblocker.

  • Adblocking seems to cause issues with the site. Disabling uBlock Origin worked for me as did readability mode in Firefox.

Honey, a new incantation to summon Cthulhu just dropped.

    pub fn randomize_paperdoll<C: Component>(
        mut commands: Commands,
        views: Query<(Entity, &Id<Spine>, &Id<Paperdoll>, &View<C>), Added<SkeletonController>>,
        models: Query<&Model<C>, Without<Paperdoll>>,
        attachment_assets: Res<AttachmentAssets>,
        spine_manifest: Res<SpineManifest>,
        slot_manifest: Res<SlotManifest>,
    ) {