Comment by Animats

9 months ago

That's a good article. He's right about many things.

I've been writing a metaverse client in Rust for several years now. Works with Second Life and Open Simulator servers. Here's some video.[1] It's about 45,000 lines of safe Rust.

Notes:

* There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

* He's right about the pain of refactoring and the difficulties of interconnecting different parts of the program. It's quite common for some change to require extensive plumbing work. If the client that talks to the servers needs to talk to the 2D GUI, it has to queue an event (a minimal sketch of that pattern follows this list).

* The rendering situation is almost adequate, but the stack isn't finished and reliable yet. The 2D GUI systems are weak and require too much code per dialog box.

* I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it.

* I have less trouble with compile times than he does, because the metaverse client has no built-in "gameplay". A metaverse client is more like a 3D web browser than a game. All the objects and their behaviors come from the server. I can edit my part of the world from inside the live world. If the color or behavior or model of something needs to be changed, that's not something that requires a client recompile.
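
A minimal sketch of the event-queue pattern mentioned in the second bullet above. This is not Animats' actual client code; the event type and names are invented for illustration, and it uses only std channels:

    use std::sync::mpsc;
    use std::thread;

    // Hypothetical event type, for illustration only.
    enum UiEvent {
        ChatMessage { from: String, text: String },
        TeleportOffer { from: String },
    }

    fn main() {
        let (to_gui, gui_events) = mpsc::channel::<UiEvent>();

        // Network side: runs on its own thread and only ever sends events.
        let net = thread::spawn(move || {
            to_gui
                .send(UiEvent::ChatMessage { from: "server".into(), text: "hello".into() })
                .unwrap();
            to_gui
                .send(UiEvent::TeleportOffer { from: "friend".into() })
                .unwrap();
        });

        // GUI side: drain pending events (once per frame in a real client).
        while let Ok(event) = gui_events.recv() {
            match event {
                UiEvent::ChatMessage { from, text } => println!("[{from}] {text}"),
                UiEvent::TeleportOffer { from } => println!("teleport offer from {from}"),
            }
        }

        net.join().unwrap();
    }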

The people using C# and Unity on the same problem are making much faster progress.

[1] https://video.hardlimit.com/w/7usCE3v2RrWK6nuoSr4NHJ

> I'd expected some AAA title to be written in Rust by now.

I'm disinclined to believe that any AAA game will be written in Rust (one is free to insert "because Rust's gamedev ecosystem is immature" or "because AAA game development is increasingly conservative and risk-averse" at their discretion), yet I'm curious what led you to believe this. C++ became available in 1985, and didn't become popular for gamedev until the turn of the millennium, in the wake of Quake 3 (buoyed by the new features of C++98).

  • Lamothe's Black Art book came out in '95. Abrash's Black Book came out in '97.

    Borland C++ was pretty common and popular in 93 and we even had some not-so-great C++ compilers on Amiga in 92/93 that had some use in gamedev.

    SimCity 2000 was written in C++, way back in '93 (although they started with Cfront)

    An absolute fuckton of shareware games I was playing in the 90s were built with Turbo C++.

    • Kind of true; however, they had endless amounts of inline Assembly, as shown in the Black Book as well.

      I know of at least one MS-DOS game, published in the Portuguese Spooler magazine, that was using Turbo C++ basically as a macro assembler.

      One of the PlayStation's selling points for developers was being the first home console with a C SDK, while SEGA and Nintendo were still doing Assembly; C++ support only came later, with the PlayStation 2.

      While I agree C++, BASIC, Turbo Pascal, and AMOS were being used a lot, especially in the demoscene, they were our Unity from the point of view of successful game studios.

    • I also remember from the videogame magazines I was reading back in the early 90s that another C++ compiler that was a favourite among devs was Watcom C++, released in '88.

      4 replies →

  • I really hope that C++ evolves with gamedev and they become more and more symbiotic.

    Maybe adoption of Rust by the gamedev community isn't the best thing to wish on the language. Maybe it is better to let another crowd steer the evolution of Rust, letting systems programming and gamedev drift apart.

    • I think I don't know a single gamedev who's fond of "modern C++" or even the C++ stdlib in general (and stdlib changes are what most of "modern C++" is about). The last good version was basically C++11. In general the C++ committee seems to be largely disconnected from reality (especially now that Google seems to be doing its own C++ successor, but even before, Google's requirements are entirely different from gamedev requirements).

      16 replies →

  • I sometimes wonder if the problem with rust is that we have not yet had a major set of projects which drive solutions to common dev problems.

    Go had google driving adoption, which in turn drove open source efforts. The language had to remain grounded to not interfere with the doing of building back-end services.

    Rust had mozilla/servo which was ultimately unsuccessful. While there are more than a few companies using Rust for small projects with tough performance guarantees - I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.

    • Microsoft is rewriting quite a bit of their C# in Rust for performance reasons, especially within their business-line products. Rust has also become rather massive in the underlying tech of the telecommunications infrastructure in several countries.

      So I'm not sure that your take is really so on point. Especially as far as comparing it with Go goes (heehee), at least not in terms of 3rd party libraries, where most of the Go ecosystem seems to be either maintained by one or two people or abandoned as those two people got new jobs. I think Go is cool, by the way, but there is a massive difference in the maturity of the sort of libraries we looked into using during our PoCs.

      Anyway. A lot of Rust adoption is a little quiet, and well, rather boring. So maybe that’s why you don’t hear too much about it.

      6 replies →

    • I really think the problem of Rust is the borrow checker. Seriously. It is good but it is overkill. You have to plan everything around it, and it discourages a lot of patterns or makes them really difficult to refactor.

      I would encourage people to understand Hylo's object model and mutable value semantics. I think something like that is far better, more ergonomic, and very well-performing (in theory at least).
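
      As a tiny, contrived illustration of the kind of pattern the borrow checker pushes back on (not an example from the article): holding a reference into a collection while also mutating that collection is fine in most languages, but has to be restructured in Rust.

          struct World {
              entities: Vec<String>,
          }

          impl World {
              fn spawn_near(&mut self, index: usize) {
                  // This does NOT compile: `first` borrows self.entities immutably
                  // while push() needs a mutable borrow of the same Vec.
                  //
                  //     let first = &self.entities[index];
                  //     self.entities.push(format!("spawned near {first}"));
                  //
                  // The usual workaround is to copy the data out first, ending the borrow:
                  let name = self.entities[index].clone();
                  self.entities.push(format!("spawned near {name}"));
              }
          }

          fn main() {
              let mut world = World { entities: vec!["player".to_string()] };
              world.spawn_near(0);
              println!("{:?}", world.entities);
          }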

      12 replies →

    • > Go had google driving adoption

      This is commonly said but I think it's only correct in the sense that Google is famous and Google engineers started it.

      Google never drove adoption; it happened organically.

    • > Rust had mozilla/servo which was ultimately unsuccessful.

      There's lots of Rust code in Firefox!

      > I haven't seen the “we manage 1-10 MM sloc of complex code using rust” type projects.

      Meta has a lot of Rust internally.

      The problems with Rust for high-level indie game dev logic, where you're doing fast prototyping, are very specific to that domain, and say very little about its applicability in other areas.

  • Exactly, it's all about the ecosystem and very little about the language features

    • Kind of both in my opinion. But Rust brings nothing to the table that games need.

      At best, Rust fixes crash bugs, not the usual logic and rendering bugs that are far more involved and plague users more often.

      8 replies →

  • > and didn't become popular for gamedev until the turn of the millenium

    Wasn't this also because Microsoft had terrible support for C?

    Since the mid-90s, a number of gamedevs moved to C++ but were unhappy with the results: how OOP works, exception handling, the STL, etc.

    My understanding is that by the late 90s many game developers, despite using C++, were still coding more in line with C than with (proper) C++.

    Mostly C code, but using some features of C++, like functions inside a struct or namespaces, that did not sacrifice compilation and runtime speed.

  • Yeah, the gaming industry has become mature enough to build up its own inertia, so it will take some time for new technologies to take off. C# has become a mainstream gamedev language thanks to Unity, but this also took more than a decade.

  • Comparing how long it took a programming language to spread in the 80s with today is a bad vantage point. Stuff took much longer to bake back then -- but even so the point is moot: as other commenters pointed out, roughly the same amount of time has elapsed between 2015 and today.

    • Hmm, I don't agree. We're far away from the frantic hardware and software progress of the 80s and 90s. Especially in software development it feels like we've been running in circles (but very, very fast!) since the early 2000s, and things that took just a few months or at most 2-3 years to mature in the 80s or 90s take a decade or more now.

  • The concept of AAA games didn't even exist back in 1985, very few people were developing games in that era, and even fewer were writing "complex" games that would need C++.

    The SNES came out in 1990, and even then it had its own architecture and most games were written in pure assembly. The PlayStation had a MIPS CPU and was one of the first to popularize 3D graphics, the biggest complexity leap.

    I believe you are seeing causation where there is only correlation. C++ and more complex OOP languages only joined the scene when the games themselves became complex, because of the natural evolution of hardware and the market.

  • Many tried C++ in the early 90s, but wasn't it too slow/memory intensive? You had to write lots of inline C/assembly to get a bit of performance. Nowadays everything is heavily optimized, but back then it wasn't.

    • If you're referring to game dev specifically, there have been (and continue to be) concerns around the weight of C++ exception handling, which is deeply embedded in the STL. That concern proliferated into libraries like the EASTL. C++ itself, however, is intended to have as many zero-cost abstractions as possible/reasonable.

      The cost of exception handling is less of a concern these days though.

      1 reply →

  • Seems like a few contradictory ideas here. Rust is supposed to be a better, safer C/C++.

    Then there are a lot of comments here saying that games are best done in C++.

    So why can't Rust be used for games?

    What is really missing, beyond an improved ecosystem of tools, all also built in Rust?

> I'd expected some AAA title to be written in Rust by now.

Why? Those kinds of game engines are enormous amounts of code, and there's little incentive to rewrite.

I do strongly disagree that we aren't ever going to see large-scale game development in Rust; it just takes time. Whether games adopt an engine is largely about that engine's maturity rather than anything about the language. Bevy is quite young; 0.13 doesn't even have support for animation blending yet (I landed that for 0.14).

  • It was a few years back that the question was put to the developers of a Call of Duty title: "Is there still code from Quake 3 in COD?" They dodged around it by saying something like "we cannot deny this, but we use the most appropriate tech where needed".

    While not confirmation, I wouldn't be surprised if there are a few nuggets of Q3 in that code base still doing some of the basics. That would be really cool if it is true.

    It seems like unless you are someone like John Carmack or most of Nintendo, game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.

    • A neat real-world example of ancient Quake code surviving to this day is visible in Valve's games - the hardcoded patterns for flickering lights in Quake 1 survived into GoldSrc and then into Source and then into Source 2, most recently showing up in Half-Life: Alyx, 24 years on from their original appearance in Quake 1.

      https://www.alanzucconi.com/2021/06/15/valve-flickering-ligh...

      Basically all of the bigger systems will have been Ship-of-Theseus'd several times over by now, but little things like that can slip through the cracks.

      1 reply →

    • > game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.

      Bingo. Rust's biggest strength is correctness. But games aren't mission-critical, and gamers are very tolerant of bugs (maybe not on social media, but very few buggy games have had their sales impacted). Your biggest sale to AAA game devs is to engine programmers, to minimize tech debt. But as we are seeing with the current industry, that's not exactly something companies care about until it's too late.

      Then on the indie level we get articles like this. Half the article ultimately came down to "it's faster to break things and iterate than to do it right once". Again, similar lack of need for bug-free games. In addition, few indie games are scoped to a point where they need a highly disciplined ECS solution to scale with.

      The author even criticizes the "tech specs" community part of Rust gamedev. Different tools, different goals, different needs. IMO, I think Rust will help make for some very robust renderers one day, but ultimately the scripting will be done in another language. Similar to how Unity uses C# scripting on top of a C++ engine, which they then run through IL2CPP to get back to a full C++ game.

      1 reply →

    • If that's the question... Let me assure you that there are decades-old pieces of code inside of, and used to assemble, many modern AAA games coming out of mature studios. The systems and tooling are typically carried forward. I don't think this is some big secret, and you've intuited exactly the reason why:

      > game dev tools are about what can get the best results quickest rather than any sort of technical specifics. It is a business after all.

      1 reply →

    • A lot of big projects have amazing longevity in their older architectural decisions. Unreal still has a lot of stuff in it that people who used UE1 would recognize; I did most of my professional development on UE3, and a bunch of that is still pretty recognizable. Similarly, Chrome is a product of the time it was first created. And looking into the Windows source is probably like staring into the stygian abyss.

      There is a lot of legacy and tech debt out there!

      3 replies →

"I tend to agree about the "async contamination" problem. The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests. I've been pushing back against it creeping into areas that don't really need it."

100% this. As I say elsewhere in these threads: Rust is the language that Tokio ate. It isn't even just the async viral-chain effect; it's that, on the whole, crates for one async runtime are not compatible with those of another, and so it's all really just about tokio.

Which sucks, if you're doing, y'know, systems programming or embedded (or games). Because tokio has no business in those domains.
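
As a contrived sketch of that contamination (not from any real codebase, and assuming the tokio crate is available): the moment one leaf function becomes async, say because a dependency only exposes an async API, every caller either becomes async too or has to drag in an executor just to cross the boundary.

    // Stand-in for an async-only HTTP or asset crate call.
    async fn fetch_asset(id: u64) -> Vec<u8> {
        vec![id as u8]
    }

    // This caller is now forced to become async as well...
    async fn load_level(id: u64) -> usize {
        fetch_asset(id).await.len()
    }

    // ...and a synchronous caller (game loop, CLI, embedded main) ends up
    // pulling in a runtime -- in practice tokio -- just to call it.
    fn main() {
        let rt = tokio::runtime::Runtime::new().unwrap();
        let size = rt.block_on(load_level(7));
        println!("level blob: {size} bytes");
    }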

  • It does in my domain of systems programming with async data handling. Tokio works like a dream - slipping into the background and just working so I can concentrate on the business logic.

    • This seems strange to me. If you don't have millions of concurrent requests to handle at the same time, why would you bother with a whole async framework? Just straight up spawning OS threads to do parallel work when you need it is both easier to reason about and does not mess with your program's stack.

      Isn't the point of async/await that spawning OS threads is not scalable when you reach ridiculous numbers of simultaneous blocking I/O? It doesn't sound like you're really dealing with this sort of problem.
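
      A small sketch of the "just spawn threads" approach described here, using nothing but the standard library (scoped threads, stable since Rust 1.63):

          fn main() {
              let inputs = vec![1u64, 2, 3, 4];

              // One OS thread per piece of work; join() blocks until each finishes.
              let results: Vec<u64> = std::thread::scope(|s| {
                  let handles: Vec<_> = inputs
                      .iter()
                      .map(|&n| s.spawn(move || n * n))
                      .collect();
                  handles.into_iter().map(|h| h.join().unwrap()).collect()
              });

              println!("{results:?}");
          }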

    • I know this is a late reply to your post, but your wording prompted a question. I will preface by saying this is not some sort of semantic flamebait, and it is also not supposed to be a gatekeeping exercise. You state your domain is systems programming, but then talk about the event loop and scheduler for your program as ancillary details and say that your concentration is on business logic. I tend to view systems programming as development of things that have no business logic, because that is the domain of application programming. Also, I tend to think that a defining feature of systems programming is development that cannot just accept a default solution to something as impactful as an event loop/scheduler/executor, but has to focus deeply on those aspects of a program that are the crux of its actual computational operation and the interactions between those parts.

      In the context of games, the systems programming is the renderer, audio engine, physics calculations, and things like a task system and dispatcher/scheduler, etc. As compared to the actual application specifics of levels, art, dialogue, interactions, UI, etc which to me are not systems programming.

      With that said, how do you define systems programming? I’m really interested in how various devs tend to view the ‘cut-off’ between systems and application development. Sometimes I’m pretty sure I am on the extreme end of disjointness of the two and non-accepting of any ‘business logic’ type development qualifying as systems programming.

      TL;DR - What is your definition of systems programming and do you include things like ‘business logic’ within that definition?

      1 reply →

> The "async" system is optimized for someone who needs to run a very large web server,

Even there it's very problematic at scale unless you know what you're doing. async/await isn't zero cost, regardless of what people will tell you.

  • Absolutely. Async/await typically improves headroom (scalability) at the cost of latency and throughput. It may also make code easier to reason about.

    • I disagree with this; you're probably not paying much (if anything) in latency or throughput for better scaling.

      What you're paying for with async/await is a state machine that describes the concurrent task, but that state machine can be incredibly wasteful in size due to the design of futures and the desugaring pass that converts async/await into the state machine.

      That's why I said it's not "zero cost" in the loosest definition of the phrase - you can write a better implementation by hand.
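
      A standalone illustration of the size point (not code from this thread): the desugared state machine has to keep every value that is alive across an .await, so a future's footprint can end up far larger than an equivalent hand-written state machine.

          async fn stage() -> [u8; 1024] {
              [0u8; 1024]
          }

          async fn pipeline() -> usize {
              let a = stage().await; // 1 KiB that stays live across the next await
              let b = stage().await;
              a.len() + b.len()
          }

          fn main() {
              let fut = pipeline();
              // Prints well over a kilobyte, versus a few machine words for a
              // hand-rolled equivalent.
              println!("future size: {} bytes", std::mem::size_of_val(&fut));
          }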

      1 reply →

> I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

The main reason is that you can't ship that Rust code on PS5 in a sensible manner. People have tried, got useless toys to compile, but in the end even Embark gave up. I remember seeing something from them that they had moved Rust to server-only.

> I tend to agree about the "async contamination" problem.

Argh, I have the same issue. Sure, if you write JS or Python you probably need async. My current Java back end, which has like 5 concurrent users, does not need async everything, which makes for 10x the complexity.

> I'd expected some AAA titles to be written in Rust by now.

"AAA" titles are huge and/or high dev budgets. Even if a game is "starting from scratch" the engine development team are still likely taking code from previous projects to get started. Of course there are other factors. It could be a BIG RISK to move to another programming language when the team, despite frustrations, are already familiar with something else... like the perks C++ brings (you learn from trial-and-error)

Could you imagine learning Rust as-you-go... building a AAA title... and fighting the compiler? To me it is a huge risk!

That is my opinion... but I am sure others will disagree. If there is anyone working on (or who worked on) a AAA title with Rust... I would be happy to hear more about it.

I am not saying it will never happen. Maybe a AAA title is currently in development in Rust. I honestly don't know. However, game developers... if they are looking into Rust... are also looking at Odin, Jai, or Zig. For gaming, I think they are better alternatives than Rust, but (again) that is my opinion.

Now for smaller, indie games, moving to Rust (or another language) is more likely. Likely a fair percentage have already moved away from C++.

> * There are very few people doing serious 3D game work in Rust. There's Veloren, and my stuff, and maybe a few others. No big, popular titles. I'd expected some AAA title to be written in Rust by now. That hasn't happened, and it's probably not going to happen, for the reasons the author gives.

  • At one point the studio behind The Finals was writing game server code in Rust with an Unreal Engine client. Not sure if that's still true.

> The "async" system is optimized for someone who needs to run a very large web server, with a huge number of clients sending in requests.

Can you please elaborate on this? I see a lot of similar concerns in other contexts too. Linux kernel's scheduler for example. Is it a throughput/latency tradeoff?

  • The current popularity of the async stuff has its roots in the classic "c10k" problem. (https://en.wikipedia.org/wiki/C10k_problem)

    There's a perception among some that threads are expensive, especially when "wasted" on blocking I/O, and that using them in that domain "won't scale."

    Putting aside that not all of us are building web applications (heterodox here on HN, I know)...

    Most people in the real world with real applications will not hit the limits of what is possible and efficient, and will be totally fine with thread-based architectures.

    Plus the kernel has gotten more efficient with threads over the years.

    Plus hardware has gotten way better, and better at handling concurrent access.

    Plus async involves other trade-offs -- running a state machine behind the scenes that's doing the kinds of context switching the kernel & hardware already potentially does for threads, but in user space. If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

    That overhead is fine if you're sitting there blocking on your database server, or some HTTP socket, or some filesystem.

    It's ... probably... not what you want if you're building a game or an operating system or an embedded device of some kind.

    An additional problem with async in Rust right now is that it involves bringing in an async runtime, and giving it control over execution of async functions... but various things like thread spawning, channels, async locks, etc. are not standardized, and are specific per runtime. Which in the real world is always tokio.

    So some piece of code you bring in as a crate uses async, and now you're having to fire up a tokio runtime. Even though you were potentially not building something that has anything to do with the kinds of things that tokio is targeted at ("scalable" network services).

    So even if you find an async runtime that's optimized in some other domain, etc (like glommio or smol or whatever) -- you're unlikely to even be able to use it with whatever famous upstream crate you want, which will have explicit dependencies into tokio.
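
    One way to at least contain the blast radius (an illustration with made-up names, not a recommendation from this thread): confine the runtime to a single background thread and talk to it over plain std channels, so the rest of the program stays synchronous.

        use std::sync::mpsc;

        fn main() {
            let (req_tx, req_rx) = mpsc::channel::<String>();
            let (resp_tx, resp_rx) = mpsc::channel::<usize>();

            // All of the async machinery lives on this one thread.
            std::thread::spawn(move || {
                let rt = tokio::runtime::Builder::new_current_thread()
                    .enable_all()
                    .build()
                    .unwrap();
                while let Ok(url) = req_rx.recv() {
                    // Stand-in for a call into some async-only crate.
                    let body_len = rt.block_on(async { url.len() });
                    let _ = resp_tx.send(body_len);
                }
            });

            // The rest of the program never sees a future.
            req_tx.send("https://example.com".to_string()).unwrap();
            println!("got {} bytes", resp_rx.recv().unwrap());
        }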

    • > If you ever pull up a debugger and step through an async Rust/tokio codebase, you'll get a good sense for what the overhead here we're talking about is.

      So I didn't quite do that, but the overhead was interesting to me anyway, and as I was unable to find existing benchmarks (surely they exist?), I instructed the computer to create one for me: https://github.com/eras/RustTokioBenchmark

      On this wee laptop the numbers are 532 vs 6381 cpu cycles when sending a message (one way) from one async thread to another (tokio) or one kernel thread to another (std::mpsc), when limited to one CPU. (It's limited to one CPU as rdtscp numbers are not comparable between different CPUs; I suppose pinning both threads to their own CPUs and actually measuring end-to-end delay would solve that, but this is what I have now.)

      So this was eye-opening to me, as I expected tokio to be even faster! But still, it's 10x as fast as the thread-based method. A straight-up callback would still be a lot faster, of course, but it would affect the way you structure your code.

      Improvements to methodology accepted via pull requests :).
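
      For readers who just want the flavour, here is a rough sketch of that kind of one-way measurement. It is not the linked RustTokioBenchmark code: it uses wall-clock time instead of rdtscp, and a single-threaded tokio runtime to loosely mirror the one-CPU setup.

          use std::time::{Duration, Instant};

          const N: u64 = 100_000;

          fn bench_std_threads() -> Duration {
              let (tx, rx) = std::sync::mpsc::channel::<u64>();
              let start = Instant::now();
              let sender = std::thread::spawn(move || {
                  for i in 0..N {
                      tx.send(i).unwrap();
                  }
              });
              for _ in 0..N {
                  rx.recv().unwrap();
              }
              sender.join().unwrap();
              start.elapsed()
          }

          fn bench_tokio_tasks() -> Duration {
              let rt = tokio::runtime::Builder::new_current_thread().build().unwrap();
              rt.block_on(async {
                  let (tx, mut rx) = tokio::sync::mpsc::channel::<u64>(64);
                  let start = Instant::now();
                  let sender = tokio::spawn(async move {
                      for i in 0..N {
                          tx.send(i).await.unwrap();
                      }
                  });
                  for _ in 0..N {
                      rx.recv().await.unwrap();
                  }
                  sender.await.unwrap();
                  start.elapsed()
              })
          }

          fn main() {
              println!("std::sync::mpsc : {:?}", bench_std_threads());
              println!("tokio::sync::mpsc: {:?}", bench_tokio_tasks());
          }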

      4 replies →

    • > Putting aside that not all of use are building web applications

      Perfect moment to mention "rouille", which is a very lightweight synchronous web server framework. So even when you decide to build some web application, you do not necessarily have to go down the tokio/async route. I have been using it for a while at work and for private projects, and it turned out to be pretty eye-opening.
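
      A minimal sketch of what that looks like (the handler logic is hypothetical, but start_server and Response are rouille's actual entry points): one plain blocking handler per request, no async runtime anywhere.

          use rouille::Response;

          fn main() {
              // rouille handles each request on a plain OS thread; handlers are sync code.
              rouille::start_server("127.0.0.1:8000", move |request| {
                  if request.url() == "/" {
                      Response::text("hello from a synchronous server")
                  } else {
                      Response::empty_404()
                  }
              });
          }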

    • Hit the nail on the head.

      Unless you're really dealing with absurd numbers of simultaneous blocking I/O, async has entirely too many drawbacks.

    • >now you're having to fire up a tokio runtime

      I've been developing in (mostly async) Rust professionally for about a year -- I haven't written much sync Rust other than my learning projects and a raytracer I'm working on, but what are the kinds of common dependencies that pose this problem? Like wanting to use reqwest or things like that?

      4 replies →

I'm happy to see someone still doing some work in second life.

  • There's a lot going on. Someone is doing a new third party viewer, Crystal Frost, in Unity. Linden Lab has a mobile viewer in alpha test. Rendering is PBR now for new objects. There are mirrors! Content upload is moving to glTF, to be compatible with everybody else. Voice is switching from Vivox to WebRTC. Game controller support is in test. New users get better avatars. The dev staff is larger.

    None of this is yet increasing Second Life usership much, but it remains the best metaverse around.

    I thought the metaverse thing was going to be bigger. Meta spent so much money to produce so little.

    • > There's a lot going on.

      I'd like to use the opportunity to ask: what happened during the COVID pandemic? I haven't heard/read anything about Second Life during the pandemic, even though this was probably a once-in-a-lifetime opportunity.

      Are there any news sources you can recommend for keeping an eye on Second Life? It doesn't seem to get much press coverage.

      1 reply →