Comment by drogus

9 months ago

Would "you're going to be less productive in Rust than nearly any other language unless GC time or any bug are dealbreakers" be a fair summary of what you mean?

Either way, I fully disagree with that. Many other traits of Rust can make it a better choice even if the low-productivity claim were true:

- integration with other languages - I know of companies that successfully develop a single Rust library and just use thin wrappers for the other languages they need to support

- data races detected at compile time - in highly concurrent applications, being able to catch data races at compile time is huge (see the sketch after this list). Take a look at the blog post from the Uber team[1], where a dedicated team investigated 1100 data race occurrences in their Go code. Data races can lead to bugs that are a PR nightmare for companies, like the GitHub bug that sometimes logged a user into another user's account[2].

- embedded systems

- WASM - there aren't many languages that compile natively to WASM and have good tooling around it. For most GCed languages you have to settle for "close enough" alternatives like TinyGo or AssemblyScript, or use tools that bundle an entire interpreter into the WASM binary
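To make the data-race point concrete, here's a minimal sketch (nothing Uber- or GitHub-specific): sharing a plain mutable variable across threads simply doesn't compile, so you're pushed towards `Arc<Mutex<_>>` or channels before the code ever runs.

```rust
// A minimal sketch of "data races caught at compile time": unsynchronized
// shared mutation is rejected by rustc, so the fix happens before shipping.
use std::sync::{Arc, Mutex};
use std::thread;

fn main() {
    // let mut counter = 0;
    // thread::spawn(|| counter += 1); // rejected: the closure would mutably
    //                                 // borrow `counter` across threads
    //                                 // without any synchronization

    let counter = Arc::new(Mutex::new(0));
    let handles: Vec<_> = (0..4)
        .map(|_| {
            let counter = Arc::clone(&counter);
            thread::spawn(move || {
                *counter.lock().unwrap() += 1;
            })
        })
        .collect();

    for handle in handles {
        handle.join().unwrap();
    }
    println!("count = {}", *counter.lock().unwrap());
}
```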

But even outside these categories, I don't think it's universally true that Rust is less productive than the alternatives, and my experience shows me otherwise. In many domains you barely have to care about the borrow checker and lifetimes at all. Take a look at the Todo Backend[3] I wrote in Rust[4]. If you compare it with one of the Go implementations of the same thing, you probably wouldn't see much of a difference, because of the nature of web backends: you get some data in, you process it, usually making some database queries, and you return some data (or not).
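Here's a minimal sketch of that "data in, query, data out" shape. This is not the linked implementation; the crate choices (axum 0.7, tokio, serde, sqlx) and names (`NewTodo`, `create_todo`) are just examples:

```rust
// A hypothetical JSON-in, query, JSON-out handler; nothing here touches
// lifetimes or the borrow checker.
use axum::{extract::State, routing::post, Json, Router};
use serde::{Deserialize, Serialize};
use sqlx::PgPool;

#[derive(Deserialize)]
struct NewTodo {
    title: String,
}

#[derive(Serialize)]
struct Todo {
    id: i64,
    title: String,
    completed: bool,
}

// Data comes in as JSON, a query runs, data goes back out as JSON.
async fn create_todo(State(pool): State<PgPool>, Json(input): Json<NewTodo>) -> Json<Todo> {
    let id: i64 = sqlx::query_scalar("INSERT INTO todos (title) VALUES ($1) RETURNING id")
        .bind(&input.title)
        .fetch_one(&pool)
        .await
        .expect("insert failed");
    Json(Todo { id, title: input.title, completed: false })
}

#[tokio::main]
async fn main() {
    let pool = PgPool::connect("postgres://localhost/todos").await.unwrap();
    let app = Router::new().route("/todos", post(create_todo)).with_state(pool);
    let listener = tokio::net::TcpListener::bind("0.0.0.0:3000").await.unwrap();
    axum::serve(listener, app).await.unwrap();
}
```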

What about stateful applications without a database, though? Surely that must be hell? Even here it's not as black and white as you might think. When I was working at Hopin (once upon a time a unicorn startup scaling extremely fast), we had to implement a presence server - a service holding information on who is online, what event they're attending, which video they're watching, etc. Nothing too complex, but we had a requirement to hold up to 100k open connections, and at the time we didn't have any infrastructure for that (most of the stack was Node.js and Rails). Someone wrote a proof of concept in Go using Redis as a backend, with a queue and Redis-based leader election, with a big caveat: each of the nodes had to process all of the queue items, so we were limited by the size and processing speed of a single Redis node.

When the time came to implement the production version, I said: let's treat the application itself as the database. We only cared about current data; if the application failed, we could restart it and clients would reconnect. If we ever wanted a history of presence, we could push all of the events to Kafka or another queue, but still mostly serve the real-time needs from in-memory data.

I had some Rust exposure before, but it was my first production Rust app. I was joined by a person who had never written Rust before. In two weeks we had a working application, while I was also making sure the other programmer wrote as much of the code as possible and we did a lot of pair programming. We deployed it shortly after, then added a few more features over the next two weeks or so.

The code was extremely simple - more or less a few hash maps behind a WebSocket-based API. As all of the data lived through the entire lifetime of the application, we didn't have to care about the borrow checker or lifetimes. We had actor-like code: a few threads, each thread owning a data structure, and a few channels carrying commands. We were moved to other projects, so the presence server became unmaintained, and even then it kept working without any issues whatsoever for the next half a year or so. Then there was a big push to scale all of the services to handle a minimum of 500k concurrent users, ideally a million. The Rust app needed almost no changes; after some kernel and load balancer tuning, it could handle up to 2 million connections frequently sending events on a single machine. If we had wanted to, we could easily have sharded it, but there was no need.
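For illustration, here's a minimal sketch of that actor-like shape, with made-up names (`PresenceCommand` and so on) rather than the actual Hopin code: one thread owns the map for the life of the process and the rest of the app talks to it over a channel, so lifetimes never come up.

```rust
// A hypothetical presence actor: one thread owns the data, commands arrive
// over a channel, and queries get answered through a reply channel.
use std::collections::{HashMap, HashSet};
use std::sync::mpsc;
use std::thread;

enum PresenceCommand {
    Join { user_id: u64, event_id: u64 },
    Leave { user_id: u64, event_id: u64 },
    Attendees { event_id: u64, reply: mpsc::Sender<HashSet<u64>> },
}

fn spawn_presence_actor() -> mpsc::Sender<PresenceCommand> {
    let (tx, rx) = mpsc::channel();
    thread::spawn(move || {
        // The map lives for the whole life of the thread, so there are no
        // lifetime annotations anywhere.
        let mut events: HashMap<u64, HashSet<u64>> = HashMap::new();
        for cmd in rx {
            match cmd {
                PresenceCommand::Join { user_id, event_id } => {
                    events.entry(event_id).or_default().insert(user_id);
                }
                PresenceCommand::Leave { user_id, event_id } => {
                    if let Some(set) = events.get_mut(&event_id) {
                        set.remove(&user_id);
                    }
                }
                PresenceCommand::Attendees { event_id, reply } => {
                    let _ = reply.send(events.get(&event_id).cloned().unwrap_or_default());
                }
            }
        }
    });
    tx
}

fn main() {
    let presence = spawn_presence_actor();
    presence.send(PresenceCommand::Join { user_id: 1, event_id: 42 }).unwrap();
    let (reply_tx, reply_rx) = mpsc::channel();
    presence.send(PresenceCommand::Attendees { event_id: 42, reply: reply_tx }).unwrap();
    println!("attendees: {:?}", reply_rx.recv().unwrap());
}
```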

By then, though, the push into real-time features had been deprioritized, so management said the app had to be rewritten in Node.js. There was one attempt to do that, which failed after two months or so. This is not to say you can't build an application like that in Node.js. You can, but you can't use the same architecture: you can't multithread a Node.js application, so you have to run multiple processes, and so you need some kind of database, queue, or external service (at the time they tried one of the Pusher-like services, because they didn't want to handle WebSocket connections themselves).

But even outside of specific examples like that, in my experience I don't feel less productive in Rust when writing production-level applications that aren't necessarily critical or performance-hungry. It's subjective, of course, but I agree with @pcwalton - if Rust were universally unproductive, I don't believe so many companies would be using it.

One last thing to consider is the expressiveness of the language. In many languages, like Go, it's hard to build abstractions that aren't a burden to use. Even after generics were introduced, most of the ecosystem is still using `interface{}` all over the place, and projects like Kubernetes implement their own dynamic runtime type system. Recently I've been working on Crows[5], a load-testing tool that runs scenarios as WASM binaries, and one of the abstractions I've created is an RPC client that can send requests in both directions. At the code level, you use it like many RPC libraries in higher-level languages: you define your interface[6] and then you can call it as if it were a regular local method[7], which is huge when developing code, especially in an editor with an LSP, because it shows you what methods you can call and what arguments they take. What's more, any typo is caught at compile time, as the server and the client share the same interface. In Go, even the official RPC client looks like `client.Call("TimeServer.GiveServerTime", args, &reply)`, which can't be type-checked as far as I know. I think the ability to create these kinds of APIs, ones that prevent you from doing the wrong thing, is a huge advantage of the language.
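As a heavily simplified, hypothetical sketch of the difference (this is not the Crows implementation, and names like `WorkerService` are made up): once the interface is a trait, a typo or a wrong argument type becomes a compile error instead of a runtime failure.

```rust
// A hypothetical trait-based RPC surface, for illustration only.
#[derive(Debug)]
struct RunId(u64);

#[derive(Debug)]
struct RpcError(String);

trait WorkerService {
    fn run_scenario(&self, name: &str, concurrency: u32) -> Result<RunId, RpcError>;
}

fn start(worker: &impl WorkerService) -> Result<RunId, RpcError> {
    // A typo like `run_scenaro`, a missing argument, or a wrong argument type
    // is rejected by the compiler, and an LSP can list the available methods.
    worker.run_scenario("checkout-flow", 100)
}

// A local stand-in; in an RPC setup the same trait would be implemented by a
// client that serializes the call and sends it over the wire.
struct LocalWorker;

impl WorkerService for LocalWorker {
    fn run_scenario(&self, name: &str, concurrency: u32) -> Result<RunId, RpcError> {
        println!("running {name} with {concurrency} clients");
        Ok(RunId(1))
    }
}

fn main() {
    match start(&LocalWorker) {
        Ok(id) => println!("started: {id:?}"),
        Err(RpcError(msg)) => eprintln!("rpc failed: {msg}"),
    }
}
```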

  1. https://www.uber.com/en-DE/blog/data-race-patterns-in-go/
  2. https://github.blog/2021-03-08-github-security-update-a-bug-related-to-handling-of-authenticated-sessions/
  3. https://todobackend.com/
  4. https://github.com/drogus/todo-backend/blob/main/src/main.rs#L138-L151
  5. https://github.com/drogus/crows
  6. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/utils/src/services/mod.rs#L94-L105
  7. https://github.com/drogus/crows/blob/8eac9c9dfb3df3e5f329b5ba1ee85d37bceb6dc2/coordinator/src/main.rs#L80