Comment by pron

5 hours ago

> C also has a GC. See Boehm GC. And before you complain RC is part of std I will point that std is optional and is on track to become a freestanding library.

Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.

You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.
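(For concreteness: the reference counting at issue here is Rust's standard `Rc`/`Arc`. A minimal sketch of my own, just to show what "most Rust programs use it" refers to:)

```rust
use std::rc::Rc;

fn main() {
    // Rc<T> is a reference-counted smart pointer: cloning increments
    // a count, and the value is freed when the count reaches zero.
    let a = Rc::new(String::from("shared"));
    let b = Rc::clone(&a); // count is now 2
    assert_eq!(Rc::strong_count(&a), 2);
    drop(b); // count back to 1
    assert_eq!(Rc::strong_count(&a), 1);
    println!("{}", a);
}
```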

> Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.

This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.

> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.

What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
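(A back-of-envelope version of that trade-off, my own sketch with made-up parameters, not a claim about any particular collector: a tracing GC's CPU cost scales roughly with allocation rate divided by heap headroom, so adding RAM directly buys back CPU.)

```rust
// Rough model: collections happen every time the headroom
// (heap size minus live set) fills up, and each collection
// pays a cost proportional to tracing the live set.
fn gc_cpu_fraction(alloc_rate_gb_s: f64, heap_gb: f64, live_gb: f64, trace_cost: f64) -> f64 {
    let headroom_gb = heap_gb - live_gb; // space consumed between collections
    let collections_per_sec = alloc_rate_gb_s / headroom_gb;
    collections_per_sec * live_gb * trace_cost // CPU spent tracing per second
}

fn main() {
    // Same live set and allocation rate; only the heap size differs.
    let small_heap = gc_cpu_fraction(1.0, 4.0, 2.0, 0.01); // 2 GB headroom
    let big_heap = gc_cpu_fraction(1.0, 8.0, 2.0, 0.01); // 6 GB headroom
    assert!(big_heap < small_heap); // extra RAM translates into less GC CPU
    println!("small heap: {small_heap:.4}, big heap: {big_heap:.4}");
}
```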

If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I

> Sure, but those that see fewer UAF errors have more time to deal with logic errors.

I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.

> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.

I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.

It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some people believe that, and it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that different approaches are better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll have empirical data that gives us something more grounded in fact.