Comment by pron
25 days ago
> C also has a GC. See Boehm GC. And before you complain RC is part of std I will point that std is optional and is on track to become a freestanding library.
Come on. The majority of Rust programs use the GC. I don't understand why it's important to you to debate this obvious point. Rust has a GC and most Rust programs use it (albeit to a much lesser extent than Java/Python/Go etc.). I don't understand why it's a big deal.
You want to add the caveat that some Rust programs don't use the GC and it's even possible to not use the standard library at all? Fine.
> Not the way hardware is moving, which is to say more emphasis on more cores and with no more free lunch from hardware. Regardless of whether it is on-prem or in the cloud, mandatory GC is not a cost you can justify easily anymore.
This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades and is continuing to do so.
> As witnessed in the latest RAM crisis, there is no guarantee you can just rely on more memory providing benefits.
What you say about RAM prices is true, but it still doesn't change the economics of RAM/CPU sufficiently. There is a direct correspondence between how much extra RAM a tracing collector needs and the amount of available CPU (through the allocation rate). Regardless of how memory management is done (even manually), reducing footprint requires using more CPU, so the question isn't "is RAM expensive?" but "what is the relative cost of RAM and CPU when I can exchange one for the other?" The RAM/CPU ratios available in virtually all on-prem or cloud offerings are favourable to tracing algorithms.
If you're interested in the subject, here's an interesting keynote from the last International Symposium on Memory Management (ISMM): https://youtu.be/mLNFVNXbw7I
> Sure, but those that see fewer UAF errors have more time to deal with logic errors.
I think that's a valid argument, but so is mine. If we knew the best path to software correctness, we'd all be doing it.
> Of course there are confounding variables such as believing you are king of the world, or that Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
I understand that's something you believe, but it's not supported empirically, and as someone who's been deep in the software correctness and formal verification world for many, many years, I can tell you that it's clear we don't know what the "right" approach is (or even that there is one right approach) and that very little is obvious. Things that we thought were obvious turned out to be wrong.
It's certainly reasonable to believe that the Rust approach leads to more correctness than the Zig approach, and some believe that, and it's equally reasonable to believe that the Zig approach leads to more correctness than the Rust approach, and some people believe that. It's also reasonable to believe that different approaches are better for correctness in different circumstances. We just don't know, and there are reasonable justifications in both directions. So until we know, different people will make different choices, based on their own good reasons, and maybe at some point in the future we'll have some empirical data that gives us something more grounded in fact.
> Come on. The majority of Rust programs use the GC.
This part is false. You make a ridiculous statement and expect everyone to just nod along.
I could see this being true iff you say all Rust UI programs use "RC".
> This is simply not true. There are and have always been types of software that, for whatever reason, need low-level control over memory usage, but the overall number of such cases has been steadily decreasing over the past decades
Without ever-increasing memory/CPU, you're going to have to squeeze more performance out of the stone, i.e. out of more or less unchanging hardware.
GC will be a mostly unacceptable overhead in numerous instances. I'm not saying it will be fully gone, but I don't think the current crop of C-likes is accidental either.
> I understand that's something you believe, but it's not supported empirically
It's supported by Google's usage of Rust.
https://security.googleblog.com/2025/11/rust-in-android-move...
> Stable and high-quality changes differentiate Rust. DORA uses rollback rate for evaluating change stability. Rust's rollback rate is very low and continues to decrease, even as its adoption in Android surpasses C++.
So for similar patches, you see fewer errors in new code. And the overall error rate still favors Rust.
> Without ever-increasing memory/CPU, you're going to have to squeeze more performance out of the stone
The memory overhead of a moving collector is related only to the allocation rate. If the memory/CPU ratio is sufficient to cover that overhead, which in turn helps save the more costly CPU, it doesn't matter whether the relative cost shrinks (and it hasn't shrunk; you're simply speculating that one day it could).
> I'm not saying it will be fully gone
That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years with no reversal in trend. This is like saying that more people will find the cost of typing a text message unacceptable so we'll see a rise in voicemail messages, but of course text messaging will not be fully gone.
Even embedded software is increasingly written in languages that rely heavily on GC. Now, I don't know the future market forces, and maybe we won't be using any programming languages at all but LLMs will be outputting machine code directly, but I find it strange to predict with such certainty that the trend we've been seeing for so long will reverse in such full force. But ok, who knows. I can't prove that the future you're predicting is not possible.
> It's supported by Google's usage of Rust.
There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing, and therefore in the total reduction of bugs, and you said that maybe a complex language like Rust, with lots of implicitness but also temporal memory safety could perhaps have a positive effect on other bugs, too, in comparison. What you linked to is something about Rust vs C and C++. Zig is at least as different from either one as it is from Rust.
> And the overall error rate still favors Rust.
Compared to C++. What does it have to do with anything we were talking about?
> That's a strange expression given that the percentage of programs written in languages that rely primarily on a GC for memory management has been rising steadily for about 30 years
I wish I knew what you mean by programs relying primarily on GC. Does that include Rust?
Regardless, extrapolating current PL trends that far is a fool's errand. I'm not looking at current social/market trends but at the limits of physics and hardware.
> There's nothing related here. We were talking about how Zig's design could assist in code reviews and testing
No, let me remind you:
> > [snip] Rust defends you from common mistakes, but overall for similar codebases you see fewer bugs.
> I understand that's something you believe, but it's not supported empirically

We were talking about how not having to worry about UB allows for easier defect catching.
> Compared to C++.
Overall, I think using C++ with all of its modern features should be in the same ballpark as Zig for safety and speed, with Zig having better compile times. Even if it isn't a 1-to-1 comparison with Zig, we have other examples like Bun vs Deno, where Bun incurs more segfaults (per issue).
Also, I don't see how much of Zig's design could really assist code reviews and testing.