Comment by simonask
6 days ago
If that's all you need, the state of the art is already readily available through the JVM and the .NET CLR, as well as a handful of others depending on your use case. Most of those also come with decent languages, and great facilities to leverage the GC to its maximum.
But GCs aren't magic and you will never get rid of all the overhead. Even if the CPU time is not noticeable in your use case, the memory usage fundamentally needs to be at least 2-4x the actual working set of your program for GCs to be efficient. That's fine for a lot of use cases, especially when RAM isn't scarce.
Most people who use C or C++ or Rust have already made this calculation and deemed the cost to be something they don't want to take on.
That's not to say Fil-C isn't impressive, but it fills a very particular niche. In short, if you're bothering with a GC anyway, why wouldn't you also choose a better language than C or C++?
I don't understand the need to hammer in the point that Fil-C is only valuable for this tiny, teeny, irrelevant microscopic niche, while not even talking about what the niche is? To be clear, the niche is rebuilding your entire GNU/Linux userland with full memory safety and completely acceptable performance, tomorrow, without rewriting anything, right? Is this such a silly little idiosyncratic hobby?
So I don’t want to come off as dismissive of the effort - it’s certainly impressive!
The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code.
As such, the returns on running the entire userspace with Fil-C may be quite diminished from the get-go. Those who need to guard against UB bugs in seriously battle-hardened C software in production are definitely a small niche.
But that doesn’t mean it isn’t also very useful as a tool during development.
Hmm, so if they're writing new memory-unsafe code in C/C++, presumably to remain within their already established and entrenched C/C++ ecosystems, why isn't Fil-C interesting as a way to thwart memory-safety issues in that new code?
It seems like there are constant updates for 20-year-old packages on my Ubuntu systems. On Ubuntu 20.04 Focal Fossa (first released April 2020), glibc had an update on 2025-05-28, and the current stable release updated glibc on 2025-09-22. To say nothing of the rest of the packages in that operating system.
> The reason I’m not super excited is based on the widely publicized findings from Google and Microsoft (IIRC) about memory safety issues in their code: The vast majority is in new code
This makes perfect sense to me.
Which is why I don't at all understand the current fetish with rewriting things that have been working well for decades in Rust. Such as coreutils. Or apt.
It feels like an almost deliberate crippling of progress by diverting top talent into useless avenues, much like string theory in physics, or SLS/Artemis.
There's a contingent of Rust fans that show up on every story about C; their premise is that C code is unsafe and most safety-critical C code should be rewritten in Rust.
Fil-C is new and is a viable competitor to Rust; that's why you're hearing all these asides about tiny niches, unacceptable performance degradation, etc.
Hacker News is not a place where any one group brigades a thread. There are people who prefer C who don't want a GC, people who prefer Rust who don't want C, people who prefer Rust who agree with Fil-C for legacy C, people who don't prefer C or Rust and may use languages with GC... We all have interests and face people who denigrate them in bad faith. If you have specific objections to inaccurate statements in this thread, then state them. I'll do the same for any technology if I'm qualified to make statements on it.
> Fil-C is new and is a viable competitor to rust
I’ve no horse in the race here, but the Fil-C page talks about a 4x overhead from using it, which feels like it would make it less competitive.
There are no Rust fans here, only GC skeptics. GC skeptics existed long before anyone dreamed of Rust and will survive Rust as well.
It’s a pretty reasonable objection too (though I personally don’t agree). C has always been chosen when performance is paramount. For people who prioritise performance it must feel a bit weird to leave performance on the table in this way.
And Jesus Christ, give it a rest with this “Rust fans must be thinking” stuff. It sounds deranged.
I am a member of this niche – thank you for the flake!! https://discourse.nixos.org/t/radically-improving-nix-nixos-...
I think Fil-C is for people who are using software that has already been written, not for people who are trying to pick what language to write new software in. A substantial amount of software has, after all, already been written.
It's super fun to write C and C++ code in Fil-C because it's like this otherworldly crossover between Java and C/C++:
- Unlike Java, you get fantastic startup times.
- Unlike Java, you get access to actual syscall APIs.
- Unlike Java, you can leverage the ecosystem of C/C++ libraries without having to write JNI wrappers (though you do have to be able to compile those libraries with Fil-C).
- Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
It's so fun!
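To make that last point concrete, here's a minimal sketch (my own toy example, not taken from the Fil-C docs) of code that would be a leak under a traditional C compiler but is fine under Fil-C, since the collector reclaims allocations once they're unreachable:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Build a greeting on the heap; note there is no free() anywhere. */
static char *greeting(const char *name) {
    size_t len = strlen("Hello, ") + strlen(name) + 2; /* "!" plus NUL */
    char *buf = malloc(len);
    if (!buf)
        return NULL;
    snprintf(buf, len, "Hello, %s!", name);
    return buf;
}

int main(void) {
    for (int i = 0; i < 100000; i++) {
        char *msg = greeting("Fil-C");
        if (msg)
            puts(msg);
        /* msg simply goes out of scope; under Fil-C the GC reclaims it,
           while under a traditional C compiler this loop leaks memory. */
    }
    return 0;
}
```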
I like C, have a probably unhealthy relationship with C++ where I am amazed by what it can do and then get unrealistic expectations it keeps failing to fulfill, and don't really like Java.
You know Julia Ecklar's song where she says that programming in assembler is like construction work with a toothpick for a tool? I feel like C, C++, or Java are like having a teaspoon instead. Maybe Java is a tablespoon. I'd rather use something like OCaml or a sane version of Python without the Mean Girls community infighting. I just haven't found it.
On the other hand, the supposedly more powerful languages don't have a great record of shipping highly usable production software. There's no Lisp or Ruby or Lua alternative to Firefox, Linux, or LLVM.
> Like Java, you can just `new` or `malloc` without `delete`ing or `free`ing.
Is your intention that people use the Fil-C garbage collector instead of free()? Or is it just a backstop in case of memory-leak bugs?
Can the GC be configured to warn or panic if something is GCed without free()? Then you could detect memory-leak bugs by recompiling with Fil-C, with less overhead than Valgrind (though I'm guessing still more than ASan). But more choices are always a good thing.
It seems kind of analogous to disabling hyperthreading. Sure there's an immediate performance hit, but in exchange you're now protected from entire classes of vulnerabilities. A few years later, no one remembers or cares about that old setback that has been long since eclipsed by subsequent hardware advancements.
Modern hardware is stupidly fast compared to what existed at the time that a lot of C/C++ projects first started. My M2 MacBook Air has 5x higher multi-core performance than my previous daily driver (a 2015 MacBook Pro, a highly capable machine in its own right), and the new iPhone is now even faster than that. I'd happily accept a worst-case 4x slowdown of all user space C/C++ code in the interest of security, especially when considering how much of that code is going to be written by AI going forward.
I do not think this is niche in the slightest. I would very happily take a 2-4x slowdown for almost all of the web-facing C software I run if I get guaranteed memory safety. I will be using, at the very least, Fil-C OpenSSH (and likely much more) on every machine I run.
Sure, that makes sense. The point I’m making is just that from an engineering perspective, that also implies that there is no longer any reason for that software you’re running to be written in C at all.
From an engineering perspective, the software is already written in C, and you're weighing the tradeoffs between rewriting it and recompiling it.
Sure there is. Making tough choices between alternatives based on where to allocate a limited amount of manpower is an engineering choice. Choosing to use Fil-C to recompile existing (established, stabilized, functional...) software rather than rewrite it is an engineering choice.
Apologies ahead of time, as this is pure FUD; that is, I don't actually know what I'm talking about, but I had an interesting thought.
Remember the Debian weak-keys kerfuffle? It was caused when a Debian package maintainer saw a warning about using uninitialized memory and fixed it, and it then turned out that the uninitialized memory was a critical seed for the OpenSSL random number generator.
Anyhow, my stupid FUD thought: is there a weak-key-equivalent bug that shows up now that your C compiler is memory safe?
Even if you can't use something like Fil-C in your release/production builds, being able to e.g. compile unit tests with it to catch memory-safety bugs is a huge win. My team uses gcc for its MIPS codegen, but I'm working on adopting the clang bounds-safety annotations for test builds for exactly this reason.
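For anyone curious what those annotations look like, here's a rough sketch (illustrative only; -fbounds-safety is still an experimental clang extension, so the exact spelling and availability depend on your toolchain): `__counted_by` ties a pointer's valid extent to a length parameter, and instrumented test builds trap on out-of-bounds access.

```c
#include <stddef.h>

/* __counted_by(len) declares that buf is valid for exactly len ints; with
   -fbounds-safety, an instrumented build traps if an access goes past that. */
void scale_all(int *__counted_by(len) buf, size_t len, int factor) {
    for (size_t i = 0; i < len; i++)
        buf[i] *= factor;
}

int main(void) {
    int values[4] = {1, 2, 3, 4};
    scale_all(values, 4, 10); /* fine; a too-large len would trap in a
                                 bounds-safety test build instead of silently
                                 reading past the array */
    return 0;
}
```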
Yeah, I haven't taken a serious look at it from that perspective yet, but something similar came to mind; while, outside of bootstrapping the JDK from GCJ, Boehm GC hasn't been super relevant to me for "release" builds of anything, it has been useful in leak-detection mode on occasion.
I figure even if you cannot use, or do not want to use, something like Fil-C in production, there's solid potential for it to augment whatever existing suite of sanitizers and other tools that one may already build against.
The point is that it can compile most existing C and C++ code as-is, and do it while providing complete memory safety.
That's the claim, anyway. Doesn't sound all that niche to me.
If you write your software in a language that needs GC, everybody using your software needs GC, but they're guaranteed to get memory safety.
If you write your software in an unsafe, non-GC language, nobody needs GC, but nobody gets memory safety either.
This is why many software developers chose the latter option: if there are even some use cases in which GC isn't acceptable for their software, then nobody gets GC, not even the people who could afford it and would prefer the increased memory safety.
Fil-C lets the user make this tradeoff. If you can accept the GC, you compile with Fil-C, otherwise you use a traditional C compiler.
The user of the code may plausibly want to make a different tradeoff than the author, without wanting to rewrite the project from scratch.
The value prop here is for existing projects in C or C++, as is made abundantly clear in the linked article
I expect Fil-C is not really aimed at greenfield projects, but rather at making existing projects safe.
I would say that Rust would be a better choice rather than patching memory safety on top of C. But I think the reason for this is that most, if not all, cryptographic reference implementations are in C. So they want to use existing reference implementations without having to port them to Rust.
IMO cryptographers should start using Rust for their reference implementations, but I also get that they'd rather spend their time working on their next paper rather than learning a new language.
I'm not a practitioner of cryptography, but I would be wary about timing attacks that might become possible if such a dynamic runtime is introduced. At the least, relevant pieces of code would need to be re-evaluated in the Fil-C environment.
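For context on why that worry exists, here's a hedged illustration (my own generic example, nothing Fil-C-specific): constant-time code avoids data-dependent branches so timing doesn't leak secrets, and the open question above is whether a managed runtime preserves that property.

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Leaky: memcmp-style comparison may exit at the first mismatching byte,
   so the running time reveals how many leading bytes of a secret match. */
int leaky_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    return memcmp(a, b, len) == 0;
}

/* Constant-time: always touches every byte and has no data-dependent
   branches, so the running time is independent of the secret's contents. */
int ct_equal(const uint8_t *a, const uint8_t *b, size_t len) {
    uint8_t diff = 0;
    for (size_t i = 0; i < len; i++)
        diff |= a[i] ^ b[i];
    return diff == 0;
}
```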
But maybe you could use C as the "glue language" and then build better-performing libraries in Rust for C to use. Like in Python!
Good call! Fil-C does in fact have a way to let you build and run OpenSSL with its constant-time crypto. I don't know exactly how this works, but I guess it's relatively easy to guarantee it's safe.
The original poster got pretty much all of Debian running in Fil-C, in a fairly brief amount of time.
Rewriting even a single significant library or package in Rust would take vastly longer, so in this case Rust would not be "a better choice", but rather a non-starter.
Memory safety is a very small concern for most cryptographic implementations compared to other issues (e.g. side-channel attacks), and Rust solves essentially none of those other concerns.
IIRC SHA3's reference implementation had an integer overflow in a counter that made finding collisions trivial, as it meant that some blocks of the input weren't considered.
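For illustration, a hedged, generic sketch of that bug class (my own toy code, not the actual SHA-3 reference implementation): a 32-bit length counter silently truncates large inputs, so the tail of the message never influences the digest and collisions become trivial to construct.

```c
#include <stddef.h>
#include <stdint.h>

#define RATE 136  /* SHA3-256 block size in bytes, for illustration only */

static void absorb_block(const uint8_t *block) { (void)block; /* elided */ }

void toy_absorb(const uint8_t *msg, size_t msg_len) {
    uint32_t remaining = (uint32_t)msg_len;  /* BUG: truncates inputs > 4 GiB */
    while (remaining >= RATE) {
        absorb_block(msg);
        msg += RATE;
        remaining -= RATE;
    }
    /* Any bytes beyond the truncated length are silently ignored, so two
       messages that differ only in that tail produce the same digest. */
}
```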
> IMO cryptographers should start using Rust for their reference implementations
IMO they should not, because if I look at typical Rust code, I have no clue what is going on, even with a basic understanding of Rust. C, however, is as simple as it gets and much closer to pseudocode.
Good cryptographic code should match its algorithmic description. Rust enables abstractions that allow this. C does not. That you have some familiarity with C and not Rust should not be a contributing factor.
I say this as someone who has written cryptographic code that’s been downloaded millions of times.