Bugs Rust won't catch

14 days ago (corrode.dev)

Hi, I am one of the maintainers of GNU Coreutils. Thanks for the article, it covers some interesting topics. In the little Rust that I have used, I have felt that it is far too easy to write TOCTOU races using std::fs. I hope the standard library gets an API similar to openat eventually.

I just want to mention that I disagree with the section titled "Rule: Resolve Paths Before Comparing Them". Generally, it is better to make calls to fstat and compare the st_dev and st_ino. However, that was mentioned in the article. A side effect that seems less often considered is the performance impact. Here is an example in practice:

  $ mkdir -p $(yes a/ | head -n $((32 * 1024)) | tr -d '\n')
  $ while cd $(yes a/ | head -n 1024 | tr -d '\n'); do :; done 2>/dev/null
  $ echo a > file
  $ time cp file copy

  real 0m0.010s
  user 0m0.002s
  sys 0m0.003s
  $ time uu_cp file copy

  real 0m12.857s
  user 0m0.064s
  sys 0m12.702s

I know people are very unlikely to do something like that in real life. However, GNU software tends to work very hard to avoid arbitrary limits [1].

Also, the larger point still stands, but the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." However, this is not true [2]. :)

[1] https://www.gnu.org/prep/standards/standards.html#Semantics [2] https://github.com/advisories/GHSA-w9vv-q986-vj7x

  • Indeed, std::fs suffers from being a lowest common denominator. Rust had to have something at 1.0, and unfortunately it stayed like that.

    Rust uutils would be a good place to design a more foolproof replacement for Rust's std::fs API.

    • Unix embodies this, as well.

      When K&R created Unix and C, there was still the option of moving changes that were better off in the "kernel" into the kernel itself.

      Now we have "standards" that even cause headaches between Linux and BSD's.

      Linux back-propagates stuff like mmap, io_uring, etc. to where it belongs. In this way it is like the original Unix. And it deservedly runs on most servers out there.

  • First of all, thank you for presenting a succinct take on this viewpoint from the other side of the fence from where I am at.

    So how can I learn from this? (Asking very aggressively, especially for Internet writing, to make the contrast unmistakable. And contrast helps with perceiving differences and mistakes.) (You also don’t owe me any of your time or mental bandwidth, whatsoever.)

    So here goes:

    Question 1:

    How come "speed", "performance", race conditions and st_ino keep getting brought up?

    Speed (latency), physically writing things out to storage (sequentially, atomically (ACID), all of HDD NVME SSD ODD FDD tape, "haskell monad", event horizons, finite speed of light and information, whatever) as well as race conditions all seem to boil down to the same thing. For reliable systems like accounting the path seems to be ACID or the highway. And "unreliable" systems forget fast enough that computers don’t seem to really make a difference there.

    Question 2:

    Does throughput really matter more than latency in everyday applications?

    Question 3 (explanation first, this time):

    The focus on inode numbers is at least understandable with regards to the history of C and unix-like operating systems and GNU coreutils.

    What about this basic example? Just make a USB thumb drive "work" for storing files (ignoring nand flash decay and USB). Without getting tripped up in libc IO buffering, fflush, kernel buffering (Hurd if you prefer it over Linux or FreeBSD), more than one application running on a multi-core and/or time-sliced system (to really weed out single-core CPUs running only a single user-land binary with blocking IO).

    • Coreutils are not only used in interactive contexts. They are the primitives that make up the countless shell scripts which glue systems together. Any edge case will be encountered and the resulting poor performance will impact somebody, somewhere.

      Here's a related example of what happens when you change a shell primitive's behavior - even interactively. Back in the 2000s, Linux distributions started adding color output to the ls command via a default "alias ls=/bin/ls --color=auto". You know: make directories blue, symlinks cyan, executables purple; that kind of thing. Somebody thought it would be a nice user experience upgrade.

      I was working at a NAS (NFS remote box) vendor in tech support. We frequently got calls from folks who had just switched to Linux from Solaris, or had just moved their home directories from local disk to NFS. They would complain that listing a directory with a lot of files would hang. If it came back at all, it would be in minutes or hours! The fix? "unalias ls". Because calling "/bin/ls" would execute a single READDIR (the NFS RPC), which was 1 round-trip to the server and only a few network packets; but calling "/bin/ls --color=auto" would add a STAT call for every single file in the directory to figure out what color it should be - sequentially, one-by-one, confirming the success of each before the next iteration. If you had 30,000 files with a round-trip time of 1ms that's 30 seconds. If you had millions...well, either you waited for hours or you power-cycled the box. (This was eventually fixed with NFSv3's READDIRPLUS.)

      Now I'm sure whoever changed that alias did not intend it, but they caused thousands of people thousands of hours of lost productivity. I was just one guy in one org's tech support group, and I saw at least a dozen such cases, not all of which were lucky enough to land in the queue of somebody who'd already seen the problem.

      So I really appreciate GNU coreutils' commitment to sane behavior even at the edges. If you do systems work long enough, you will ride those edges, and a tool which stays steady in your hand - or script - is invaluable.

      18 replies →

    • > Does throughput really matter more than latency in everyday applications?

      In my experience latency and throughput are intrinsically linked unless you have the buffer-space to handle the throughput you want. Which you can't guarantee on all the systems where GNU Coreutils run.

      1 reply →

    • > Question 2:

      > Does throughput really matter more than latency in everyday applications?

      IME as a user, hell yes

      Getting a video I don't mind if it buffers a moment, but once it starts I need all of that data moving to my player as quickly as possible

      OTOH if there's no wait, but the data is restricted (the amount coming to my player is less than the player needs to fully render the images), the video is "unwatchable"

      8 replies →

  • Sorry, complete noob here. Why didn't you just cd into $(yes a/ | head -n $((32 * 1024)) | tr -d '\n')? Why do you need to use the while loop for cd?

    EDIT: got it. -bash: cd: a/a/a/....../a/a/: File name too long

    • No need to apologize at all. Doing it in one cd invocation would fail since the file name is longer than PATH_MAX. In that case passing it to a system call would fail with errno set to ENAMETOOLONG.

      You could probably make the loop more efficient, but it works well enough. Also, some shells don't let you enter directories that deep at all. It doesn't work on mksh, for example.

      7 replies →

  • I don't know if you're aware, but there is a demonstration of wget (a fellow "gnu utility", right?) being auto-translated to a memory-safe subset of C++ [1]. Because the translation essentially does a one-for-one substitution of potentially unsafe C elements with safe C++ counterparts that mirror the behavior, the translation should be much less susceptible to the introduction of new bugs and behaviors in the way a rewrite would be.

    With a little cleaning-up of the original code, the code translation ends up being fully automatic and so can be used as a build step to produce (slightly slower) memory-safe executables from the original C source.

    [1] https://duneroadrunner.github.io/scpp_articles/PoC_autotrans...

    • Filesystem access is mostly treated by users as serialized ACID transactions on "files in directories."

      "Managing this resource centrally" is where unix syscalls came from. An OS kernel can be used like a specialized library for ACID transactions on hardware singletons.

      People then got fancy with virtual memory, interrupts, signals, time-slicing, re-entrancy, thread-safety, and injectivity.

      It doesn’t matter whether you call the "kernel library" from C, C++, Fortran, BASIC, Golang, bash, Rust, etc.

  • Probably a dumb question, but is GNU Core utils interested in / planning on doing its own rust rewrite?

    • At the current moment I would be against it. The language and library are changing too fast. Also, Rust has some other things that make it hard to use for coreutils. For example, Rust programs always call signal(SIGPIPE, SIG_IGN) or equivalent code before main(). There is no stable way to get the longstanding behavior of inheriting the signal action from the parent process [1]. This is quite annoying, but not unique to Rust [2].

      [1] https://doc.rust-lang.org/beta/unstable-book/compiler-flags/... [2] https://www.pixelbeat.org/programming/sigpipe_handling.html
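      For anyone curious, the closest stable workaround is resetting the disposition at the top of main(), which still isn't the same thing as inheriting the parent's action. A minimal sketch, assuming the libc crate:

          // Undo the runtime's pre-main signal(SIGPIPE, SIG_IGN) by resetting
          // to the default action. Note: this is NOT the inherit-from-parent
          // behavior the comment above is asking for; that has no stable API.
          fn reset_sigpipe() {
              unsafe {
                  libc::signal(libc::SIGPIPE, libc::SIG_DFL);
              }
          }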

      1 reply →

    • Thomas Jefferson famously said that "A coreutils rewrite every now and again is a good thing". Or something like that.

      When I was a beta tester for System Vr2 Unix, I collected as many bug reports as possible from Usenet (I used the name "the shell answer man". Looking back I conclude that arrogance is generally inversely proportional to age) and sent a patch for each one I could verify. Something like 100 patches.

      So if this rust rewrite cleans up some issues, it's a good thing.

  • I see even the coreutils maintainers find themselves needing -n (no newlines) and -c (count) options to "yes".

    • GNU coreutils is known for adding command line options.

      One of the big philosophical differences from the BSDs.

      For a human being, it sucks both ways.

  • >the article says "The Rust rewrite has shipped zero of these [memory safety bugs], over a comparable window of activity." However, this is not true

    That bug got fixed before the Ubuntu release, and is from way before Canonical was even involved with the project.

    • In the given list of GNU CVEs in the original article, it included a buffer overrun in tail from 2021. So for a fair comparison 2021 is part of the "window of activity" (the year uu_od CVE was published).

  • To be fair, the Vec::set_len bug in Rust was from 2021. And even then it had to be annotated as `unsafe`. It was then deprecated and a linter check was added: https://github.com/rust-lang/rust-clippy/issues/7681
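    For context, the pattern that lint flags is the classic uninitialized-buffer idiom, roughly (a sketch, not the actual uutils code):

        use std::io::Read;

        fn read_n(reader: &mut impl Read, len: usize) -> std::io::Result<Vec<u8>> {
            let mut buf: Vec<u8> = Vec::with_capacity(len);
            unsafe { buf.set_len(len); }  // buf now "contains" uninit garbage
            reader.read_exact(&mut buf)?; // an arbitrary Read impl can observe it
            Ok(buf)
        }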

    • To be even fairer, it wasn't actually memory unsafety, it was "just" unsoundness: there was a type that, IF you gave it a weird io reader implementation, would let that implementation see uninit data, or expose uninit data elsewhere. But the only readers actually used were well-behaved readers.

      1 reply →

> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing

They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls. Most of those mistakes are exceedingly amateur from the perspective of long-time GNU coreutils (or BSD or Solaris base) developers, issues that were identified and largely hashed out decades ago, notwithstanding the continued long tail of fixes--mostly just a trickle these days--to the old codebases.

  • Reading that Canonical thread was jaw-dropping. Paraphrased: "Rust is more secure, security is our priority, therefore deploying this full-rewrite of core utils is an emergency. If things break that's fine, we'll fix it :)".

    I would not want to run any code on my machines made by people who think like this. And I'm pro-Rust. Rust is only "more secure" all else being equal. But all else is not equal.

    A rewrite necessarily has orders of magnitude more bugs and vulnerabilities than a decades-old well-maintained codebase, so the security argument was only valid for a long-term transition, not a rushed one. And the people downplaying user impact post-rollout, arguing that "this is how we'll surface bugs", and "the old coreutils didn't have proper test cases anyway" are so irresponsible. Users are not lab rats. Maintainers have a moral responsibility to not harm users' systems' reliability (I know that's a minority opinion these days). Their reasoning was flawed, and their values were wrong.

    • This leaves such a bad taste in my mouth. If you fucking found 44 CVEs with some relatively amateurish ones (I'm no security engineer but even I've done that exact TOCTOU mitigation before) in such a core component of your system a month before 26.04 LTS release (or a couple months if you count from their round 1), surely the response should be "we need to delay this to 28.04 LTS to give it time to mature", not "we'll ship this thing in LTS anyway but leave out the most obviously problematic parts"?

      The snap BS wasn't enough to move me since I was largely unaffected once I stripped it out, but this might finally convince me to ditch.

      4 replies →

  • More than that: it seems that Rust stdlib nudges the developer towards using neat APIs at an incorrect level of abstraction, like path-based instead of handle-based file operations. I hope I'm wrong.

    • Nearly every available filesystem API in Rust's stdlib maps one-to-one with a Unix syscall (see Rust's std::fs module [0] for reference -- for example, the `File` struct is just a wrapper around a file descriptor, and its associated methods are essentially just the syscalls you can perform on file descriptors). The only exceptions are a few helper functions like `read_to_string` or `create_dir_all` that perform slightly higher-level operations.

      And, yeah, the Unix syscalls are very prone to mistakes like this. For example, Unix's `rename` syscall takes two paths as arguments; you can't rename a file by handle; and so Rust has a `rename` function that takes two paths rather than an associated function on a `File`. Rust exposes path-based APIs where Unix exposes path-based APIs, and file-handle-based APIs where Unix exposes file-handle-based APIs.

      So I agree that Rust's stdlib is somewhat mistake-prone; not so much because it's being opinionated and "nudg[ing] the developer towards using neat APIs", but because it's so low-level that it's not offering much "safety" in filesystem access over raw syscalls beyond ensuring that you didn't write a buffer overflow.

      [0]: https://doc.rust-lang.org/std/fs/index.html
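      To make that mapping concrete, here is a rough side-by-side sketch (std only; the file name is made up):

          use std::fs::{self, File, Permissions};
          use std::os::unix::fs::PermissionsExt;

          fn demo() -> std::io::Result<()> {
              // Path-based free function: re-resolves the path -> chmod(2).
              fs::set_permissions("data.txt", Permissions::from_mode(0o600))?;

              // Handle-based method: acts on the open descriptor -> fchmod(2),
              // so swapping the path for a symlink after open() changes nothing.
              let f = File::open("data.txt")?;
              f.set_permissions(Permissions::from_mode(0o600))?;
              Ok(())
          }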

      20 replies →

    • After reading this article, I'm inclined to think that the right thing for this project to do is write their own library that wraps the Rust stdlib with a file-handle-based API along with one method to get a file handle from a Path; rewrite the code to use that library rather than rust stdlib methods, and then add a lint check that guards against any use of the Rust standard library file methods anywhere outside of that wrapper.
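      Since stable std doesn't expose openat, such a wrapper would have to drop down to the raw *at syscalls internally. A very rough sketch of the shape it might take (type and method names made up, assuming the libc crate):

          use std::ffi::CString;
          use std::io;
          use std::os::fd::{AsRawFd, FromRawFd, OwnedFd};

          // All operations hang off an open directory handle, so path
          // components can't be swapped out from under us after open().
          struct Dir(OwnedFd);

          impl Dir {
              // The single path-based entry point the wrapper would allow.
              fn open(path: &str) -> io::Result<Dir> {
                  let p = CString::new(path)?;
                  let fd = unsafe { libc::open(p.as_ptr(), libc::O_DIRECTORY | libc::O_NOFOLLOW) };
                  if fd < 0 { return Err(io::Error::last_os_error()); }
                  Ok(Dir(unsafe { OwnedFd::from_raw_fd(fd) }))
              }

              // Everything else resolves relative to the handle via *at calls.
              fn unlink(&self, name: &str) -> io::Result<()> {
                  let n = CString::new(name)?;
                  match unsafe { libc::unlinkat(self.0.as_raw_fd(), n.as_ptr(), 0) } {
                      0 => Ok(()),
                      _ => Err(io::Error::last_os_error()),
                  }
              }
          }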

      3 replies →

    • Unfortunately, it's not the Rust stdlib, it's nearly every stdlib, if not every one. I remember being disappointed when Go came out that it didn't base the os module on openat and friends, and that was how many years ago now? I wasn't really surprised; the *at functions aren't what people expect, and probably people would have been screaming about "how weird" the file APIs were in this hypothetical Go continually up to this very day... but it's still the right thing to do. Almost every language makes it very hard to do the right thing, with the wrong thing so readily available.

      I'm hedging on the "almost" only because there are so many languages made by so many developers, and if you're building a language in the 2020s it is probably because you've got some sort of strong opinion, so maybe there's one out there that defaults to *at-style file handling in the standard library because some language developer has the same strong opinions about this that I do. But I don't know of one.

      2 replies →

    • If anything, I find the Rust standard library defaults to Unix too much for a generic programming language. You need to think very Unixy if you want to program Rust on Windows, unless you're directly importing the Windows crate and forgoing the Rust standard library. If you're writing COBOL-style mainframe programs, things become even more forced, though I suspect the overlap between Rust programmers and mainframe programmers who don't use a Unix-like is vanishingly small.

      This can also be a pain on microcontrollers sometimes, but there you're free to pretend you're on Unix if you want to.

      19 replies →

  • > They knew how to write Rust, but clearly weren't sufficiently experienced with Unix APIs, semantics, and pitfalls.

    The point of Rust is that you shouldn't have to worry about the biggest, easiest-to-fall-into pitfalls.

    I think the author's point in this article is that a proper filesystem API should do the same.

  • Having panics in these is pretty amateur hour, even just on a Rust level. I could see it if they were alloc errors, which you can't handle, but expects and unwraps are inexcusable unless you are very carefully guarding them with invariants that prevent that code path from ever running.

  • Someone once coined a related term, "disassembler rage". It's the idea that every mistake looks amateur when examined closely enough. It comes from people sitting in a disassembler and raging at the high-level programmers who had the gall to e.g. use conditionals instead of a switch statement inside a function call a hundred frames deep.

    We're looking solely at the few things they got wrong, and not the thousands of correct lines around them.

    • Thing is, these tools are so critical that even one error may cause systems to be compromised; rewriting them should never be taken lightly.

      (Actually ideally there's formal verification tools that can accurately test for all of the issues found in this review / audit, like the very timing specific path changes, but that's a codebase on its own)

      1 reply →

    • When I read the article I came away with the impression that shipping bugs this severe in a rewrite of utils used by hundreds of millions of people daily (hourly?) isn’t ok. I don’t think brushing the bad parts off with “most of the code was really good!” is a fair way to look at this.

      Cloudflare crashed a chunk of the internet with a rust app a month or so ago, deploying a bad config file iirc.

      Rust isn’t a panacea, it’s a programming language. It’s ok that it’s flawed, all languages are.

      41 replies →

  • Memory safety catches buffer overflows. CI catches logic bugs. Neither catches the Unix API gotchas nobody documented.

    • They're not API gotchas in most cases.

      And writing comprehensive tests for this behaviour is very difficult regardless of which language you are using.

      I am all for rust rewrites of things. But in this case, these are mistakes which were encouraged by the lazy design of `std::fs` and the developers' lack of relevant experience.

      And to clarify, I don't blame the developers for lacking the relevant experience. Working on such a project is precisely the right place to learn stuff like this.

      I think it's an absurdly dumb move by Canonical to take this project and beta-test it on normal users' machines though…

  • Seems pretty impressive that they rewrote the coreutils in a new language, with so little Unix experience, and managed to do such a good job with so few bugs or vulns. I would have expected an order of magnitude more at least.

    Shows how good Rust is, that even inexperienced Unix devs can write stuff like this and make almost no mistakes.

One thing that's hard about rewriting code is that the original code was transformed incrementally over time in response to real world issues only found in production.

The code gets silently encumbered with those lessons, and unless they are documented, there's a lot of hidden work that needs to be done before you actually reach parity.

TFA is a good list of this exact sort of thing.

Before you call people amateur for it, also consider that it's one of the most softwarey things about writing software. It was bound to happen, unless coreutils had really good technical docs and tests for these cases that the rewriters then ignored.

  • good example from the article: the chroot+nss CVE. the rule that nss is dynamic and dlopens libraries from inside the chroot isn't anywhere obvious. it's encoded in 25+ years of sysadmins finding it out. clean-room rewrites end up re-learning that, usually as new CVEs. and LLM ports of the same code inherit the problem: the function signature is what they read, but the scars are what they need.

    • > the function signature is what they read, but the scars are what they need.

      This feels like a golden quote. Don't know if you intended for it to rhyme, but well done :D

      1 reply →

  • > The code gets silently encumbered with those lessons, and unless they are documented, there's a lot of hidden work that needs to be done before you actually reach parity.

    It should be stressed that failure to document such lessons, or at least the bugs/vulnerabilities avoided, is poor practice. Of course one can't document the bugs/vulnerabilities one has avoided implicitly by writing decent code to begin with, but it is important to share these lessons with the future reader, even if that means "wasting" time and space on a bunch of documentation such as "In here we do foo instead of bar because when we did bar in conditions ABC then baz happens which is bad because XYZ."

  • What's even harder is doing that while trying to avoid the GPL, so doing that without reading the original source code.

    uutils would be so much better imo if it was GPL and took direct inspiration from the coreutils source code.

I struggle to find anything on this post that wouldn't be caught by some kind of unit test or manual review, especially when comparing with the GNU source for the coreutils. The whole coreutils rewrite is a terrible idea[1] and clearly being done in the wrong way (without the knowledge gained from the previous software).

If you do a rewrite, you should fully understand and learn from the predecessor, otherwise you're bound to repeat all the mistakes. Embarrassing.

To be clear; I love Rust, I use it for various projects, and it's great. It doesn't save you from bad engineering.

[1]: https://www.joelonsoftware.com/2000/04/06/things-you-should-...

  • > I struggle to find anything on this post that wouldn't be caught by some kind of unit test or manual review, especially when comparing with the GNU source for the coreutils.

    > If you do a rewrite, you should fully understand and learn from the predecessor, otherwise you're bound to repeat all the mistakes. Embarrassing.

    Interestingly, the uutils project uses the GNU coreutils test suite.

    EDITED to add: they also have a stated position of not allowing contributions based on reading the GPL'd source.

  • welcome new systems programmers: unix is broken and you must write ugly non-pedagogical workarounds and do empirical testing. this is what reliable software and good software engineering actually is... surprise!@#%

> The pattern is always the same. You do one syscall to check something about a path, then another syscall to act on the same path. Between those two calls, an attacker with write access to a parent directory can swap the path component for a symbolic link. The kernel re-resolves the path from scratch on the second call, and the privileged action lands on the attacker’s chosen target.

It's actually somewhat worse than that, because the attacker with write access to a parent directory can mess with hard links as well... sure, it only messes with the regular files themselves, but there are basically no mitigations. See e.g. [0] and other posts on the site.

[0] https://michael.orlitzky.com/articles/posix_hardlink_heartac...
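In std::fs terms, the check-then-act pattern from the quote looks something like this (a deliberately vulnerable sketch, not code from uutils):

  use std::fs::{self, Permissions};
  use std::os::unix::fs::PermissionsExt;

  fn loosen_perms(path: &str) -> std::io::Result<()> {
      let meta = fs::symlink_metadata(path)?; // syscall 1: check it's a plain file
      if meta.is_file() {
          // window: attacker swaps `path` for a symlink to /etc/shadow here
          fs::set_permissions(path, Permissions::from_mode(0o666))?; // syscall 2: chmod(2) re-resolves and follows the link
      }
      Ok(())
  }

The handle-based alternative (open once with O_NOFOLLOW, then fstat and fchmod the descriptor) closes the window, because both steps then refer to the same inode.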

  • hmm... maybe a 'write lock' on the directory? though this will become more hairy without timeouts/etc...

    • To the extent that locking exists in POSIX, it is various degrees of useless and broken. And as far as I know, while the BSDs have extensions which make some use cases workable, Linux is completely hopeless.

The root cause of some of the bugs seems to be the opaque nature of some of the Unix API. E.g.

> The trap is that get_user_by_name ends up loading shared libraries from the new root filesystem to resolve the username. An attacker who can plant a file in the chroot gets to run code as uid 0.

To me such a get_user_by_name function is like a booby trap, an accident that is waiting to happen. You need to have user data, you have this get_user_by_name function, and then it goes and starts loading shared libraries. This smells like mixing of concerns to me. I'd say, either split getting the user data and loading any shared libraries into two separate functions, or somehow make it clear in the function name what it is doing.
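One way to defuse that particular trap, as replies below also point out, is to do the NSS lookup before switching roots, so the shared libraries are loaded from the old, trusted filesystem. A sketch assuming the libc crate, with error handling (including the null check on the getpwnam result) elided:

  use std::ffi::CString;

  fn chroot_as_user(newroot: &str, user: &str) {
      let name = CString::new(user).unwrap();
      // getpwnam may dlopen NSS modules; do it against the current root.
      let pw = unsafe { libc::getpwnam(name.as_ptr()) };
      let (uid, gid) = unsafe { ((*pw).pw_uid, (*pw).pw_gid) };

      let root = CString::new(newroot).unwrap();
      unsafe {
          libc::chroot(root.as_ptr()); // only now switch roots
          libc::chdir(c"/".as_ptr());
          libc::setgid(gid); // drop the group first, then the user
          libc::setuid(uid);
      }
  }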

  • > The root cause of some of the bugs seems to be the opaque nature of some of the Unix API.

    Some, maybe, but if you've decided to rewrite coreutils from scratch, understanding the POSIX APIs is literally your entire job.

    And in any case, their test for whether a path was pointing to the fs root was `file == Path::new("/")`. That's not an API problem, the problem is that whoever wrote that is uniquely unqualified to be working on this project.
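    For contrast, the robust version of that check is the fstat comparison the GNU maintainer describes at the top of the thread: two paths name the same file iff their (st_dev, st_ino) pairs match, however the path happens to be spelled. A sketch using std's Unix extensions:

        use std::fs;
        use std::io;
        use std::os::unix::fs::MetadataExt;
        use std::path::Path;

        fn is_root(path: &Path) -> io::Result<bool> {
            // "/", "/..", "//" and a symlink to the root all resolve to the
            // same device and inode, which a string comparison will miss.
            let (m, root) = (fs::metadata(path)?, fs::metadata("/")?);
            Ok(m.dev() == root.dev() && m.ino() == root.ino())
        }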

    • Interestingly, it looks like the `file == Path::new("/")` bit was basically unchanged from when it was introduced... 12 (!) years ago [0] (though back then it was `filename == "/"`). The change from comparing a filename to a path was part of a change made 8 months ago to handle non-UTF-8 filenames.

      > That's not an API problem, the problem is that whoever wrote that is uniquely unqualified to be working on this project.

      To be fair, uutils started out with far smaller ambitions. It was originally intended to be a way to learn Rust.

      [0]: https://github.com/uutils/coreutils/commit/7abc6c007af75504f...

    • > Some, maybe, but if you've decided to rewrite coreutils from scratch, understanding the POSIX APIs is literally your entire job.

      Yes, it is. But still, such traps in an API are just unacceptable. If you design an API that requires obscure knowledge to use correctly, and getting it wrong means privilege escalation, it is just... just... I have no words for it. It is beyond stupidity. You are just making sure that your system will get these privilege escalations, and not just once, but multiple times.

      1 reply →

  • > The root cause of some of the bugs seems to be the opaque nature of some of the Unix API.

    "Seems" and "smells" are weasel words. The root cause is not thinking: why is root chrooting into a directory they do not control?

    Whatever you chroot into is under control of whoever made that chroot, and if you cannot understand this you have no business using chroot()

    > To me such a get_user_by_name function is like a booby trap

    > I'd say, either split getting the user data and loading any shared libraries in two separate functions, or somehow make it clear in the function name what it is doing.

    You'd probably still be in the trap: there's usually very little difference between writing to newroot/etc/passwd and newroot/usr/lib/x86_64-linux-gnu/libnss_compat.so or newroot/bin/sh or anything else.

    So I think there's no reason for /usr/sbin/chroot to look up the user id in the first place (toybox chroot doesn't!); the bug was doing anything at all.

    • > The root cause is not thinking: Why is root chrooting into a directory they do not control?

      Because you can't call chroot(2) unless you're root. And "control a directory" is weasel words; root technically controls everything in one sense of the word. It can also gain full control (in a slightly different sense of the word) over a directory: kill every single process that's owned by the owner of that directory, then don't setuid into that user in this process and in any other process that root currently executes, or will execute, until you're done with this directory. But that's just not useful for actual use, is it?

      Secure things should be simple to do, and potentially unsafe things should be possible.

      3 replies →

  • Rather, I think that using a functional safe language tricks people into thinking that the data it deals with is stateless. Whereas many many things change in operating systems all the time.

    Until we have a filesystem that can present a snapshot, everything has to be checked all the time.

    i.e. we need an API which gives input -> good result or failure. Not input -> good result or failure or error.

  • Yes, that's one thing musl libc removes.

    • If the attacker can control newroot/etc/passwd they can _still_ get getpwnam to return whatever userid they want. The solution is to not look up --userspec=username:group inside the chrooted space, but from outside.

      Also, hi how's things? :)

      2 replies →

OK, so there were some Rust guys rewriting coreutils with no experience in Linux, but how come Ubuntu accepted it into its mainline?

  • Because it's Ubuntu policy to replace some foundational part of the system with some janky unfinished experiment in every release.

    I agree with you that that's more the story here than "OMG, somebody wrote Rust code with bugs in it".

    • Right? Canonical wanted (still wants?) to use a coreutils implementation where "rm ./" would print "invalid input" while silently deleting the directory anyway.

      I don't really care that some very amateur enthusiasts wrote some bad code for fun, but how in the world did anyone who knows anything about linux take this seriously as a coreutils replacement?

I'm totally fine with people experimenting and making amateur attempts at what adult people do. After all, that's how we grow. What I'm actually curious about is how the decision-making chain at Ubuntu got so messed up that this made it into production.

> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing

So does this mean that the original utils didn't have any test harness, and the process of rewriting them didn't start by creating one either?

Sure there are many edge cases, but surely the OS and FS can just be abstracted away and you can verify that "rm .//" actually ends up doing what is expected (Such as not deleting the current directory)?

This doesn't seem like sloppy coding, nor a critique of the language, it's just the same old "Oh, this is systems programming, we don't do tests"?

Alternatively: if the original utils _did_ have tests, and there were this many holes in the tests, then maybe there is a massive lack in the original utils test suite?

  • > So does this mean that neither did the original utils have any test harness, the process of rewriting them didn't start by creating one either?

    Yes.

    > Sure there are many edge cases, but surely the OS and FS can just be abstracted away and you can verify that "rm .//" actually ends up doing what is expected (Such as not deleting the current directory)?

    I think people have been trying that since before I was born and haven't yet been successful, so I am much less sure than you are.

    For example: How do you decide how many `/` characters to try?

    For a better one: Can you imagine if "rm" could simply decide to refuse to delete files containing "important" as the first 9 bytes? How would you write a test for something like that without knowing the letters in that order? What if the magic word wasn't in a dictionary?

    > This doesn't seem like sloppy coding, nor a critique of the language, it's just the same old "Oh, this is systems programming, we don't do tests"?

    I've never heard anyone say that except as a straw man.

    I've heard people say tests don't do what people think they do.

  • > Sure there are many edge cases, but surely the OS and FS can just be abstracted away and you can verify that "rm .//" actually ends up doing what is expected ?

    This is one reason why Windows disables symlinks by default, and it's not an abstraction but wholesale removal of a feature. Unixes can't do that without breaking decades of software that relies on their existence.

    MacOS does something similar, for example the chroot() bug isn't an issue in practice because MacOS forbids chroot() by default (you need to disable system integrity protection).

    The fundamental problem is caused by the POSIX APIs. They have sharp edges by their very nature. The "fix" is to remove them.

  • My understanding is the uutils development process involved extensive testing against the behaviour of the original utilities, including preserving bugs.

    • But we still have CVEs for trivial things? I mean, a medium-sized test suite for "rm" alone should probably be many thousands of test cases or so. And you'd think that deleting "." and "./" respectively would be among them? Hindsight is always 20/20, and for text inputs you can never be entirely covered, but still....

    • If something as basic as "rm ./" is broken, the word "extensive" does not apply to whatever testing there was.

To be fair, these are mostly gotchas with Linux and not Rust itself, but I guess the std in Rust could handle some of these issues, in that a standard library should not let you shoot yourself in the foot by default.

That’s a great article, and indeed a very good blog. Just spent ages reading lots of their other articles.

Of the bugs mentioned I think the most unforgivable one is the lossy UTF conversion. The mind boggles at that one!

Thanks for the list. I like these lists, so I can put them into a .md file, then launch "one agent per file" on my codebase and see if they can find anything similar to the mentioned CVEs.

Rust won't catch it, but now the agents will.

Edit: https://gist.github.com/fschutt/cc585703d52a9e1da8a06f9ef93c... for anyone who needs copying this

  • Most (if not all) of these issues do not matter at all outside the scope GNU utils run in.

    For example, using filepaths instead of FDs does not matter in most cases in controlled server environments, or in processes that will never run with elevated privilege (most apps).

    • > Most (if not all) of these issues do not matter at all outside the scope GNU utils run in.

      I suspect that attitude is how we got ourselves into this mess.

      You have to assume you ultimately don't control what scope your software runs in. Obviously you do, 99.999% of the time. The other 0.001% is when someone has found another vulnerability that lets them run your program with elevated privileges in an environment you didn't expect, and then they can use it to exploit one of these bugs. Almost all exploits use a chain of vulnerabilities, each one seemingly mostly harmless; your "no one can ever exploit this weakness in my program because I control the environment" will be just one step in the chain.

      That sounds far fetched. It is far fetched in the sense that it almost never happens. But nonetheless systems were and are exploited because of it. Once the solution was added in 2006 (openat() and friends), it should have never happened again. And indeed in the GNU utils it can't.

      The people who built Rust's std::fs should have been aware of the problem and its solution, because it was written in 2015. std::path was written at the same time, and that is where the change has to be made. It's not a big change either: std::path has to translate the path into an OS descriptor and use that instead of the path - but only if it was available. I suspect the real issue was they had the same attitude as you; they thought it affects such a small percentage of programs that it didn't really matter. That, and it's a little bit of extra work.

      It was a pity they had that attitude, because the extra work would have avoided this mess.

> The trap is that get_user_by_name ends up loading shared libraries from the new root filesystem to resolve the username.

That's kind of horrifying. Is there a reliable list somewhere of all the functions that do that? Is that list considered stable?

  • Nope! But basically, expect anything that resolves usernames, or host names, to be done in the userspace by NSS.

        Sun engineers Thomas Maslen and Sanjay Dani were the first to design and implement
        the Name Service Switch. They fulfilled Solaris requirements with the nsswitch.conf
        file specification and the implementation choice to load database access modules as
        dynamically loaded libraries, which Sun was also the first to introduce.
    
        Sun engineers' original design of the configuration file and runtime loading of name
        service back-end libraries has withstood the test of time as operating systems have
        evolved and new name services are introduced. Over the years, programmers ported the
        NSS configuration file with nearly identical implementations to many other operating
        systems including FreeBSD, NetBSD, Linux, HP-UX, IRIX and AIX.[citation needed] More
        than two decades after the NSS was invented, GNU libc implements it almost identically.
    

    It's by design, you see.

> These are noisy in test code where panicking on bad data is exactly what you want. The cleanest way to scope them to non-test code is to put #![cfg_attr(test, allow(clippy::unwrap_used, clippy::expect_used, clippy::panic, clippy::indexing_slicing, clippy::arithmetic_side_effects))] at the top of each crate root, or to gate #[allow(...)] on the individual #[cfg(test)] modules.

Surely there's a better way.

  • Clippy doesn't even run on unit tests by default. Honestly it doesn't seem very useful to have it do so for ordinary development, but maybe you'd want to run Clippy on your unit tests in CI just to be extra safe, in which case you could encode those allowed lints in the line of your CI config where you run `cargo clippy`, e.g. `cargo clippy -- -A clippy::unwrap_used -A clippy::expect_used -A clippy::panic -A clippy::indexing_slicing -A clippy::arithmetic_side_effects`, if you really didn't want to have them in the source for whatever reason.

    • Delaying the run of clippy until CI would be annoying, because then you'd get a build failure for something that was preventable and could have been quickly addressed during development, before pushing. It just feels like a pebble in your shoe.

So it's basically failing on: necessary atomicity for filesystem operations; annoying path & string encoding; inertia for historical behaviors.

  • I'm comfortable saying that "annoying path & string encoding" is encompassed by "inertia for historical behaviors". :P

I have to partially disagree with applying Hyrum's law here. In the case of coreutils, there's not just the common GNU version. There's also what POSIX says they should do and what the various BSDs do, plus some other implementations from various vendors that we mostly forget about. If this version of coreutils differs from GNU in a case where the other implementations also differ, it would be a good thing to break that behavior: any script relying on it is already wrong in ways that are going to matter in the real world, and it may matter in the future anyway, so breaking it now is good. If your script depends on GNU's behavior, then you shouldn't be calling the standard version. You should be explicitly specifying the GNU version. That is, don't use cp; use gnu-cp or whatever it is commonly installed as. Or check which version of cp you have.

  • But if you seek to replace coreutils (as at least is the case with Canonical it seems), rather than just be another POSIX userland implementation (e.g. busybox), then I would suggest you do need to be bug-compatible? I can apt/dnf/apk install busybox and use that for my user rather than coreutils, but given a significant amount of Linux infrastructure (including likely many personal scripts) are tied to coreutils, the bar is much higher. Given the numerous issues with quality Canonical has had, not just with Ubuntu but their other "commercial" tooling, I'm not sure any rewrite/port, written in rust or otherwise, with Canonical developing, managing, or even being associated with the project can meet the requisite bar.

    • As someone who prefers BSD, I would make it my goal to become something reasonably popular on Linux that isn't gratuitously different, just to force less reliance on the GNUisms in their coreutils. Nothing wrong with the GNUisms on the command line, but there are a lot of GNU assumptions in scripts that should be portable.

rust promised you memory safety and delivered - but turns out the filesystem doesn't care about your borrow checker, and these 44 cves are the receipt

> That means, even if the tools were (and probably still are) buggy, they never had a bug that could be exploited to read arbitrary memory.

Well, that raises the question: is it worse to read arbitrary memory (which would probably in most cases be prevented by various dynamic protections [0] anyway), or to fail to prevent rm -rf /./ and killing every process in the system, etc.?

This is still a good case study of the value of the much-touted rust rewrites. Usually they are performed by people who are domain experts in rust, but (as seen here) lack basic domain knowledge of the tool's environment.

[0] https://en.wikipedia.org/wiki/Buffer_overflow_protection

The "kill -1" is hilarious. I wouldn't use ubuntu for production for quite awhile while things shake out or, probably, never (since i don't use ubuntu).

Unrelated, but also in the category of bugs Rust won't catch (natively): there are crates that allow C++-style contracts, or more generally dependent typing, and can be used to catch issues at compile time rather than runtime. I use this one, anodized.

https://docs.rs/anodized/latest/anodized/

  • What do you think about the mental load and ergonomics this brings into the code? Also compilation time increase?

    • There is some compile time increase, but it brings a lot more guarantees to the code. There was a recent post by a Rust maintainer saying he wanted to bring Rust closer to a theorem prover, so that as many things as possible can be caught at compile time rather than at run time, where failures can be more disastrous.

I wonder if Rust will become more popular with AI, since Rust can help catch what AI misses. But if that's the case, then what about Haskell, or Lean, or...?

  • For core system functionality, maybe. But for most applications Rust's slow compiler iteration speed becomes a bottleneck when the likes of TypeScript (with Bun) and Go have sub-second iteration times.

    Plus AI is also good at catching, in other languages, errors that Rust tooling enforces. Like race conditions, use after free, buffer overflows, lifetimes, etc.

    So maybe AI will become to ultimate "rust checker" for any language.

    • In my experience developing different types of applications in Rust, the claims of a "slow compiler" are overstated. Sub second iteration times are definitely a thing in Rust as well, unless you're adding a new dependency for the first time or building fresh.

      2 replies →

    • The overall productivity increase I get from not having to worry so much about whether my Rust code will work if it compiles tends to net faster iteration for me. Compile times have never bothered me.

Why didn't differential fuzzing catch these bugs?

https://github.com/uutils/coreutils/tree/main/fuzz/uufuzz

> This is the largest cluster of bugs in the audit. It’s also the reason cp, mv, and rm are still GNU in Ubuntu 26.04 LTS. :(

This is what grinds my gears. Why all the hate against GNU?

Honestly, this is why I don't learn Rust, and why I didn't bother to read the rest of the article.

  • Rust does not hate GNU, and I'm not sure why anyone would have that misconception. It would be like saying that C hates GNU because the BSDs aren't GNU. The fact that there is less GNU-licensed Rust software than MIT-licensed Rust software is attributable to the simple fact that, in general, GNU has been ceding ground to MIT for more than 20 years.

    • Nor does the parent comment say that "Rust" hates GNU. A language can't hate anything for that matter.

> The Python one-liner is there because most modern shells refuse to create a non-UTF-8 filename for you.

Both `echo -ne 'weird\xffname\0' > list0` and `printf 'weird\xffname\0' > list0` seem to work fine for me on Linux. Is this macOS-specific?

  • > Both `echo -ne 'weird\xffname\0' > list0` and `printf 'weird\xffname\0' > list0` seem to work fine for me on Linux. Is this macOS-specific?

    Neither of those creates a non-UTF-8 filename. (Both files are named "list0", which is valid UTF-8.) They have non-UTF-8 content, but that's not weird.

    But it's not too hard to get a non-UTF-8 filename:

      touch $'\xff'
    

    Both zsh & bash support that syntax.

    (You could also use command substitution with printf, but that's more steps than necessary. So, something closer to your example would be,

      touch "$(printf '\xff')"
    

    You can't put a \0 in the filename, as there's no way to pass that string in C.)

The title of this article should be "Rust can't stop you from not giving a fuck" or "Rust can't give a fuck for you."

---

> What’s notable is that all of these bugs landed in a production Rust codebase, written by people who knew what they were doing

...

[List of bugs a diligent person would be mindful of, unix expert or not]

---

The only conclusion I can draw is that, unfortunately, the people writing these tools are not good software developers, certainly not sufficiently good for this line of work.

For comparison, I am neither a unix neckbeard nor a rust expert, but with the magic of LLMs I am using rust to write a music player. The amount of tokens I've sunk into watching for undesirable panics or dropped errors is pretty substantial. Why? Because I don't want my music player to suck! Simple as that. If you don't think about panics or errors, your software is going to be erratic, unpredictable and confusing.

Now, coreutils isn't my hobby music player, it's fundamental Internet infrastructure! I hate sounding like a Breitbart commenter but it is quite shocking to see the lack of basic thought going into writing what is meant to be critical infrastructure. Wow, honestly pathetic. Sorry to be so negative and for this word choice, but "shock" and "disappointment" are mild terms here for me.

Anyway, thanks for the author of this post! This is a red flag that should be distributed far and wide.

  • > Pretty shocking to see the lack of basic thought going into writing what is meant to be critical infrastructure

    uutils did not start off as "let's make critical infrastructure in Rust", it started off as "coreutils are small and have tests, so we're rewriting them in Rust for fun". As a result, there has been a bunch of cleanup work needed.

    • Okay, thanks for the context, but aren't distributions eager to adopt these? Are current GNU coreutils a common vulnerability vector?

      > For fun

      My idea of fun is reviewing my code and making sure I'm handling errors correctly so that my software doesn't suck. Maybe the people who are doing this, for fun, should be more aligned with that mentality?

      1 reply →

  • So yeah, their implementation of chmod checked if a path was pointing to the root of the filesystem with 'if file == Path::new("/")'.

    How the f** did this sub-amateur slop end up in a big-name linux distribution? We've de-professionalized software engineering to such a degree that people don't even know what baseline competent software looks like anymore

  • I love Rust, but I wonder if this is an example of the idea that its excellent type system can lull some people into a false sense of security. Particularly when interfacing to low-level code like kernel APIs, which are basically minefields inadvertently designed to trick the unwary, the Rust guarantees are undermined. The extent of this may not be immediately obvious to everyone.

    • This seems to be the case, yes. Before reading this post I was a lot more open minded about the "rewrite it in Rust" scene but now I'm just kind of in a horrorpit wondering whether I'll be stuck on macOS forever :(.

      3 replies →

TIL that

> uutils read it as “send the default signal to PID -1”, which on Linux means every process you can see.

What's the use case for killing all processes you can see?

  • Many cases, including as a last resort as part of shutdown, to try to trigger remaining services into a graceful exit (although these days cgroups help avoid ever being in such a situation).

> Rust’s standard library makes this easy to get wrong. The ergonomic APIs you reach for first (fs::metadata, File::create, fs::remove_file, fs::set_permissions) all take a path and re-resolve it every time, rather than taking a file descriptor and operating relative to that. That’s fine for a normal program, but if you’re writing a privileged tool that needs to be secure against local attackers, you have to be careful.

It's not fine even for a normal program, because operations on a large number of files will end up an order of magnitude slower. No matter what language you write your utility in.

... reads the article to the end, marvels at all the problems resulting from not understanding how the OS works and missing 40 years of refinement ...

Is this in an Ubuntu LTS ?!?

Reversing max and min. That's one I've done a lot, and I don't think any compiler could save me from it.

> uutils now runs the upstream GNU coreutils test suite against itself in CI. That’s the right scale of defense for this class of bug.

That's the minimum; it is absurd that they did not start from that!

  • I recall the last time there was a massive bug in the uutils project, it was because the coreutils tests didn't cover some crucial aspect people relied on. Running these tests is useful for compatibility and all, but it won't necessarily catch security issues.

  • I believe they did it all the time. Maybe it was not automated? But they boasted in the news multiple times about how many coreutils tests they were passing. I suspect that those tests are useless for security; they are more about compatibility or something like that.

I know nobody's perfect and I'm not asking for perfection, but these bugs are pretty alarming? It seems like these supposed coreutils replacements are being written by people who don't know anything about Unix, and also didn't even bother looking at the GNU tools they are trying to replace. Or at least didn't have any curiosity about why the GNU tools work the way they do. Otherwise they might've wondered about why things operate on bytes and file descriptors instead of strings and paths.

I hate to armchair general, but I clicked on this article expecting subtle race conditions or tricky ambiguous corners of the POSIX standard, and instead found that it seems to be amateur hour in uutils.

  • A few things to note:

    1. uutils as a project started back in 2013 as a way to learn Rust, by no means by knowledgeable developers or in a mature language

    2. uutils didn't even consider becoming a replacement for GNU Coreutils until... roughly 2021, I think? 2021 is when they started running compliance/compatibility tests, anyway

    3. The choice of licensing (made in 2013) effectively forbids them from looking at the original source

  • > It seems like these supposed coreutils replacements are being written by people who don't know anything about Unix, and also didn't even bother looking at the GNU tools they were supposed to be replacing.

    They're a group of people who want to replace pro-user software (GPL) with pro-business software (MIT).

    I don't really want them to achieve their goal.

  • They are deliberately not looking at coreutils code because the Rust versions are released as MIT and they don't want the project contaminated by GPL. I am not fond of this, personally.

Seems like typical pattern of

* Let's rewrite thing in X, it is better

* Let's not look at existing code, X is better so writing it from scratch will look nicer

* Whoops, existing code was written like this for a reason

* Whoops, we re-introduce decade+-old problems that the original already fixed at some point

I find it interesting how people will criticise Rust for not preventing all bugs, when the alternative languages don't prevent those same bugs nor the bugs rust does catch. If you're comparing Rust to a perfect language that doesn't exist, you should probably also compare your alternative to that perfect language as well right?

I'd be interested in a comparison of the number of bugs and CVEs in GNU coreutils at the start of its lifetime with this rewrite. Same with the number of memory bugs that are impossible in (safe) Rust.

Don't just downvote me, tell me how I'm wrong.

  • What's the point of a "rewrite in Rust" when it introduces bugs that either never existed in the original or were fixed already?

    > I'd be interested in a comparison with the amount of bugs and CVE's in GNU coreutils at the start of its lifetime

    The point is, those bugs had been discovered and fixed decades ago. Do you want to wait decades for coreutils_rs to reach the same robustness? Why do a rewrite when the alternative is to help improve the original which is starting from a much more solid base?

    And even when a complete rewrite would make sense, why not do a careful line-by-line porting of the original code instead of doing a clean-room implementation to at least carry over the bugfixes from the original? And why even use the Rust stdlib at all when it contains footguns that are not acceptable for security-critical code?

    • The Rust developers have not read the original coreutils, because they want to replace the GPL license, so they want to be able to say that their code is not derived from the original coreutils.

      For a project of this kind, this seems a rather stupid choice, and it is enough to make it hard to trust the rewritten tools.

      Even supposing that replacing the GPL license were an acceptable goal, that would make sense only for a library, not for executable applications. For executable applications it makes sense to not want GPL only when you want to extract parts of them and insert them into other programs.

      1 reply →

    • Idk, you should ask the maintainers these questions, or the Ubuntu maintainers. I'm not particularly arguing in favour of this rewrite, but the title and contents of the post are talking about Rust in general and the type of bugs it can/can't prevent.

      Perhaps one good reason is that once the initial bugs are fixed, over time the number of security issues will be lower than the original? If it could reach the same level of stability and robustness in months or a small number of years, the downsides aren't totally obvious. We will have to wait to judge I suppose. Maybe it's not worth it and that's fine, but it doesn't speak to Rust as a language.

    • > What's the point of a "rewrite in Rust" when it introduces bugs that either never existed in the original or were fixed already?

      Because you are trying to remove memory safety as a source of bugs in the future. No code is bug free, but removing entire categories of bugs from a code base is a good thing.

  • "The alternative languages" - in this case you're talking about C, 99% of the time.

    So let's talk about that. Well written C code, especially for the purpose of writing and continuing to maintain mature GNU coreutils, is not a big risk in terms of CVE. Between having an inexperienced Rust developer and an extremely experienced C developer (who's been through all the motions), I'd say the latter is likely the safer option.

    • > "The alternative languages" - in this case you're talking about C, 99% of the time.

      And that's part of the problem. There's no excuse beyond maybe platform support for starting a brand new project in C, when C++ exists.

    • What an incredibly dishonest argument. Obviously "Well written C code" won't be riddled with CVE's by definition, the problem is that since programs written in C are littered with CVE's, it turns out it's really really difficult to write well written C, even for the best developers. With Rust, that entire class of problems is eliminated entirely.

  • You’re right, but it’s gonna be hard to stop them from raging. In many ways people want to be justified in a „see, I told you so, Rust is useless” belief, and they’re willing to take one or two questionable logical steps to get there.

This is what happens when many people hype a technology that solves a specific class of vulnerabilities but is not designed to prevent others, such as logic errors from human / AI error.

Granted, the uutils authors are well experienced in Rust, but it is not enough for a large-scale rewrite like this and you can't assume that it's "secure" because of memory safety.

In this case, this post tells us that Unix itself has thousands of gotchas, that re-implementing the coreutils in Rust is not a silver bullet, and that even the bugs Unix (and the POSIX standard) has are part of the specification and can later be revealed as vulnerabilities in practice.

  • > the uutils authors are well experienced in Rust

    I'm not sure that they were all that experienced in Rust when most of this code was written. uutils has been a bit of a "good first rust issue" playground for a lot of its existence

    Which makes it pretty unsurprising that the authors also weren't all that well versed in the details of the low-level POSIX APIs

  • It's not designed to completely eliminate other bug classes but it is designed to reduce the chance that they happen.

    In this case the filesystem API was perhaps not as well designed as it could have been. That can potentially be fixed though.

    Some of the other bugs would be hard to statically prevent though. But nobody ever claimed otherwise.

I feel like one of the takeaways here is that Rust protects your code as long as what your code is doing stays predictably in-process. Touching the filesystem is always rife with runtime failures that your programming language just can't protect you from. (Or maybe it also suggests the `std::fs` API needs to be reworked to make some of these occurrences, if not impossible, at least harder.)

On a separate note: I have a private "coretools" reimplementation in Zig (not aiming to replace anything, just for fun), and I'm striving to keep it 100% Zig with no libc calls anywhere. Which may or may not turn out to be possible, we'll see. However, cross-checking uutils I noticed it does have a bunch of unsafe blocks that call into libc, e.g. https://github.com/uutils/coreutils/blob/77302dbc87bcc7caf87.... Thankfully they're pretty minimal, but every such block can reduce the safety provided by a Rust rewrite.

  • > and I'm striving to keep it 100% Zig with no libc calls anywhere. Which may or may not turn out to be possible, we'll see.

    Probably will depend on what platform(s) you're targeting and/or your appetite for dealing with breakage. You can avoid libc on Linux due to its stable syscall interface, but that's not necessarily an option on other platforms. macOS, for instance, can and does break syscall compatibility and requires you to go through libSystem instead. Go got bit by this [0]. I want to say something similar applies to Windows as well.

    This Unix StackExchange answer [1] says that quite a few other kernels don't promise syscall compatibility either, though you might be able to somewhat get away with it in practice for some of them.

    [0]: https://github.com/golang/go/issues/17490

    [1]: https://unix.stackexchange.com/a/760657

    • Since it's a personal project, Linux compatibility is the only thing I care about right now. I'm testing it under WINE as well, just because I can, but I don't have access to Mac OS so I'm skipping that problem entirely for now