
Comment by amluto

11 hours ago

I can’t entirely tell what the article’s point is. It seems to be trying to say that many languages can mmap bytes, but:

> (as far as I'm aware) C is the only language that lets you specify a binary format and just use it.

I assume they mean:

    struct foo { /* fields */ };
    struct foo *data = mmap(…);

And yes, C is one of relatively few languages that let you do this without complaint, because it’s a terrible idea. And C doesn’t even let you specify a binary format — it lets you write a struct that will correspond to a binary format in accordance with the C ABI on your particular system.

If you want to access a file containing a bunch of records using mmap, and you want a well defined format and good performance, then use something actually intended for the purpose. Cap’n Proto and FlatBuffers are fast but often produce rather large output; protobuf and its ilk are more space efficient and very widely supported; Parquet and Feather can have excellent performance and space efficiency if you use them for their intended purposes. And everything needs to deal with the fact that, if you carelessly access mmapped data that is modified while you read it in any C-like language, you get UB.

> correspond to a binary format in accordance with the C ABI on your particular system.

We're so deep in this hole that people are fixing this on a CPU with silicon.

The Graviton team made a little-endian version of ARM just to allow lazy code like this to migrate away from Intel chips without having to rewrite struct unpacking (& also IBM with the ppc64le).

Early in my career, I spent a lot of my time reading Java bytecode into little endian to match all the bytecode interpreter enums I had, and completely hating how 0xCAFEBABE would literally read BE BA FE CA (jokingly referred to as "be bull shit") in (gdb) x views.

  • GCC supports specifying endianness of structs and unions: https://gcc.gnu.org/onlinedocs/gcc-15.2.0/gcc/Common-Type-At...

    I'm not sure how useful it is, but it was only added about 10 years ago with GCC 6.1 (recent'ish in the world of arcane features like this, and only just about now something you could reasonably rely upon existing in all enterprise environments), so it seems some people thought it would still be useful.

  • I thought all iterations of ARM are little endian, even going as far back as ARM7. Same as x86?

    The only big-endian popular arch in recent memory is PPC

  • ARM is usually bi-endian, and almost always runs in little-endian mode. All Apple ARM is LE. Not sure about Android but I’d guess it’s the same. I don’t think I’ve ever seen BE ARM in the wild.

    Big endian is as far as I know extinct for larger mainstream CPUs. Power still exists but is on life support. MIPS and Sparc are dead. M68k is dead.

    X86 has always been LE. RISC-V is LE.

    It’s not an arbitrary choice. Little endian is superior because you can cast between integer types without pointer arithmetic, and because manually implemented math ops are faster on account of being linear in memory. It’s counterintuitive, but everything is faster and simpler.

    Network data and most serialization formats are big endian by convention, a legacy from the early net growing on chips like Sparc and M68k. If it were redone now everything would be LE everywhere.

    • > Little endian is superior because you can cast between integer types without pointer arithmetic

      I’ve heard this one several times and it never really made sense. Is the argument that you can do:

          short s;
          long *p = (long*)&s;
      

      Or vice versa and it kind of works under some circumstances?


Had the same thought. Also confused at the backhanded compliment that pickle got:

> Just look at Python's pickle: it's a completely insecure serialization format. Loading a file can cause code execution even if you just wanted some numbers... but still very widely used because it fits the mix-code-and-data model of python.

Like, are they saying it's bad? Are they saying it's good? I don't even get it. While I was reading the post, I was thinking about pickle the whole time (and how terrible that idea is, too).

  • The article is saying it's good, or at least good enough. I don't necessarily agree with the rest of the article.

  • A thing can be good and bad. Everything is a tradeoff. The reason why C is 'good' in this instance is the lack of safety, and everything else that makes C, C (see?) but that is also what makes C bad.

Yeah, and as you well put it, it isn't even some snowflake feature only possible in C.

The myth persists that it was a gift from the gods, doing stuff nothing else can.

And even in the languages that don't, it isn't as if a tiny Assembly thunk is the end of the world to write, but apparently at the sight of a plain mov people run for the hills nowadays.

  • > And even in the languages that don't, it isn't as if a tiny Assembly thunk is the end of the world to write, but apparently at the sight of a plain mov people run for the hills nowadays.

    Use the right tool for the job. I've always felt it's often the most efficient thing to write a bit of code in assembler, if that's simpler and clearer than doing anything else.

    It's hard to write obfuscated assembler because it's all sitting opened up in front of you. It's as simple as it gets and it hasn't got any secrets.

It’s a terribly useful idea. FTFY.

The program you used to leave your comment, and the libraries it used, were loaded into memory via mmap(2) prior to execution. To use protobuf or whatever, you use mmap.

The only reason mmap isn’t more generally useful is the dearth of general-use binary on-disk formats such as ELF. We could build more memory-mapped applications if we had better library support for them. But we don’t, which I suppose was the point of TFA.

  • Entire libraries are a weird sort of exception. They fundamentally target a specific architecture, and all the nonportable or version-dependent data structures are self-describing in the sense that the code that accesses them is shipped along with the data.

    And if you load library A that references library B’s data and you change B’s data format but forget to update A, you crash horribly. Similarly, if you modify a shared library while it’s in use (your OS and/or your linker may try to avoid this), you can easily crash any process that has it mapped.

It's not a terrible idea. It has its uses. You just have to know when to use it and when not to use it.

For example, to have fast load times and zero temp memory overhead I've used that for several games. Other than changing a few offsets to pointers the data is used directly. I don't have to worry about incompatibilities. Either I'm shipping for a single platform or there's a different build for each platform, including the data. There's a version in the first few bytes just so during dev we don't try to load old format files with new struct defs. But otherwise, it's great for getting fast load times.

  • To support your point, it's also used in basically every shared library / DLL system. While usually used "for code", a "shared pure data library" has many applications. There are also 3rd party tools to make this convenient from many PLangs like HDF5, https://github.com/c-blake/nio with its FileArray for Nim, Apache Arrow, etc.

    Unmentioned so far is that defaults for max live memory maps are usually much higher than defaults for max open files. So, if you are careful about closing files after mapping, you can usually get more "range" before having to move from OS/distro defaults. (E.g. for `program foo*`-style work where you want to keep the foo open for some reason, like binding them to many read-only NumPy array variables.)

Why is it such a terrible idea?

No need to take on the complexity, dependencies and reduced performance of using these libraries.

  • Lots of reasons:

    The code is not portable between architectures.

    You can’t actually define your data structure. You can pretend to, with your compiler’s version of “pack”, with regrettable results.

    You probably have multiple kinds of undefined behavior.

    Dealing with compatibility between versions of your software is awkward at best.

    You might not even get amazing performance. mmap is not a panacea. Page faults and TLB flushing are not free.

    You can’t use any sort of advanced data types — you get exactly what C gives you.

    Forget about enforcing any sort of invariant at the language level.

    • I've written a lot of code using that method, and never had any portability issues. You use types with number of bits in them.

      Hell, I've slung C structs across the network between 3 CPU architectures. And I didn't even use htons!

      Maybe it's not portable to some ancient architecture, but none that I have experienced.

      If there is undefined behavior, it's certainly never been a problem either.

      And I've seen a lot of talk about TLB shootdown, so I tried to reproduce those problems but even with over 32 threads, mmap was still faster than fread into memory in the tests I ran.

      Look, obviously there are use cases for libraries like that, but a lot of the time you just need something simple, and writing some structs to disk can go a long way.


  • No defined binary encoding, no guarantee about concurrent modifications, performance trade-offs (mmap is NOT always faster than sequential reads!) and more.

  • Because a struct might not serialize the same way from one CPU architecture to another.

    The sizes of ints, the byte order and the padding can be different for instance.

    • C has had fixed-size int types since C99. And you've always been able to define struct layouts with perfect precision (struct padding is well defined and deterministic, and you can always use __attribute__((packed)) and bit fields for manual padding).

      Endianness might kill your portability in theory, but in practice nobody uses big endian anymore. Unless you're shipping software for an IBM mainframe, little endian is portable.
