Comment by cobbal
3 months ago
Little endian does appear strange at first, but if you consider the motivation it makes a lot of sense.
Little endian's most valuable property is that an integer stored at an address has the same layout no matter the width of the integer. If I store an i32 at 0x100 and then load an i16 from 0x100, that's the same as casting (with wrapping) the i32 to an i16, because the "ones digit" (more accurately, the "ones byte") is stored at the same place for both integers.
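A minimal sketch of that property in Rust (it assumes a little-endian target; the variable names and values are just for illustration):

```rust
fn main() {
    let x: i32 = 0x1234_ABCD_u32 as i32;

    // Reinterpret the first two bytes at x's address as an i16.
    // On a little-endian machine those are the low-order bytes,
    // so this matches a wrapping (truncating) cast.
    let low = unsafe { *(&x as *const i32 as *const i16) };

    assert_eq!(low, x as i16); // holds on little-endian targets
    println!("{:#x} narrows to {:#x}", x, low);
}
```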
Since bits aren't addressable, they don't really have an order in memory. The only way to access bits is by loading them into a register, and registers don't meaningfully have an endianness.
> Since bits aren't addressable, they don't really have an order in memory.
Bits aren't addressable in the dominant ISAs today, but they were addressable by popular ISAs in the past, such as the PDP-10 family.
The PDP-10 is one of the big reasons why network byte order is big-endian.
That said, I forget whether the PDP-10 was big-endian or little-endian wrt bits.
I'm not sure I've ever seen that actually come into play. Little endian is obviously the best endian, but I don't think that argument really makes sense.
The most obvious argument is that little endian is clearly the most natural order - the only reason to use big endian is to match the stupid human history of mixing LTR text with RTL numbers.
I've seen one real technical reason to prefer little endian (can't remember what it was tbh but it was fairly niche) and I've never seen any technical reasons to prefer big endian ("it's easier to read in a hex editor" doesn't count).
It depends on the application. Big Endian is pretty good for networking and sorting. If you store the address in Big Endian, you can start doing streaming prefix matching, because the most significant address byte arrives first. When you consider how many routers and switches a packet has to cross, any buffering or endian conversion is going to increase latency.
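A rough sketch of what streaming prefix matching could look like (Rust; `prefix_matches_so_far` is a hypothetical helper, not taken from any real router code):

```rust
/// Match the destination bytes received so far against a routing prefix.
/// Because network order is big-endian, the most significant byte arrives
/// first, so a mismatch (or a full prefix match) can often be decided
/// before the whole address has been received.
fn prefix_matches_so_far(received: &[u8], prefix: &[u8], prefix_bits: usize) -> bool {
    for (i, (&got, &want)) in received.iter().zip(prefix).enumerate() {
        let bits_in_this_byte = prefix_bits.saturating_sub(i * 8).min(8);
        if bits_in_this_byte == 0 {
            break; // past the end of the prefix; nothing left to check
        }
        // Keep only the prefix bits of this byte before comparing.
        let mask = (!0u8) << (8 - bits_in_this_byte);
        if (got & mask) != (want & mask) {
            return false;
        }
    }
    true
}

fn main() {
    // Route 10.1.0.0/16: only the first two wire bytes are needed to decide.
    let route = [10u8, 1, 0, 0];
    assert!(prefix_matches_so_far(&[10, 1], &route, 16));
    assert!(!prefix_matches_so_far(&[10, 2], &route, 16));
}
```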