Comment by jltsiren

7 hours ago

I work in bioinformatics. The numbers are typically large enough that you either have to think about numeric limits all the time if you use 32-bit integers (or bit-packed arrays), or you end up wasting (tens of) gigabytes with 64-bit integers.

I've also done a lot of succinct data structures, data compression, and things like that. When you manipulate the binary representation directly, it's easier to connect representation to unsigned semantics than to signed semantics.

Unsigned integers are usually integers modulo 2^n, which gives them a convenient algebraic structure. Whether you find that intuitive or not probably depends on your education. From my perspective, abstract algebra and discrete mathematics are things you learn in the first year of your CS degree.

Signed ints are also integers modulo 2^n as far as +, -, and * are concerned: two's-complement signed and unsigned ints share exactly the same modular arithmetic structure for those operations. They differ only in ordering comparisons and in / and %, and for those operations neither signed nor unsigned ints have any convenient algebraic structure commonly encountered elsewhere.