Comment by kevin_thibedeau
1 day ago
Systems programmers love to hate on unsigned integers. Generations have been infected with the Java world model that integers have to be pretend number lines centered on zero. Guess what, you still have boundary conditions to deal with. There are times when you really, really need to use the full word range without negative values. This happens more often in low-level programming and on machines with small word sizes, something fewer people are engaged in these days. It doesn't need to be the default: Ada sequesters them as modular types, but they're available when needed.
Java doesn't have unsigned primitive types because James Gosling ran a series of interviews at Sun among "expert" C devs, and they all got the C rules for unsigned arithmetic wrong.
Yes, I miss them as primitives in Java; however, there are utility methods for unsigned arithmetic that get it right.
The way he conducted those interviews, and the conclusions he drew from them, may have been flawed, because the situation now is that C has unsigned types and Java mostly does not.
And despite all the pitfalls, especially around mixing signed and unsigned in C, unsigned types are very useful; in fact, I'd say that for low-level programming they are essential.
That doesn't seem to affect the extent to which Java is used across the industry, including many workloads for which, in the last century, companies would have used C instead.
Books like those on the Yourdon Structured Method were mainly targeted at business C back in the day.
Java has char as an unsigned 16-bit integer type. They should have made byte unsigned as well.
Usually you don't do arithmetic with char in Java; this isn't C's anything-goes culture.
3 replies →
Having them available is not the issue; using them for sizes and indices is what causes a lot of tricky bugs.
I find it the opposite: unsigned integers are intuitive, while signed integers are unintuitive and cause a lot of tricky bugs, especially in languages where signed overflow is undefined behavior.
It's pretty rare to have values that can be negative but are always integers, at least in the work I do. The most common case I encounter is approximations of something related to log probability, such as various scores in dynamic programming and graph algorithms.
Most of the time, when you deal with integers, you need special handling to avoid negative values. Once you get used to thinking about unsigned integers, you quickly develop robust ways of avoiding situations where the values would be negative.
It is interesting that you find unsigned integers more intuitive. My experience (with students, but analysis of CVEs also gives plenty of evidence) is that the opposite is true: signed integers in C are a model of integers with the nice mathematical structure people learn in elementary school. Yes, this breaks down on overflow, but for that you have to reach very high numbers, and there is very good tooling to debug it. In contrast, unsigned integers in C are modulo arithmetic, which people learn at university, if at all, and get wrong all the time, and the errors are mostly subtle and very difficult to find automatically.
You are right that you often need to constrain an integer to be non-negative or positive, but usually not during arithmetic; rather, at certain points in the logic of a program. In my experience, that is better expressed as an assertion.
6 replies →
Why does an unsigned type for sizes or indices fare worse than a signed type? When do I want the -247th element in an array? When do I have a block that is -10 bytes in size?
Because doing subtraction on sizes/indices is common, and signed handles the case where you subtract below 0, while unsigned yields unintuitive results. That is, unsigned fails silently, e.g. when looping to the second-to-last item in an array or getting the index before a given index.
The source of confusion is that "unsigned" is a terrible name. Unsigned does not mean non-negative: it's 100% valid to assign a negative value to an unsigned, it just fails silently.
If you want non-negative integers, you should make a wrapper class that enforces non-negativity at compile time and runtime.
4 replies →
The reason is not that you want a negative index or size, but that you want the computation of the index to be correct, and you want errors to be obvious. Both turn out to be easier with signed types.
There are (rare) times when you want negative array indices. C lets you index in both directions from a pointer into the middle of an array; that's why array indexing is signed in C. Some libc ctype lookup tables do this. For sizes there is no strong case for negatives, other than to shoehorn them into signed operations.
4 replies →
> When do I want the -247th element in an array?
You never want any element of an array except elements within the range [0, array_length); anything outside of that is undefined behavior.
I think people tend to overthink this. A function that takes an index argument should simply return a result when the index is within the valid range, and an error when it's outside of it (regardless of whether it's too low or too high). It doesn't particularly matter whether the integer is signed.
If you aren't storing 2^64 elements in your array (and you probably aren't - most systems don't even support addressing that much memory), then the only thing unsigned gets you is a bunch of footguns (like those described in the OP article).
In Java, unsigned arithmetic is available through an API and, as you said, it is pretty much only needed when marshalling to certain wire protocols or for FFI. Built-in unsigned types are useful primarily for bitfields or similar tiny types with up to 6 bits or so.
I miss them for bit juggling, like parsing file headers or network packets.
However, I concede that writing a few helper methods isn't that much of a burden.
I think all the unsigned arithmetic you need is already offered. Unsigned shift right is an operator; the primitive wrappers offer compareUnsigned, divideUnsigned, and remainderUnsigned, as well as conversion methods; unsigned exponentiation is offered in Math (because signed types in Java wrap, there's no need for special unsigned addition/subtraction).
> Systems programmers love to hate on unsigned integers
I don't see this hate in Rust. I think this is a big thing in the C-related languages, and that the author has chosen to pretend that's the same for any "systems language" but it is not.
> There are times when you really really need to use the full word range without negative values.
There are a few of those, but that is the niche case, certainly when we're talking about 64-bit size types. And if you want to cater to smaller size types, then just template over the size type. Or, OK, some other trick if it's C rather than C++.
Sometimes (and very often in some scenarios/industries, e.g. HPC for graphics and simulation, with indices for things like points, vertices, primvars, and voxels) you also want good efficiency in the size of the datatype, for memory/cache performance reasons: you're storing millions of them, and you need random addressing, so you can't really bit-pack to, say, 36 bits, at least not without overhead away from native types, which are really needed for maximum speed without any branching.
Losing half the range to make them signed, when you only care about positive values 95% of the time (and in the rare case when you do any modulo on top of them, you can cast or write wrappers), is just a bad trade-off.
Yes, you've still then only doubled the range to 2^32, and you'll still hit it at some point, but that extra bit of range can make a lot of difference from a memory/cache-efficiency standpoint without jumping to 64-bit.
So uint32_t is very often a good sweet spot for sizes: int32_t is sometimes too small, and (u)int64_t is generally not needed and too wasteful.
As I generally believed in Moore's law, i.e., accepted the notion that transistor counts grew exponentially, I was surprised at how long the difference between a 2 GiB address space and a 3 GiB address space remained relevant in practice.
In theory, it should have been at most a year. In practice, the Windows XP /3GB boot switch (which allocates 3 GiB of virtual address space to user mode and 1 GiB to the kernel, instead of the usual 2 and 2) was relevant for many years.
1 reply →
> HPC for graphics and simulation with indices
Those are not sizes of data structures.
> Losing half the range
It's not part of the range of sizes they can use with any typical data structure.
> Losing half the range to make them signed when you only care about positive values 95% of the time is just a bad trade-off.
It's the right choice for sizes in the standard library (in C++) or in standard-ish/popular libraries in C. And, again, it's the wrong type: for example, even if you only care about positive values, their difference is not necessarily positive.