Comment by unnah
1 day ago
Since such algorithms were developed in the 1990s, you can nowadays expect your language's standard library to use them for float-to-decimal and decimal-to-float conversions. So all you need to do in code is print the float without any special formatting instructions, and you'll get the shortest unique decimal representation.
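For instance, in C++20 (assuming a standard library that implements floating-point support in std::format), the default formatting already produces the shortest round-tripping string. A minimal sketch:

```cpp
#include <format>
#include <iostream>

int main() {
    double x = 0.1;  // not exactly representable in binary
    // With no format spec, std::format uses std::to_chars under the hood
    // and emits the shortest decimal string that round-trips to the same double.
    std::cout << std::format("{}", x) << '\n';  // prints "0.1"
}
```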
Except that C specifies that floating-point numbers should be printed with a fixed precision (6 decimal digits) when no precision is given. Internally, implementations do use some sort of float-to-decimal algorithm [1], but you can't get the shortest representation out of them.
[1] Some (e.g. the Windows CRT) do use the shortest representation as a basis, in which case you can actually extract it by requesting a large enough precision (all subsequent digits will be zeros). But many libcs print the exact binary value instead (e.g. 3.140000000000000124344978758017532527446746826171875 for `printf("%.51f", 3.14)`), so they are useless for our purpose.
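To see both behaviors side by side (a small sketch; the output of the second line depends on your libc, as noted above):

```cpp
#include <cstdio>

int main() {
    // With precision omitted, C mandates exactly 6 digits after the point:
    std::printf("%f\n", 3.14);    // prints "3.140000"
    // Asking for many digits exposes the libc's strategy: glibc prints the
    // exact binary value (3.14000000000000012434...), while the Windows CRT
    // pads the shortest representation ("3.14") with zeros.
    std::printf("%.51f\n", 3.14);
}
```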
Ok, you got me there. Looks like they fixed that in C++17 with std::to_chars. https://en.cppreference.com/w/cpp/utility/to_chars.html
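A minimal sketch of using it (assuming a standard library that implements the floating-point overloads; error handling omitted for brevity):

```cpp
#include <charconv>
#include <cstdio>

int main() {
    char buf[32];
    // With no precision argument, std::to_chars produces the shortest
    // decimal string that round-trips back to the same double.
    auto [ptr, ec] = std::to_chars(buf, buf + sizeof buf, 3.14);
    std::printf("%.*s\n", static_cast<int>(ptr - buf), buf);  // prints "3.14"
}
```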
That's what the %g format specifier is for.
This is a common misconception. Quoting the ISO C standard (actually the final draft of C23, but it should be enough for this purpose):
> g, G: A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:
> — if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).
> — otherwise, the conversion is with style e (or E) and precision P − 1.
Note that it doesn't say anything about, say, the inherent precision of the number. It is a simple remapping to %f or %e depending on the precision value.
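You can watch that remapping in action (a quick demo; the expected outputs follow directly from the quoted rules):

```cpp
#include <cstdio>

int main() {
    // With precision omitted, P = 6, so %g keeps at most 6 significant
    // digits and strips trailing zeros; it never searches for the
    // shortest round-tripping representation.
    std::printf("%g\n", 0.1);              // "0.1"     (looks right by luck)
    std::printf("%g\n", 3.14159265358979); // "3.14159" (truncated to 6 digits)
    std::printf("%g\n", 100000000.0);      // "1e+08"   (style e, since X >= P)
}
```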