
Comment by Sharlin

1 day ago

> That's what the %g format specifier is for.
>
>   printf("%f\n", 3.14); // 3.140000
>   printf("%g\n", 3.14); // 3.14

This is a common misconception. Quoting the ISO C standard (actually the final draft of C23, but that should be enough for this purpose):

> g, G: A double argument representing a floating-point number is converted in style f or e (or in style F or E in the case of a G conversion specifier), depending on the value converted and the precision. Let P equal the precision if nonzero, 6 if the precision is omitted, or 1 if the precision is zero. Then, if a conversion with style E would have an exponent of X:
>
> • if P > X ≥ −4, the conversion is with style f (or F) and precision P − (X + 1).
> • otherwise, the conversion is with style e (or E) and precision P − 1.

Note that it says nothing about, say, the inherent precision of the number. It is simply a remapping to %f or %e depending on the precision value (followed by the removal of trailing zeros, unless the # flag is given).
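To see that it really is just a precision-driven choice between the two styles, here is a short sketch; the output comments assume the default precision P = 6, which is what the quoted rule mandates:

  #include <stdio.h>

  int main(void) {
      // With P = 6, %g picks style f while 6 > X >= -4
      // (X is the decimal exponent) and style e otherwise;
      // trailing zeros are then stripped.
      printf("%g\n", 123456.0);  // 123456      (X = 5  -> style f)
      printf("%g\n", 1234567.0); // 1.23457e+06 (X = 6  -> style e)
      printf("%g\n", 0.0001);    // 0.0001      (X = -4 -> style f)
      printf("%g\n", 0.00001);   // 1e-05       (X = -5 -> style e)
      return 0;
  }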

  • Hmm, is that then just an extension that has become a de facto standard? Every compiler I tried at godbolt.org prints 3.140000 and 3.14, respectively.

    • 3.14 is the correct answer both for %g and for the shortest possible representation. Try 1.0 / 3.0 instead; %g won't show all the digits required for round-tripping (see the sketch below).
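      A minimal sketch of that point (assuming IEEE 754 doubles, where DBL_DECIMAL_DIG is 17):

        #include <float.h>
        #include <stdio.h>

        int main(void) {
            double x = 1.0 / 3.0;
            printf("%g\n", x);                    // 0.333333 (6 significant digits,
                                                  // does not round-trip)
            printf("%.*g\n", DBL_DECIMAL_DIG, x); // 0.33333333333333331 (enough digits
                                                  // to recover exactly the same double)
            return 0;
        }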