Comment by 4ad

2 days ago

> No. HDR can encode high dynamic range because (typically) it uses floating point encoding.

> From a technical point of view, HDR is just a set of standards and formats for encoding absolute-luminance scene-referred images and video, along with a set of standards for reproduction.

No. HDR video (and images) don't use floating point encoding. They generally use a higher bit depth (10 bits or more vs 8) to reduce banding, different transfer characteristics (PQ or HLG rather than the sRGB or BT.709 curves), different YCbCr matrices, and mastering metadata.

And no, it's not necessarily absolute luminance. PQ is absolute, HLG is not.
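To make that concrete, here's a rough sketch (my own illustration, not taken from the spec text; the function names and the narrow-range quantization choice are just for the example) of how PQ (SMPTE ST 2084) maps absolute luminance in nits to a 10-bit integer code. The floating point only exists in the math; what gets stored is the integer code:

```python
# Rough PQ (SMPTE ST 2084) encoding sketch: absolute nits -> 10-bit integer code.
M1 = 2610 / 16384
M2 = 2523 / 4096 * 128
C1 = 3424 / 4096
C2 = 2413 / 4096 * 32
C3 = 2392 / 4096 * 32

def pq_encode(nits):
    """Absolute luminance in cd/m^2 (0..10000) -> non-linear PQ signal in [0, 1]."""
    y = max(nits, 0.0) / 10000.0
    yp = y ** M1
    return ((C1 + C2 * yp) / (1 + C3 * yp)) ** M2

def to_10bit(signal):
    """Quantize the [0, 1] signal to a narrow-range 10-bit code (64..940),
    as is common for video; full-range 0..1023 encoding also exists."""
    return round(876 * signal + 64)

# 100 nits lands around code 510, a 1000-nit highlight around 723 --
# integer codes in the bitstream, floating point only in the math above.
print(to_10bit(pq_encode(100)), to_10bit(pq_encode(1000)))
```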

  • Isn’t HLG using floating point(s)?

    Also DCI-P3 should fit in here somewhere, as it seems to be the most standardized color space for HDR. I would share more insight, if I had it. I thought I understood color profiles well, but I have encountered some challenges when trying to display in one, edit in another, and print “correctly”. And every device seems to treat color profiles a little bit differently.

    • > Isn’t HLG using floating point(s)?

      All transfer functions can generally work on either integer range or floating point. They basically just describe a curve shape, and you can have that curve be over the range of 0.0-1.0 just as easily as you can over 0-255 or 0-1023.

      Extended sRGB is about the only thing that basically requires floating point, as it specifically describes 0.0-1.0 as being equivalent to sRGB and then has a valid range larger than that (you end up with something like -0.8 to 2.4 or greater). Representing that in the integer domain is conceptually possible but not really practical.

      > Also DCI-P3 should fit in here somewhere, as it seems to be the most standardized color space for HDR.

      BT.2020 is the most standardized color space for HDR. DCI-P3 is the most common gamut of HDR displays you can actually afford, but it's smaller than what most HDR profiles expect (HDR10, HDR10+, and "professional" DolbyVision are all BT.2020, which is wider than P3). Which also means most HDR content specifies a color gamut it doesn't actually benefit from: it's still authored to use somewhere between the sRGB and DCI-P3 gamuts, since that's all anyone who views it will actually have.

    • You can read the actual HLG spec here: https://www.arib.or.jp/english/html/overview/doc/2-STD-B67v2...

      The math uses real numbers, but table 2-4 ("Digital representation") describes how the signal is converted between that real-valued form and its digital representation. The stored signal is quantized to integers.

      This same quantization process is done for sRGB, BT.709, BT.2020, etc., so it's not unique to HLG. It's just how digital images/video are stored.
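
      Here's a rough sketch of how that plays out (my own illustration, using the BT.2100 normalization of the HLG OETF rather than the ARIB notation, plus the usual narrow-range 10-bit quantization; the function names are just for the example). The curve itself is defined over real numbers in 0.0-1.0, and what gets stored is the integer code after quantization:

      ```python
      import math

      # HLG OETF in the BT.2100 normalization: scene-linear E in [0, 1] -> signal in [0, 1].
      A = 0.17883277
      B = 1 - 4 * A
      C = 0.5 - A * math.log(4 * A)

      def hlg_oetf(e):
          """Just a curve shape over real numbers; nothing about it fixes a bit depth."""
          if e <= 1 / 12:
              return math.sqrt(3 * e)
          return A * math.log(12 * e - B) + C

      def quantize_10bit(signal):
          """Narrow-range ("video level") 10-bit quantization; sRGB/BT.709/BT.2020
          signals get turned into integers the same way."""
          return round(876 * signal + 64)

      # Real-valued scene light in, 10-bit integer codes out (1.0 maps to code 940).
      for e in (0.0, 0.05, 0.25, 1.0):
          print(e, quantize_10bit(hlg_oetf(e)))
      ```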

I think most HDR formats do not typically use 32-bit floating point. The first HDR file format I can remember is Greg Ward’s RGBE format, which is now more commonly known as .HDR and I think is pretty widely used.

https://www.graphics.cornell.edu/~bjw/rgbe.html

It uses a type of floating point, in a way, but it’s a shared 8-bit exponent across all 3 channels, and the channels are still 8 bits each, so the whole thing fits in 32 bits. Even the .txt file description says it’s not “floating point” per se, since that implies IEEE single-precision floats.
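
Here's a rough sketch of the shared-exponent idea (my own simplification, loosely following the encode/decode logic in Ward's rgbe.c rather than reproducing it; the function names are just for the example):

```python
import math

def float_to_rgbe(r, g, b):
    """Pack linear RGB floats into four bytes: 8-bit mantissas for R, G, B
    plus one shared 8-bit exponent (biased by 128)."""
    v = max(r, g, b)
    if v < 1e-32:
        return (0, 0, 0, 0)
    mant, exp = math.frexp(v)      # v == mant * 2**exp, with 0.5 <= mant < 1
    scale = mant * 256.0 / v       # == 256 / 2**exp
    return (int(r * scale), int(g * scale), int(b * scale), exp + 128)

def rgbe_to_float(r, g, b, e):
    """Unpack the four bytes back to (approximately) the original floats."""
    if e == 0:
        return (0.0, 0.0, 0.0)
    f = math.ldexp(1.0, e - (128 + 8))
    return (r * f, g * f, b * f)

# A pixel much brighter than 1.0 survives the round trip -- something plain
# 8-bit-per-channel storage can't represent at all.
print(rgbe_to_float(*float_to_rgbe(12.5, 3.0, 0.25)))   # -> (12.5, 3.0, 0.25)
```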

Cameras and displays don’t typically use floats, and even CG people working in HDR and using, e.g., OpenEXR, might use half floats more often than full floats.

Some standards do exist, and standardization is improving over time, but the ideas and execution of HDR in various ways preceded any standards, so I don’t think it’s helpful to define HDR as a set of standards. From my perspective working in CG, HDR began as a way to break away from 8 bits per channel RGB; it included improving both color range and color resolution, and it started the discussion of using physical metrics as opposed to relative [0..1] ranges.