No, it isn't. Absolute luminance is a "feature" of PQ specifically, used by HDR10(+) and most DolbyVision content (notably, the DolbyVision produced by an iPhone is not PQ, so it's not "real" DolbyVision). But this is not the only form of HDR, and it's not even the common form for phone cameras. HLG is a lot more popular for cameras, and it is not absolute luminance. The gainmap-based approach that Google, Apple, and Adobe are all using is not absolute luminance either; in fact it flips that entirely and is SDR-relative instead, which is a much better approach to HDR than what video initially went with.
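To make the SDR-relative part concrete: a gain map stores a per-pixel brightness boost on top of the SDR rendition, so the same file adapts to whatever headroom the display actually has. A minimal sketch of that reconstruction (the names and the single-channel, stop-based interpolation are my simplification, not any particular vendor's spec):

    import numpy as np

    def reconstruct_hdr(sdr_linear, gain_map,
                        min_boost_stops=0.0, max_boost_stops=2.0,
                        display_headroom_stops=2.0):
        """Rebuild an HDR rendition from an SDR base image plus a gain map.

        sdr_linear: linear-light SDR pixels, normalized so 1.0 == SDR white.
        gain_map:   per-pixel values in [0, 1] saying how much brighter the
                    HDR version is, interpolated between the boost bounds.
        The result stays relative to SDR white rather than absolute nits,
        so it can be clamped to whatever headroom the display reports.
        """
        # Per-pixel boost in stops, interpolated between the map's bounds.
        boost = min_boost_stops + gain_map * (max_boost_stops - min_boost_stops)
        # Never ask for more than the display can actually show.
        boost = np.minimum(boost, display_headroom_stops)
        # HDR = SDR * 2^stops; SDR white stays at 1.0, highlights go above it.
        return sdr_linear * np.exp2(boost)

On an SDR-only display the headroom is 0 stops and you simply get the base image back, which is exactly the graceful-fallback property the absolute-luminance formats lack.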
Ideally in the abstract it could be just that, but in practice it's an umbrella name for many different techniques that provide some aspect of that goal.
Not in the more general sense! It can refer to what its acronym spells out directly: a bigger range between the dimmest and brightest capabilities of a display, imaging technique, etc.
No. HDR can encode high dynamic range because (typically) it uses floating point encoding.
From a technical point of view, HDR is just a set of standards and formats for encoding absolute-luminance scene-referred images and video, along with a set of standards for reproduction.
No. HDR video (and images) don't use floating point encoding. They generally use a higher bit depth (10 bits or more vs 8 bits) to reduce banding, and different transfer characteristics (i.e. PQ or HLG vs sRGB or BT.709), in addition to different YCbCr matrices and mastering metadata.
And no, it's not necessarily absolute luminance. PQ is absolute, HLG is not.
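The difference is visible in the transfer functions themselves: the PQ EOTF (SMPTE ST 2084) maps a code value directly to cd/m², independent of the display, whereas HLG is defined relative to the display's peak brightness. A quick sketch of the PQ side (the constants are the standard ones; the example code values are my own rough calculation):

    import numpy as np

    # SMPTE ST 2084 (PQ) constants.
    M1 = 2610 / 16384        # 0.1593017578125
    M2 = 2523 / 4096 * 128   # 78.84375
    C1 = 3424 / 4096         # 0.8359375
    C2 = 2413 / 4096 * 32    # 18.8515625
    C3 = 2392 / 4096 * 32    # 18.6875

    def pq_eotf_nits(code_value):
        """Map a normalized PQ code value in [0, 1] to absolute luminance in cd/m^2."""
        e = np.asarray(code_value, dtype=np.float64) ** (1.0 / M2)
        y = (np.maximum(e - C1, 0.0) / (C2 - C3 * e)) ** (1.0 / M1)
        return 10000.0 * y  # code value 1.0 is defined as exactly 10,000 nits

    # The same code value always means the same light level, on any display:
    print(pq_eotf_nits([0.508, 0.752, 1.0]))  # roughly 100, 1000, 10000 nits

HLG has no such table: its signal is scaled by whatever the display's nominal peak is, so the same code value can land at different luminances on different screens.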
I think most HDR formats do not typically use 32-bit floating point. The first HDR file format I can remember is Greg Ward’s RGBE format, which is now more commonly known as .HDR and I think is pretty widely used.
https://www.graphics.cornell.edu/~bjw/rgbe.html
It uses a type of floating point, in a way, but it’s a shared 8-bit exponent across all 3 channels, and the channels are still 8 bits each, so the whole thing fits in 32 bits. Even the .txt file description says it’s not “floating point” per se, since that implies IEEE single-precision floats.
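For anyone who hasn't seen it, the packing is neat: take the largest of R, G, and B, pull out its power-of-two exponent, and store all three channels as 8-bit mantissas against that one shared exponent. A rough sketch of the idea (loosely following the reference rgbe.c, not a drop-in replacement for it):

    import math

    def float_to_rgbe(r, g, b):
        """Pack three non-negative floats into (r8, g8, b8, e8) with a shared 8-bit exponent."""
        v = max(r, g, b)
        if v < 1e-32:
            return (0, 0, 0, 0)
        mantissa, exponent = math.frexp(v)   # v == mantissa * 2**exponent, mantissa in [0.5, 1)
        scale = mantissa * 256.0 / v         # == 256 / 2**exponent
        return (int(r * scale), int(g * scale), int(b * scale), exponent + 128)

    def rgbe_to_float(r8, g8, b8, e8):
        """Unpack RGBE back to floats (lossy: at best ~8 bits of mantissa per channel)."""
        if e8 == 0:
            return (0.0, 0.0, 0.0)
        f = math.ldexp(1.0, e8 - (128 + 8))  # 2**(exponent - 136)
        return (r8 * f, g8 * f, b8 * f)

    # The brightest channel keeps its ~8 bits of precision; much dimmer channels
    # lose theirs, but the shared exponent covers a huge range in 32 bits per pixel.
    print(rgbe_to_float(*float_to_rgbe(1000.0, 10.0, 0.25)))  # (1000.0, 8.0, 0.0)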
Cameras and displays don’t typically use floats, and even CG people working in HDR and using, e.g., OpenEXR, might use half floats more often than full floats.
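For a sense of why halves are usually enough: a half float still covers around 30 stops between its smallest normal value and its maximum, with roughly 11 bits of effective mantissa, at half the storage of 32-bit floats. A quick check using numpy's float16, which matches the IEEE 16-bit format OpenEXR's half uses:

    import numpy as np

    info = np.finfo(np.float16)
    # Smallest normal and largest finite half-float values: about 6.1e-05 and 65504.
    print(info.tiny, info.max)
    # About 30 stops between them, far more range than 8-bit integer channels.
    print(np.log2(float(info.max) / float(info.tiny)))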
Some standards do exist, and the situation is improving over time, but the ideas and execution of HDR in various ways preceded any standards, so I think it’s not helpful to define HDR as a set of standards. From my perspective working in CG, HDR began as a way to break away from 8 bits per channel RGB; it included improving both color range and color resolution, and started the discussion of using physical metrics as opposed to relative [0..1] ranges.