Comment by strogonoff

5 years ago

Lately, an area of interest of mine in digital photography is scene-referred vs. output-referred image formats and subjective perception. The author (justifiably) skirts this topic in his camera sensor overview, but the data a sensor captures from photons hitting it doesn’t really contain a readily viewable image.

Data values captured by a modern camera far exceed the ranges that can be reproduced by output media such as paper or screens. This means the only way[0] to obtain an image we can communicate to someone else (or our future selves) as a static artifact is by throwing out data and conforming values to ranges that fit the output color space, converting scene-referred (“raw”) data to output-referred.
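The scene-to-output conversion described above can be sketched in a few lines. This is a minimal, hypothetical example assuming a global Reinhard tone curve and a simple gamma encode; a real raw pipeline also involves demosaicing, white balance, and a camera-specific color matrix. The function name and parameters are illustrative, not from any particular library.

```python
import numpy as np

def scene_to_output(linear, exposure=1.0, gamma=2.2):
    """Map scene-referred linear values (arbitrary range, possibly >> 1)
    to output-referred values in [0, 1].

    A global Reinhard curve compresses highlights, then a gamma encode
    approximates a display transfer function. The final clip is where
    data is irreversibly thrown away.
    """
    x = linear * exposure
    compressed = x / (1.0 + x)           # Reinhard: maps [0, inf) into [0, 1)
    encoded = compressed ** (1.0 / gamma)
    return np.clip(encoded, 0.0, 1.0)

# Scene-referred data spans far more than the output range:
scene = np.array([0.01, 0.5, 1.0, 16.0, 400.0])
print(scene_to_output(scene))
```

Note that the mapping is monotonic but heavily compressive: the difference between 16 and 400 in the scene collapses into a small sliver of the output range.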

This is where subjective perception comes in: how we perceive colors and shapes in a given scene depends a lot on what we had seen prior to that scene, our general mood, and various other aspects of our state of mind. It’s only by taking control of processing scene-referred data that we can use the full range of captured values to try to convey, as convincingly as possible within the constraints of the output space, our subjective perception of the scene around the time we pressed the shutter.

(Naturally, further down this rabbit hole come the questions about e.g. what our conscious perception—but not the camera—was blind to in the scene, and eventually about the nature of reality itself, at which point one may feel compelled to give up and go into painting instead.)

[0] This would be quite niche, but I wonder if we could develop tools for exploring raw data at the viewing stage, allowing the audience to effortlessly adjust their perception of the scene (even if within ranges specified by the photographer). Such exploration would require significant computing power, but we’ll probably be there in a couple of years.

If I understand correctly, you're looking for a simplified RAW image editor? Many digital cameras allow storing RAW images alongside JPEG. The viewer can then load the RAW images into any (web-based) image viewer/editor that supports RAW format and have full control over tone mapping.

The tool’s interface would need to be simplified to make it a better fit for the use case you present, but I don’t see computing power as a bottleneck.

Of course, they’d still be limited by the dynamic range of the camera. This can also be addressed by computing an irradiance map from multiple RAW images taken with different exposure times.
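The multi-exposure merge mentioned above can be sketched as a weighted average in the style of Debevec and Malik. This is a simplified illustration assuming a linear sensor response (true for raw data) and values normalized to [0, 1]; the function name and the hat-shaped weight are assumptions, not a reference implementation.

```python
import numpy as np

def merge_exposures(images, exposure_times):
    """Estimate a relative irradiance map from bracketed linear frames.

    Each pixel's irradiance estimate is a weighted average of
    value / exposure_time across frames. The hat weight trusts
    mid-range values and down-weights pixels near the noise floor
    or the clipping point.
    """
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        w = 1.0 - np.abs(2.0 * img - 1.0)   # hat weight, peaks at 0.5
        num += w * img / t
        den += w
    return num / np.maximum(den, 1e-12)     # guard against all-clipped pixels
```

For example, a highlight that clips at the long exposure is still recovered from the short one, because the clipped sample gets zero weight.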

  • I think there is a fundamental difference between editor (designed to produce a deliverable) and viewer (designed for immediate experience) software. One thing essential to the latter but not really to the former is immediacy, hence I suspect that the computing power commonly available today makes it impossible for now (but likely not for much longer).

    • (Edit: “not really to the former”)

      Apart from performance, another crucial thing is that the viewer must not have to think about technical aspects (like exposure, color profile, etc.), as you noted, so the GUI would have to be radically different.

      I am envisioning producers bundling N processing profiles with their “digital negative”, and software that somehow allows the user to fluidly explore the perception of the scene by interpolating inside an (N-1)-dimensional space bounded by parameters in those profiles with really low latency.
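The profile-interpolation idea above can be sketched as a convex combination of N parameter vectors, with the viewer’s position in the (N-1)-simplex supplying the weights. This is a hypothetical sketch; `blend_profiles` and the (N, P) parameter layout are made-up illustrations, not part of any existing format.

```python
import numpy as np

def blend_profiles(profiles, weights):
    """Interpolate between N processing profiles.

    `profiles` is an (N, P) array: N bundled profiles, each with P
    parameters (exposure, contrast, white-balance shift, ...).
    `weights` are N barycentric coordinates chosen by the viewer's
    position inside the (N-1)-simplex spanned by the profiles.
    """
    profiles = np.asarray(profiles, dtype=np.float64)
    w = np.asarray(weights, dtype=np.float64)
    w = w / w.sum()                       # normalize onto the simplex
    return w @ profiles                   # convex combination, shape (P,)
```

Since the blend is a single matrix-vector product, re-rendering as the user drags through the profile space is cheap; the latency cost lies in re-applying the blended parameters to the raw data, not in the interpolation itself.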

To clarify, the alternative to taking control of processing scene-referred data is to use the camera’s JPEG rendering of its own idea of how the scene should look.