
Comment by throw0101a

2 days ago

> The digital cameras used on these launches basically show a white blob with no detail due to digital cameras having such low dynamic range compared to film.

Film negatives have a dynamic range of between 12 and 15 stops, but a whole bunch of that can be lost when transferred to an optical print (perhaps less if digitally scanned).

The Arri ALEXA Mini LF has 14.5 stops of dynamic range, and the ALEXA 35 has 17 (Table 2):

* https://www.arri.com/resource/blob/295460/e10ff8a5b3abf26c33...

I believe it's possible to get higher than that; this work by Kodak, for example, shows 20(!) stops on film [1]. I seem to remember reading somewhere that, for example, Kodak T-Max 100 can be pushed up to 18 stops, maybe higher. The limitation is usually not the film itself but the development processes used, I think?

It's also crucial to note what SNR they use as the cutoff when stating their dynamic range in stops, in addition to their tone curve.

I'm only a hobbyist though, perhaps someone else can enlighten me further.

Digital is mostly limited by bits, since a 14-bit image with a linear tone curve can hold at most 14 stops of information, right? So we shouldn't expect to see higher figures until camera manufacturers move past 14-bit as a standard, as in the Arri cameras. They use a 16-bit sensor and squeeze the last stop out by using a more gradual tone curve in the shadows. Technically that means the shadow stops contain less information than the highlight stops, so not all stops are equal, I believe (quite confusing; see the sketch below the footnote).

[1]: "Assessing the Quality of Motion Picture Systems from Scene-to-Digital Data" in the February/March 2002 issue of the SMPTE Journal (Volume 111, No. 2, pp. 85-96).

  • I actually used to design image sensor chips. The dynamic range comes from the electron well size. Each pixel has a diode that typically stores between 10,000 and 100,000 electrons. While the shutter is open, each arriving photon pushes an electron out across the diode. When the shutter closes, the sensor counts how many electrons remain; that is how it calculates how much light each pixel received.

    The well size itself is usually a function of the pixel size. A larger pixel means a larger diode that can store more electrons, and hence a larger range of light that can be measured - dynamic range.
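
    As a rough back-of-the-envelope, the dynamic range then follows from the ratio of full-well capacity to the noise floor. A minimal sketch, assuming read noise as that floor (the parent comment doesn't specify one; the electron counts are the illustrative 10,000-100,000 range from above):

    ```python
    import math

    def sensor_dr_stops(full_well_e: float, read_noise_e: float) -> float:
        """Idealized engineering dynamic range: log2(full well / noise floor).

        Real spec sheets quote DR at a chosen SNR cutoff and after the tone
        curve, so published figures won't match this simple ratio exactly.
        """
        return math.log2(full_well_e / read_noise_e)

    # Hypothetical values: a small pixel vs. a large cinema-camera pixel.
    print(f"{sensor_dr_stops(10_000, 3):.1f} stops")   # ~11.7
    print(f"{sensor_dr_stops(100_000, 2):.1f} stops")  # ~15.6
    ```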

  • What we're doing here to get higher SNR, generally, is growing the CMOS sensors larger and larger. The limitation is ultimately either depth of field or IC manufacturing issues. A hypothetical meter-wide sensor could be manufactured and combined with a catadioptric lens of extreme cost, but you'd expect most of a scene to be bokeh, as in macro or microscope lenses.

    In reality there are limits imposed by manufacturing. At the extreme, we have wafer-scale sensors used in, e.g., night-time wildlife videography - https://www.imdb.com/title/tt11497922/ . Anything larger than that is typically a not-perfectly-contiguous array of smaller chips.

    You can also cryocool the camera, at the expense of weight, versatility, and complexity. Most astrophotography is collected with cryocooled CCD or cryocooled CMOS sensors. This helps much more with long exposures than it does with video, but it does help.
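
    For intuition on why cooling helps: dark current in silicon drops roughly exponentially with temperature. A hedged sketch using the common rule of thumb that it halves for every ~6 °C of cooling (the 6 °C doubling interval is an assumption; real sensors vary):

    ```python
    def dark_current(e_per_s_at_ref: float, temp_c: float,
                     ref_temp_c: float = 20.0,
                     doubling_c: float = 6.0) -> float:
        """Rule-of-thumb dark current scaling: doubles every `doubling_c` deg C.

        doubling_c = 6.0 is an assumed typical value; real sensors vary.
        """
        return e_per_s_at_ref * 2 ** ((temp_c - ref_temp_c) / doubling_c)

    # Cooling from 20 C to -40 C cuts dark current by ~2**10, about 1000x,
    # which matters enormously for minutes-long astro exposures but far
    # less for the short frames of video.
    print(dark_current(1.0, -40.0))  # ~0.001 e-/s
    ```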

  • > Digital is mostly limited by bits, since a 14 bit image with a linear tone curve will have at most 14 stops of info right?

    Bit depth ≠ dynamic range.

    The dynamic range is about the ratio between the highest and lowest values that can be measured ("stops" express that ratio in log_2, dB in log_10):

    * https://en.wikipedia.org/wiki/Dynamic_range#Human_perception

    * https://en.wikipedia.org/wiki/Dynamic_range#Photography

    The bits are about the gradation within that range. You can have a 12-stop image recorded using a 10-bit, 12-bit, 14-bit, or 16-bit format (see the sketch at the end of this comment).

    And film, at least, does not have a linear curve once you get to the darkest and lightest parts. That's why there's the old saying "expose for the shadows, develop for the highlights":

    * https://www.youtube.com/watch?v=rlnt5yFArWo

    * https://www.kimhildebrand.com/how-to-use-the-zone-system/

    * https://en.wikipedia.org/wiki/Zone_System
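
    To make the bit-depth point above concrete, a minimal sketch assuming a simple pure-log encoding (real camera log curves such as Arri's LogC are more elaborate). The same 12-stop range fits in any of these bit depths; the bits only set how finely each stop is sliced:

    ```python
    import math

    SCENE_STOPS = 12  # fixed scene dynamic range

    def encode_log(linear: float, bits: int) -> int:
        """Map a linear value in [1, 2**SCENE_STOPS] onto an N-bit code
        with a pure log curve, so every stop gets an equal share of codes."""
        max_code = 2 ** bits - 1
        return round(math.log2(linear) / SCENE_STOPS * max_code)

    for bits in (10, 12, 14, 16):
        codes_per_stop = (2 ** bits - 1) / SCENE_STOPS
        print(f"{bits}-bit: ~{codes_per_stop:.0f} codes per stop")
    # 10-bit: ~85 codes/stop ... 16-bit: ~5461 codes/stop. Either way the
    # file spans the same 12 stops; more bits just means smoother gradation.

    # The same mid-scene value lands at mid-code at any bit depth:
    for bits in (10, 16):
        print(encode_log(2.0 ** 6, bits))  # 512 of 1023, 32768 of 65535
    ```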

  • > The limitation is not usually the film itself but the development process' used

    I respectfully doubt that; development is a combination of techniques that lets you do many things with your raw data, and the line between that and special effects is quite blurry (joke intended).

    One way to get an HDR-like result with film and cheap, non-advanced equipment is to make several prints of the same film onto the same paper, with different exposure parameters. That way you combine different ranges of the image (e.g. stops 1-4 + stops 4-10 + stops 10-18) to produce your final image. This is great darkroom craft (a digital analogue is sketched at the end of this comment).

    The only limit is the chemistry of the film used (which gives grain at almost nanometer scale), multiplied by the size of the film.

    Side note: a print is basically a picture of a picture, (usually) made with different chemicals and a different photographic setup.
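
    The digital analogue of that multi-print trick is merging bracketed exposures. A minimal sketch with a naive mid-tone-weighted average, not any particular published fusion algorithm (the exposure factors and synthetic scene are made up for illustration):

    ```python
    import numpy as np

    def merge_exposures(images, exposures):
        """Naive HDR merge: weight each pixel by how far it is from
        clipping, then average the exposure-normalized values."""
        acc = np.zeros(images[0].shape, dtype=np.float64)
        weights = np.zeros_like(acc)
        for img, e in zip(images, exposures):
            x = img.astype(np.float64) / 255.0
            w = 1.0 - np.abs(x - 0.5) * 2.0  # trust mid-tones, not clipped ends
            acc += w * x / e                  # normalize by relative exposure
            weights += w
        return acc / np.maximum(weights, 1e-6)

    # Toy usage: three synthetic 8-bit "brackets" of the same gradient scene.
    scene = np.linspace(0.001, 1.0, 8)   # relative scene radiance
    exposures = (0.25, 2.0, 16.0)        # relative exposure factors
    brackets = [np.clip(scene * e * 255, 0, 255).astype(np.uint8)
                for e in exposures]
    print(merge_exposures(brackets, exposures))  # ~recovers the gradient
    ```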

  • 20 stops of dynamic range is about the human eye's range. Achieving that in digital capture and display would be mind-blowing.

> a whole bunch can be lost when transferred to optical print

I’m not sure if by "optical print" [0] you mean a film development process (like C-41), but the info is not lost; it stays on the film. The developer’s job is to fine-tune the parameters so the print shows the information you’re seeking, and that includes adjusting the white- and black-point thresholds (range). You can also make several prints if you want to extract more information, and print so large that you can see the grain shapes! If anything is lost, it’s lost when the picture is taken; after that, it’s up to you to exploit it the way you need.

It’s very similar to a digital camera capturing RAWs and the photographer finishing the picture in software like Camera Raw, or what some modern phones do automatically for you (a minimal sketch follows below).

[0] Not a native English speaker; perhaps this is a synonym of "development"?
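
For the digital half of that analogy, a minimal sketch of what "adjusting white and black point thresholds" does to a linear RAW-like array (the levels and sample values are arbitrary):

```python
import numpy as np

def apply_levels(raw: np.ndarray, black: float, white: float) -> np.ndarray:
    """Map [black, white] in the source data to [0, 1] and clip.

    Values outside the window are crushed/blown in the *output*,
    but the source array still holds them, just like the negative.
    """
    out = (raw.astype(np.float64) - black) / (white - black)
    return np.clip(out, 0.0, 1.0)

raw = np.array([50, 500, 5_000, 50_000])   # toy linear sensor values
print(apply_levels(raw, black=100, white=20_000))
# A second "print" from the same data can target the shadows instead:
print(apply_levels(raw, black=10, white=1_000))
```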

  • > I’m not sure if by "optical print"[0] you mean a film developing process (like C41), but the info is not lost and stays on the film.

    You have a negative, which you develop.

    For photos, you then have to transfer that to paper. For cinema, you want to distribute it, so you have to take the originally captured image(s) and strike copies for distribution.

    In both cases, because it's an analog process, things will degrade with each generation.

    Of course, if you scan the negative, then further copies after that are easy to duplicate.