Comment by vFunct
2 days ago
I wish the cameras used film like NASA did for Saturn V. The digital cameras used on these launches basically show a white blob with no detail, due to digital cameras having such low dynamic range compared to film. And this is made worse by the night launches that Blue Origin is doing.
In Saturn V launches you could see detail in the bright flame structures along with background detail.
Maybe some of the upcoming digital camera chips will have higher dynamic range eventually. I know Nikon has a paper about stacked sensors that trade off high frame rate for high dynamic range: https://youtu.be/jcc1CvqCTeU?si=DuIu4BK48iZTlyB2
> The digital cameras used on these launches basically show a white blob with no detail due to digital cameras having such low dynamic range compared to film.
Film negatives have a dynamic range of between 12 and 15 stops, but a whole bunch of it can be lost when transferred to an optical print (perhaps less if digitally scanned).
The Arri ALEXA Mini LF has 14.5 stops of dynamic range, and the ALEXA 35 has 17 (Table 2):
* https://www.arri.com/resource/blob/295460/e10ff8a5b3abf26c33...
I believe it's possible to get higher than that; this work by Kodak, for example, shows 20 (!) stops on film [1]. I seem to remember reading somewhere that Kodak TMax 100, for example, can be pushed up to 18 stops, maybe higher. The limitation is usually not the film itself but the development processes used, I think?
It's also crucial to note what SNR they use as the cutoff when stating their dynamic range in stops, in addition to their tone curve.
I'm only a hobbyist though, perhaps someone else can enlighten me further.
Digital is mostly limited by bits, since a 14-bit image with a linear tone curve will have at most 14 stops of info, right? So we shouldn't expect to see values pushing higher until camera manufacturers leave 14-bit behind as a standard and go higher, as in the Arri cameras. They use a 16-bit sensor and squeeze the last stop out by using a more gradual tone curve in the shadows. This means the shadow stops technically contain less information than the highlight stops, so not all stops are equal, I believe (quite confusing).
[1]: "Assessing the Quality of Motion Picture Systems from Scene-to-Digital Data" in the February/March 2002 issue of the SMPTE Journal (Volume 111, No. 2, pp. 85-96).
I actually used to design image sensor chips. The dynamic range is due to the electron well size. Each pixel has a diode that typically stores between 10,000 and 100,000 electrons. When the shutter is open, each photon that arrives pushes an electron across the diode. When the shutter closes, the sensor counts how many electrons remain. This is how it calculates how much light each pixel received.
The well size itself is usually a function of the pixel size. A larger pixel means a larger diode that can store more electrons, and hence a larger range of light that can be measured - dynamic range.
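For a rough sense of the numbers, here is a minimal back-of-the-envelope sketch of the usual textbook estimate that per-pixel dynamic range is about log2(full-well capacity / read-noise floor). The well sizes and read-noise figures below are illustrative assumptions, not from any particular sensor:

```python
import math

# Hedged, back-of-the-envelope estimate: dynamic range in stops is roughly
# log2(full-well capacity / read-noise floor), both measured in electrons.
# The figures below are illustrative, not from any particular sensor.
def dynamic_range_stops(full_well_e: float, read_noise_e: float) -> float:
    return math.log2(full_well_e / read_noise_e)

for full_well, read_noise in [(10_000, 2.0), (50_000, 2.0), (100_000, 3.0)]:
    dr = dynamic_range_stops(full_well, read_noise)
    print(f"{full_well:>7} e- well, {read_noise} e- read noise: ~{dr:.1f} stops")
```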
What we're doing here to get higher SNR, generally, is growing the CMOS sensors larger and larger. The limitation is ultimately either depth of field or IC manufacturing issues. A hypothetical meter-wide sensor could be manufactured and combined with a catadioptric lens of extreme cost, but you'd expect most of a scene to be bokeh, as in macro or microscope lenses.
In reality there are limits imposed by manufacturing. At the extreme, we have wafer-scale sensors used in, e.g., night-time wildlife videography - https://www.imdb.com/title/tt11497922/ . Anything larger than that is typically a not-perfectly-contiguous array of smaller chips.
You can also cryocool the camera, at the expense of weight, versatility, and complexity. Most astrophotography is collected with cryocooled CCD or cryocooled CMOS sensors. This helps much more with long exposures than it does with video, but it does help.
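To put a rough number on the cooling benefit, here is a hedged rule-of-thumb sketch: dark current in silicon roughly doubles for every ~6 °C of temperature increase (the exact doubling interval varies by sensor), which is why cooling pays off most in long exposures where dark-current noise has time to accumulate. The reference rate below is an assumed, illustrative figure:

```python
# Rule-of-thumb sketch: dark current roughly doubles for every ~6 degC
# increase in temperature (the exact doubling interval varies by sensor).
def dark_current(rate_at_ref_e_per_s: float, temp_c: float,
                 ref_temp_c: float = 20.0, doubling_c: float = 6.0) -> float:
    return rate_at_ref_e_per_s * 2 ** ((temp_c - ref_temp_c) / doubling_c)

rate_20c = 0.5  # e-/pixel/s at 20 degC -- an assumed, illustrative figure
for t in (20, 0, -20):
    e_per_exposure = dark_current(rate_20c, t) * 60  # 60-second exposure
    print(f"{t:>4} degC: ~{e_per_exposure:.1f} e-/pixel of dark signal in 60 s")
```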
> Digital is mostly limited by bits, since a 14-bit image with a linear tone curve will have at most 14 stops of info, right?
Bit depth ≠ dynamic range.
Dynamic range is about the ratio between the highest and lowest values that can be measured ("stops" express that ratio in log base 2, dB in log base 10):
* https://en.wikipedia.org/wiki/Dynamic_range#Human_perception
* https://en.wikipedia.org/wiki/Dynamic_range#Photography
The bits are about the gradation within that range. You can have a 12-stop image recorded in a 10-bit, 12-bit, 14-bit, or 16-bit format.
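A minimal sketch of that distinction, assuming a simple log tone curve (illustrative only, not any camera's actual encoding): the same 12-stop scene fits in any of those bit depths; the bits only change how finely each stop is graded.

```python
import numpy as np

# Same 12-stop scene, different bit depths: a log tone curve maps the
# scene's stops onto the available codes, so the range survives at every
# bit depth -- only the number of codes (gradations) per stop changes.
scene_stops = 12
luminance = 2.0 ** np.linspace(0, scene_stops, 1000)   # linear-light scene values

for bits in (10, 12, 14, 16):
    max_code = 2 ** bits - 1
    codes = np.round(np.log2(luminance) / scene_stops * max_code)  # encode
    decoded = 2.0 ** (codes / max_code * scene_stops)              # decode
    recovered = np.log2(decoded.max() / decoded.min())
    print(f"{bits}-bit: range {recovered:.1f} stops, "
          f"~{(max_code + 1) / scene_stops:.0f} codes per stop")
```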
And at least when it comes to film, the response is not a linear curve, especially in the darkest and lightest parts. That's why there's the old saying "expose for the shadows, develop for the highlights":
* https://www.youtube.com/watch?v=rlnt5yFArWo
* https://www.kimhildebrand.com/how-to-use-the-zone-system/
* https://en.wikipedia.org/wiki/Zone_System
> The limitation is usually not the film itself but the development processes used
I respectfully doubt that; the development process is a combination of techniques that lets you do many things with your raw data, and the line between that and special effects is quite blurry (joke intended).
One way to make an HDR-like image with film and cheap, non-advanced material is to do several exposures of the same negative onto the same paper, with different exposure parameters. That way you combine different ranges of the image (e.g. stops 1-4 + stops 4-10 + stops 10-18) to produce your final image. It is a great piece of craftsmanship.
The only limit is the chemistry of the films used (giving grains at almost nano scale), multiplied by the size of the film.
Side note: development is basically taking a picture of a picture, (usually) done with different chemicals and a different photographic setup.
20 stops of dynamic range is about the human eye's range. Achieving that for digital capture and display would be mind-blowing.
> a whole bunch can be lost when transferred to optical print
I’m not sure if by "optical print" [0] you mean a film developing process (like C41), but the info is not lost and stays on the film. The developer’s job is to fine-tune the parameters to print the information you’re seeking, and that includes adjusting the white and black point thresholds (range). You can also make several prints if you want to extract more information, and print it so large you see the grain shapes! If there is something lost, it’s lost when the picture is taken; after that it’s up to you to exploit it the way you need.
It’s very similar to a digital device capturing RAWs and the developer finishing the picture in software like Camera Raw, or what some modern phones do automatically for you.
[0] Not a native English speaker; perhaps this is a synonym of development?
> I’m not sure if by "optical print"[0] you mean a film developing process (like C41), but the info is not lost and stays on the film.
You have a negative, which you develop.
For photos you then have to transfer that to paper. For cinema you want to distribute it, so you have to take the originally captured image(s) and make copies to distribute.
In both cases, because it's an analog process, things will degrade with each copy.
Of course, if you scan the negative, then further copies afterwards are easy to make.
Those engineering cameras were not your regular run-of-the-mill cameras either.
NASA published a 45-minute documentary of the 10-15 engineering cameras of an STS launch, with commentary on the engineering aspects of the launch procedure.
Very beautiful, relaxing, has an almost meditative quality. Highly recommend it.
https://www.youtube.com/watch?v=vFwqZ4qAUkE
Yeah and Shuttle cost a fortune per launch.
Views are distinctly secondary to an affordable launch program.
And the cost of the camera program was paid back a hundred times as that footage was used to diagnose, correct and improve countless systems. Accident investigations would have taken ten times as long without that footage.
I'd assume that BO has plenty of high-res/high-contrast-range imagery - that's just too useful for engineering analysis, post-launch.
What they release to the public is a separate issue.
It could be an exposure issue. Film has a response curve with a big “shoulder” in the highest values. It makes it really hard to blow out highlights.
Digital sensors have a linear response to light, so if the highlights are a bit over a threshold, they are gone.
If you’re willing to tolerate more noise and shoot RAW, you could underexpose, perhaps by as much as 4 stops, and apply a strong curve in post. It would pretty much guarantee no blown out highlights.
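Here's a minimal sketch of that idea (a crude model, not any real camera's pipeline): clip a linear capture at full scale, then compare a normal exposure with a 4-stop underexposure pushed back up in post.

```python
import numpy as np

# Crude model of a linear sensor: anything at or above full scale is clipped.
# Underexposing keeps bright values below clip; pushing in post restores
# brightness at the cost of amplified shadow noise.
rng = np.random.default_rng(0)
scene = 2.0 ** rng.uniform(-8, 4, 100_000)   # scene spanning ~12 stops; 1.0 = "normal"

def capture(scene, exposure_stops, read_noise=0.001):
    signal = scene * 2.0 ** exposure_stops + rng.normal(0, read_noise, scene.shape)
    return np.clip(signal, 0.0, 1.0), signal >= 1.0   # image, clipped-pixel mask

normal, clip_normal = capture(scene, 0)
under, clip_under = capture(scene, -4)
pushed = under * 2.0 ** 4                    # "strong curve in post" (here a plain gain)

print(f"normal exposure : {clip_normal.mean():.1%} of pixels clipped")
print(f"-4 stops + push : {clip_under.mean():.1%} of pixels clipped")
```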
Most people find luminance noise aesthetically pleasing up to a point and digital is already much cleaner than film ever was, so it’s a worthy trade off, if you ask me. But “Expose To The Left/Right” is a heated topic among photographers.
Spitballing, but an HDR digital camera could be designed with a beamsplitter similar to that of the 3CCD designs ( https://en.wikipedia.org/wiki/Three-CCD_camera ): one path projects onto an assembly with only a sensor behind it, another onto an assembly with a 4-stop neutral density filter and a sensor, and another onto an assembly with an 8-stop neutral density filter and a sensor.
This way it wouldn't suffer from any parallax issues, and the sensor images should line up, allowing the image to be reconstructed from the multiple sources.
That said... HDR images can come out "bland" and washed out. It would probably take a bit more post-processing work to get an image that has both high dynamic range and the dynamism that those old Saturn V launch shots showed.
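For what it's worth, a hedged sketch of how the merge from such a hypothetical three-sensor rig might look, assuming the feeds are already aligned (the ND values, the saturation threshold, and the Reinhard-style tone map are all assumptions for illustration):

```python
import numpy as np

# Merge three aligned linear frames (no ND, 4-stop ND, 8-stop ND) into one
# HDR estimate: undo each ND attenuation, then average while ignoring
# pixels that are saturated on the brighter feeds.
def merge_hdr(frames, nd_stops, sat=0.98):
    num = np.zeros_like(frames[0])
    den = np.zeros_like(frames[0])
    for frame, stops in zip(frames, nd_stops):
        radiance = frame * 2.0 ** stops            # undo ND attenuation
        weight = (frame < sat).astype(float)       # drop saturated pixels
        num += weight * radiance
        den += weight
    return num / np.maximum(den, 1e-6)

# A simple global (Reinhard-style) tone map to fit the merged range onto a
# display -- this naive step is exactly where the "bland" look can creep in.
def tone_map(hdr):
    return hdr / (1.0 + hdr)

frames = [np.random.rand(4, 4) for _ in range(3)]  # stand-ins for the three sensors
print(tone_map(merge_hdr(frames, nd_stops=[0, 4, 8])))
```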
Back in the day, Fuji had a sensor in which half the pixels had a 2-stop neutral density filter, IIRC. It was a 12MP sensor, with an effective resolution a bit higher than 6MP. It was amazing the amount of highlight you could recover in Adobe Camera Raw and the TIFs/JPGs were beautiful, as it’s usually the case with Fuji.
Alas, it didn’t work out in the market, people weren’t willing to trade half their resolution for more DR, turns out. Also, regular sensors got much wider latitude.
I think what you need is a plain old half-silvered beam splitter, not the 3CCD prism. The dichroic prism uses combinations of coatings to separate RGB into different optical paths. You don't need that.
Even on film, if you expose for the rocket you don't get any flame detail.
https://en.wikipedia.org/wiki/Saturn_V#/media/File:Apollo_11...
What am I missing? I can sit in my den and watch SpaceX and now Blue Origin launches, in real time see the telemetry data, see stage separation, reentry burns, etc, etc. As for Saturn V, to quote the Byrds "I was so much older then..." but I don't recall any of that. Doesn't film require taking the exposed product to a lab for after the fact processing? While the Blue Origin images this morning were not nearly as good as the SpaceX, to me the images are absolutely incredible. I am a serious amateur photographer and I do still shoot film on occasion but I see little dynamic range differences now in 2025.
> talking about stacked sensors
Why not just use stacked cameras with a range of filters? Modern cameras cost and weigh nothing (and that rocket puts 45 tons into LEO).
This is footage from hours after the event. It's no doubt been stomped on for streaming and other reasons. I wouldn't give up on something better being out there for a little while yet.
You should check out the Artemis I engineering footage.
I don't think it's a technical issue; they probably just don't care. We lost a bit of the magic and ideals we had back then.
I think the last thing you could probably say about the teams involved in this is that they don’t care.
We must not have seen the same footage; they even had raindrops on the lens. That was an amateur move in the '60s; in 2025 it should be a crime, lol.
Like, what the fuck is this? https://imgur.com/6sVSXGd Did they strap an iPhone 12 on the launchpad and leave it on auto settings? The floodlight is aimed straight at the lens too...
This is an amateur shot from 5 years ago: https://www.reddit.com/r/space/comments/coppj8/a_dramatic_cl...
I'm telling you, they didn't care one bit about that video, it looks like absolute ass