
Comment by crazygringo

15 hours ago

You just add it to your original footage, and accept whatever quality degradation the grain inherently introduces.

Any movie or TV show is ultimately going to be streamed in lots of different formats. And when grain is added, it's often on a per-shot basis, not uniformly. E.g. flashback scenes will have more grain. Or darker scenes will have more grain added to emulate film.
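
To make "baking it in" concrete, here is a minimal NumPy sketch of per-shot grain addition; the shot names and strengths are invented illustration values, not anything from a real grading pipeline:

```python
import numpy as np

def add_grain(frame: np.ndarray, strength: float, rng: np.random.Generator) -> np.ndarray:
    """Bake zero-mean Gaussian noise into a float frame in [0, 1]; a crude stand-in for film grain."""
    return np.clip(frame + rng.normal(0.0, strength, size=frame.shape), 0.0, 1.0)

# Invented per-shot strengths: heavier grain for the flashback, lighter elsewhere.
shot_strength = {"opening": 0.02, "night_exterior": 0.04, "flashback": 0.06}

rng = np.random.default_rng(seed=1)
frame = np.full((1080, 1920, 3), 0.5, dtype=np.float32)        # placeholder gray frame
delivered = add_grain(frame, shot_strength["flashback"], rng)  # the grain is now part of the pixels
```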

Trying to tie it to the particular codec would be a crazy headache. For a solo project it could be doable, but I can't ever imagine a streamer building a source material pipeline that would handle that.

Mmmm, no, because if the delivery conduit uses AV1, you can optimize for it and realize better quality by avoiding the whole degrading round of grain analysis and stripping.

"I can't ever imagine a streamer building a source material pipeline that would handle that."

That's exactly what the article describes, though. It's already built, and Netflix is championing this delivery mechanism. Netflix is also famous for dictating technical requirements for source material. Why would they not want the director to be able to provide a delivery-ready master that skips the whole grain-analysis/grain-removal step and provides the best possible image quality?

Presumably the grain extraction/re-adding mechanism described here handles variable grain throughout the program. I don't know why you'd assume that it doesn't. If it didn't, you'd wind up with a single grain level for the entire movie, an entirely unacceptable result for the very reason you mention.
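
For what it's worth, the decode-side idea is roughly this: grain parameters travel with each frame, so the level and character can change from shot to shot, and the decoder regenerates grain on top of the denoised picture. A toy sketch, with simplified stand-in fields rather than AV1's actual film_grain_params syntax:

```python
import numpy as np
from dataclasses import dataclass

@dataclass
class GrainParams:        # simplified stand-in for AV1's per-frame film grain parameters
    seed: int             # fixes the pseudo-random pattern so synthesis is deterministic
    strength: float       # stands in for the real model's intensity-dependent scaling points

def synthesize(denoised_frame: np.ndarray, p: GrainParams) -> np.ndarray:
    """Regenerate grain at decode time from parameters carried with the frame."""
    rng = np.random.default_rng(p.seed)
    grain = rng.normal(0.0, p.strength, size=denoised_frame.shape)
    return np.clip(denoised_frame + grain, 0.0, 1.0)

# Parameters ride along per frame, so a flashback can simply carry heavier grain
# than the scene before it; nothing forces a single grain level for the whole film.
params_by_frame = [GrainParams(seed=100, strength=0.02),   # ordinary scene
                   GrainParams(seed=101, strength=0.06)]   # flashback
```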

This scheme loses a major opportunity for new productions unless the director can provide a clean master and an accompanying "grain track." Call it a GDL: grain decision list.

This would also be future-proof; if a new codec is devised that also supports this grain layer, the parameters could be translated from the previous master into the new codec. I wish Netflix could go back and remove the hideous soft-focus filtration from The West Wing, but nope; that's baked into the footage forever.
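
As a sketch of what such a hypothetical GDL could look like (the field names, timecodes, and the to_codec_params mapping are all invented for illustration, not an existing format):

```python
from dataclasses import dataclass

@dataclass
class GrainDecision:
    tc_in: str        # timecode the decision starts at
    tc_out: str       # timecode it ends at
    intensity: float  # overall grain strength chosen by the director
    size: float       # grain "size" / spatial frequency
    chroma: float     # how much grain shows up in the color channels

# A director-supplied GDL that would accompany a clean (grain-free) master.
gdl = [
    GrainDecision("00:00:00:00", "00:04:12:10", intensity=0.3, size=1.0, chroma=0.1),
    GrainDecision("00:04:12:10", "00:06:40:00", intensity=0.8, size=1.6, chroma=0.2),  # flashback
]

def to_codec_params(d: GrainDecision) -> dict:
    """Translate a GDL entry into whatever grain model the delivery codec uses, AV1 today or a successor later."""
    return {"strength": d.intensity, "size": d.size, "chroma": d.chroma}
```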

  • I believe you are speculating about digital mastering here, not codec conversion.

    From the creator's PoV, their intent and quality are defined in post-production and mastering, color grading, and other steps I'm not an expert on. But I know a bit more about music mastering, and you might be thinking of a workflow similar to Apple's "Mastered for iTunes" program, where creators opt in to an extra step to improve encoding quality and can hear in their studio the final result after Apple encodes and DRMs the content on their servers.

    In video I would assume that is much more complicated, since the video is encoded at many quality levels to allow for slower connections and buffering without interruptions. So I assume the best strategy is the one you mentioned yourself, where the AV1 encoder detects the grain level/type/characteristics on a per-scene or keyframe-interval basis and encodes so as to stay accurate to the source material in that scene (roughly the kind of analysis sketched below).

    In other words: the artist's/director's preference for grain is already per-scene and already expressed in the high-bitrate, low-compression format they provide to Netflix and its competitors. I find it unlikely that any encoder flags would benefit the encoding workflow in the way you suggested.
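
    A rough sketch of the kind of per-scene analysis I mean, using the spread of a high-frequency residual as a crude grain estimate; the real encoder-side analysis is far more sophisticated, and the scene/frame structure here is just an assumed input shape:

    ```python
    import numpy as np
    from scipy.ndimage import gaussian_filter

    def frame_grain_level(frame: np.ndarray) -> float:
        """Crude noise estimate: spread of the high-frequency residual left after smoothing."""
        smooth = gaussian_filter(frame, sigma=1.5)
        return float(np.std(frame - smooth))

    def per_scene_grain(scenes: dict[str, list[np.ndarray]]) -> dict[str, float]:
        """Average the per-frame estimate over each scene, mimicking per-scene/keyframe-interval analysis."""
        return {name: float(np.mean([frame_grain_level(f) for f in frames]))
                for name, frames in scenes.items()}
    ```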

  • You're misunderstanding.

    > if the delivery conduit uses AV1, you can optimize for it

    You could, in theory, as I confirmed.

    > It's already built, and Netflix is championing this delivery mechanism.

    No, it's not. AV1 encoding is already built; a pipeline where source files come without noise but carry noise metadata is not.

    > and provides the best possible image quality?

    The difference in quality is not particularly meaningful. Advanced noise-reduction algorithms already average out pixel values across many frames to recover a noise-free version that is quite accurate (including accounting for motion), and when the motion/change is so overwhelming that this doesn't work, it's too fast for the eye to perceive that level of detail anyway.
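
    A toy version of that averaging, minus the motion compensation (so assume a static, already-aligned shot); with 8 frames the grain's standard deviation drops by roughly sqrt(8):

    ```python
    import numpy as np

    def temporal_denoise(frames: np.ndarray) -> np.ndarray:
        """Average a stack of aligned frames (T, H, W); independent grain averages out, the scene remains.
        Real denoisers motion-compensate each frame onto a reference before averaging."""
        return frames.mean(axis=0)

    rng = np.random.default_rng(0)
    clean = np.full((8, 64, 64), 0.5)                                   # static gray shot, 8 frames
    noisy = np.clip(clean + rng.normal(0.0, 0.05, clean.shape), 0, 1)   # independent grain per frame
    recovered = temporal_denoise(noisy)                                 # close to the clean frame again
    ```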

    > This scheme loses a major opportunity for new productions unless the director can provide a clean master and an accompanying "grain track."

    Right, that's what you're proposing. But it doesn't exist. And it's probably never going to exist, for good reason.

    Production houses generally provide digital masters in IMF format (which is basically JPEG2000), or sometimes ProRes. At a technical level, a grain track could be invented. But it basically flies in the face of the idea that the pixel data itself is the final "master". In the same way, color grading and vector graphics aren't provided as metadata either, even though they could be in theory.

    Once you get away from the idea that the source pixels are the ultimate source of truth and put additional postprocessing into metadata, it opens up a whole can of worms where different streamers interpret the metadata differently. Some might choose never to add noise, so the shows would look different and no longer reflect the creator's intent.

    So it's almost less of a technical question and more of a philosophical question about what represents the finished product. And the industry has long decided that the finished product is the pixels themselves, not layers and effects that still need to be composited.

    > I wish Netflix could go back and remove the hideous soft-focus filtration from The West Wing, but nope; that's baked into the footage forever.

    In case you're not aware, it's not a postproduction filter -- the soft focus was done with diffusion filters on the cameras themselves, as well as the choice of film stock. And that was the creative intent at the time. Trying to "remove" it would be like trying to pretend it wasn't the late-'90s network drama that it was.