Comment by irae

5 hours ago

I believe you are speculating on digital mastering and not codec conversion.

From the creator's PoV, their intent and quality are defined in post-production and mastering: color grading and other steps I am not an expert on. But I know a bit more about music mastering, and you might be thinking of a workflow similar to Apple's "Mastered for iTunes" program, where creators opt in to an extra step that improves encoding quality and lets them hear in their studio the final result after Apple encodes and DRMs the content on their servers.

In video I would assume it is much more complicated, since the video is encoded at many quality levels to allow for slower connections and buffering without interruptions. So I assume the best strategy is the one you mentioned yourself, where AV1 detects, per scene or keyframe interval, the grain level/type/characteristics and encodes so as to be accurate to the source material in that scene.

In other words: the artist's/director's preference for grain is already per scene, and is expressed in the high-bitrate/low-compression format they provide to Netflix and its competitors. I find it unlikely that any encoder flags would specifically benefit the encoding workflow in the way you suggested they might.

"I believe you are speculating on digital mastering and not codec conversion."

That's good, since that's what I said.

"The artist/director preference for grain is already per scene and expressed in the high bitrate/low-compression format they provide to Netflix and competitors. I find it unlikely that any encoder flags would specifically benefit the encoding workflow in the way you suggested it might."

I'm not sure you absorbed the process described in the article. Netflix is analyzing the "preference for grain" as expressed by the grain detected in the footage, and then they're preparing a "grain track," as a stream of metadata that controls a grain "generator" upon delivery to the viewer. So I don't know why you think this pipeline wouldn't benefit from having the creator provide perfectly accurate grain metadata to the delivery network along with already-clean footage up front; this would eliminate the steps of analyzing the footage and (potentially lossily) removing fake grain... only to re-add an approximation of it later.
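To make the pipeline concrete, here is a toy sketch of the seed-plus-parameters idea behind grain synthesis (all names and the noise model are my own invention for illustration; real AV1 film grain synthesis uses autoregressive filters and per-channel scaling functions, not plain Gaussian noise). The point it demonstrates: estimating grain strength from footage is approximate, but exact metadata regenerates the grain deterministically.

```python
import random
import statistics

def synthesize_grain(width, height, seed, strength):
    """Deterministically regenerate grain from metadata (seed + strength).
    Stand-in for the decoder-side grain 'generator'."""
    rng = random.Random(seed)
    return [[rng.gauss(0.0, strength) for _ in range(width)]
            for _ in range(height)]

def estimate_strength(noisy, clean):
    """The 'grain analysis' step: measure the residual between the grainy
    source and the denoised frame -- this is what the distributor currently
    has to infer, and what a mastering tool could supply exactly."""
    residuals = [n - c
                 for nr, cr in zip(noisy, clean)
                 for n, c in zip(nr, cr)]
    return statistics.pstdev(residuals)

# Sender side: a clean master plus grain metadata.
W, H, SEED, STRENGTH = 64, 64, 7, 2.0
clean = [[0.0] * W for _ in range(H)]
grain = synthesize_grain(W, H, SEED, STRENGTH)
noisy = [[c + g for c, g in zip(cr, gr)] for cr, gr in zip(clean, grain)]

# Analysis only recovers an approximation of the strength...
est = estimate_strength(noisy, clean)

# ...but with the exact metadata, the receiver reproduces the grain bit-for-bit.
regenerated = synthesize_grain(W, H, SEED, STRENGTH)
assert regenerated == grain
```

If the creator hands over `SEED` and `STRENGTH` directly (the "perfectly accurate grain metadata" above), the analyze-and-denoise steps drop out entirely.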

All I'm proposing is a mastering tool that lets the DIRECTOR (not an automated process) do the "grain analysis" deliberately and provide the result to the distributor.
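As a sketch of what that tool might export, a per-scene "grain track" sidecar could look something like the structure below. Every field name here is invented for illustration; the real parameters would presumably map onto whatever the codec's film grain syntax supports.

```python
# Hypothetical per-scene grain metadata a director could sign off on
# and deliver to the distributor alongside the clean master.
grain_track = {
    "scenes": [
        {"start_frame": 0, "end_frame": 1439,
         "grain_seed": 901,        # for deterministic regeneration
         "grain_strength": 1.2,    # overall intensity
         "chroma_scaling": 0.5},   # weaker grain in the color channels
        {"start_frame": 1440, "end_frame": 2879,
         "grain_seed": 902,
         "grain_strength": 3.0,    # deliberately heavy grain here
         "chroma_scaling": 0.8},
    ],
}

# Basic sanity check a mastering tool might run before export:
# scenes are contiguous and each carries a regeneration seed.
for prev, cur in zip(grain_track["scenes"], grain_track["scenes"][1:]):
    assert cur["start_frame"] == prev["end_frame"] + 1
assert all("grain_seed" in s for s in grain_track["scenes"])
```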