Comment by janeway

3 months ago

This topic is fascinating to me. The Toy Story film workflow is a perfect illustration of intentional compensation: artists pushed greens in the digital master because 35 mm film would darken and desaturate them. The aim was never neon greens on screen, it was colour calibration for a later step. Only later, when digital masters were reused without the film stage, did those compensating choices start to look like creative ones.

I run into this same failure mode often. We introduce purposeful scaffolding in the workflow that isn’t meant to stand alone, but exists solely to ensure the final output behaves as intended. Months later, someone is pitching how we should “lean into the bold saturated greens,” not realising the topic only exists because we specifically wanted neutral greens in the final output. The scaffold becomes the building.

In our work this kind of nuance isn’t optional, it is the project. If we lose track of which decisions are compensations and which are targets, outcomes drift badly and quietly, and everything built after is optimised for the wrong goal.

I’d genuinely value advice on preventing this. Is there a good name or framework for this pattern? Something concise that distinguishes a process artefact from product intent, and helps teams course-correct early without sounding like a semantics debate?

I worked at DreamWorks Animation on the pipeline, lighting, and animation tools for almost ten years. All of this information is captured in our pipeline process tools, although I am sure there are edits and modifications that escape documentation. We were able to pull complete shows out of deep storage, render scenes using the toolchain that produced them, and produce the same output. If the renders weren't reproducible, madness would ensue.

Even with complete attention to detail, the final renders would be color graded using Flame, or Inferno, or some other tool and all of those edits would also be stored and reproducible in the pipeline.

Pixar must have a very similar system and maybe a Pixar engineer can comment. My somewhat educated assumption is that these DVD releases were created outside of the Pixar toolchain by grabbing some version of a render that was never intended as a direct to digital release. This may have happened as a result of ignorance, indifference, a lack of a proper budget or some other extenuating circumstance. It isn't likely John Lasseter or some other Pixar creative really wanted the final output to look like this.

  • Amazing. Your final point makes the most sense - not the original team itself having any problems.

There’s an analog analogue: mixing and mastering audio recordings for the devices of the era.

I first heard about this when reading an article or book about Jimi Hendrix making choices based on what the output sounded like on AM radio. Contrast that with the contemporary recordings of The Beatles, in which George Martin was oriented toward what sounded best in the studio and home hi-fi (which was pretty amazing if you could afford decent German and Japanese components).

Even today, after digital transfers and remasters and high-end speakers and headphones, Hendrix’s late-60s studio recordings don’t hold a candle to anything the Beatles did from Revolver on.

  • > There’s an analog analogue: mixing and mastering audio recordings for the devices of the era.

    In the modern day, this has one extremely noticeable effect: audio releases used to assume that you were going to play your music on a big, expensive stereo system, and they tried to create the illusion of the different members of the band standing in different places.

    But today you listen to music on headphones, and it's very weird to have, for example, the bassline playing in one ear while the rest of the music plays in your other ear.

    • That's with a naive stereo split. Many mixes would still put the bass mostly on one side, but with binaural processing so it's still heard in the other ear, just quieter and with a tiny delay.

    • No, they just didn't put much time into stereo because it was new and most listeners didn't have that format. So they'd hard pan things for the novelty effect. This paradigm was over by the early 70s and they gave stereo mixes a more intentional treatment.

  • A voice on the radio sounded better with vibrato, so that’s what performers did, even before recordings were made. The same happened when violins played.

    These versions were intended for radio only and were considered cheap when performed in person.

    Later these performances were recorded, and, being the only versions recorded, later generations assumed this was how the masters of the time did things, when really they would have been booed off stage (so to speak).

    This bit of family history was passed down through multiple generations of violin players.

  • And now we have the Loudness War, where songs are so heavily compressed that there is no dynamic range left. Because of this, I have to reduce the volume so it isn't painful to listen to, and it turns what should have been a live recording with interesting sound into background noise. Example:

    https://www.youtube.com/watch?v=3Gmex_4hreQ

    If you want a recent-ish album to listen to that has good sound, try Daft Punk's Random Access Memories (which won the Best Engineered Album Grammy award in 2014). Or anything engineered by Alan Parsons (he's in this list many times):

    https://en.wikipedia.org/wiki/Grammy_Award_for_Best_Engineer...

    • > now

      Is this still a problem? Your example video is from nearly twenty years ago, and RAM is over a decade old. I think the advent of streaming (and perhaps lessons learned) has made this less of a problem. I can't remember hearing any recent examples (though I also don't listen to a lot of music that might fall victim to the practice); the Wikipedia article lacks any examples from the last decade: https://en.wikipedia.org/wiki/Loudness_war

      Thankfully there have been some remasters that have undone the damage. Three Cheers for Sweet Revenge and Absolution come to mind.

    • I was obsessed with Tales of Mystery & Imagination, I Robot, and Pyramid in the 70s. I also loved Rush, Yes, ELP, Genesis, and ELO, but while Alan Parsons' albums weren't better in an absolute musical sense, his production values were so obviously in a class of their own that I still put Parsons in the same bucket as Trevor Horn and Quincy Jones: people who created masterpieces of record album engineering and production.

  • > decent German and Japanese components

    Whoa there! Audio components were about the only thing the British still excelled at by that time.

    • I wasn't aware of home hi-fi but British gear for musicians was widespread when I was growing up (Marshall, Vox, etc).

      I was specifically thinking of the components my father got through the Army PX in the 60s and the hi-fi gear I would see at some friends' houses in the decades that followed ... sometimes tech that never really took hold, such as reel-to-reel audio. Most of it was Japanese, and sometimes German.

      I still have a pair of his 1967 Sansui speakers in the basement (one with a blown woofer, unfortunately) and a working Yamaha natural sound receiver sitting next to my desk from about a decade later.

  • I've noticed this with lots of jazz from the 50s and 60s. Sounds amazing in mono but "lacking" in stereo.

    • That’s more due to mono being the dominant format at the time so the majority of time and money went to working on the mono mix. The stereo one was often an afterthought until stereo became more widespread and demand for good stereo mixes increased.

  • The same goes for movie sound mixing, where directors like Nolan are infamous for dialogue that sounds muffled on home setups, because the sound is mixed for large, IMAX-scale theater systems.

I've always been a fan of repos that I come across with ARCHITECTURE.md files in them, but that's a pretty loose framework and some just describe the what and not the why.

Otherwise, I wish I worked at a place like Oxide that does RFDs. https://rfd.shared.oxide.computer Just a single place with artifacts of a formal process for writing shit down.

In your example, writing down "The greens are oversaturated by X% because we will lose a lot of it in the transfer process to film" goes a long way toward at least making people aware of the decision and why it was made. Then the "hey, actually the boosted greens look kinda nice" can prompt a "yeah, but we only did that because of the medium we were shipping on; it's wrong."

  • You're assuming people RTFM, which does not happen at all in my case. Documentation exists for you to link to when someone who has already lost days on something finally reaches out.

    • Culture changes under the impact of technology, but culture also changes when people deliberately teach practices.

(Cough) Abstraction and separation of concerns.

In Toy Story's case, the digital master should have had "correct" colors, and the tweaking done in the transfer to film step. It's the responsibility of the transfer process to make sure that the colors are right.

Now, counter arguments could be that the animators needed to work with awareness of how film changes things; or that animators (in the hand-painted era) always had to adjust colors slightly.

---

I think the real issue is that Disney should know enough to tweak the colors of the digital releases to match what the artists intended.

  • Production methodologies for animated films have progressed massively since 1995, and Pixar may not have found the ideal process for the color grading of the digital-to-film step. Heck, they may not have color graded at all! This has been suggested. I agree that someone should know better than to just take a render and push it out as a digital release without paying attention to the result.

  • > In Toy Story's case, the digital master should have had "correct" colors

    Could it be the case that generating each digital master required thousands of render hours?

    • But the compensation for film should be a cheap 2-D color-filter pass, not an expensive 3-D rendering pass.

    • That's an invalid argument: digitally tweaking color when printing to film has nothing to do with how long it takes to render 3D.

      They had a custom built film printer and could make adjustments there.

I know you're looking for something more universal, but in modern video workflows you'd apply a chain of color transformations on top of the final composited image to compensate for the display you're working with.

So I guess: try separating your compensations from the original work, and create a workflow that applies them automatically.
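A minimal sketch of that separation, assuming NumPy images and a simple per-channel gain as the stand-in compensation (all names here are hypothetical, not any real pipeline's API): the creative master stays neutral, and each delivery medium gets its own transform chain applied at output time.

```python
import numpy as np

def film_green_compensation(img):
    """Hypothetical output transform: boost greens to survive a
    transfer step known to darken/desaturate them."""
    out = img.copy()                                  # never mutate the master
    out[..., 1] = np.clip(out[..., 1] * 1.25, 0.0, 1.0)
    return out

def render_for(master, output_transforms):
    """Apply a per-medium compensation chain on top of the
    untouched creative master at delivery time."""
    img = master
    for transform in output_transforms:
        img = transform(img)
    return img

# The master encodes intent (neutral greens); compensation lives
# only in the chain for the medium that needs it.
master = np.full((2, 2, 3), 0.5)                      # neutral grey master
film_print = render_for(master, [film_green_compensation])
digital_release = render_for(master, [])              # no film step, no boost
```

The point of the structure is that dropping the film stage means dropping its transform from the chain, instead of rediscovering (or mistaking for intent) a compensation baked into the master.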

That’s a great observation. I’m hitting the same thing… yesterday’s hacks are today’s gospel.

My solution is decision documents. I write down the business problem, background on how we got here, my recommended solution, alternative solutions with discussion of their relative strengths and weaknesses, and finally an executive summary that states the whole affirmative recommendation in half a page.

Then I send that doc to the business owners to review and critique. I meet with them and chase down ground truth. Yes it works like this NOW but what SHOULD it be?

We iterate until everyone is excited about the revision, then we implement.

  • There are two observations I've seen in practice with decision documents: the first is that people want to consume the bare minimum before getting started, so such docs have to be very carefully written to surface the most important decision(s) early, or otherwise call them out for quick access. This often gets lost as word count grows and becomes a metric.

    The second is that excitement typically falls with each iteration, even while everyone agrees that each is better than the previous. Excitement follows more strongly from newness than rightness.

  • Eventually you'll run into a decision that was made for one set of reasons but succeeded for completely different reasons. A decision document can't help there; it can only tell you why the decision was made.

    That is the nature of evolutionary processes and it's the reason people (and animals; you can find plenty of work on e.g. "superstition in chickens") are reluctant to change working systems.

Theory: Everything is built on barely functioning ruins with each successive generation or layer mostly unaware of the proper ways to use anything produced previously. Ten steps forward and nine steps back. All progress has always been like this.

  • I’ve come to similar conclusions, and further realized that if you feel there’s a moment to catch your breath and finally have everything tidy and organized, that’s possibly an early sign of stagnation or decline in an area. Growth/progress is almost always urgent and overwhelming in the moment.

Do you have some concrete or specific examples of intentional compensation or purposeful scaffolding in mind (outside the topic of the article)?

  • Not scaffolding in the same way, but, two examples of "fetishizing accidental properties of physical artworks that the original artists might have considered undesirable degradations" are

    - the fashion for unpainted marble statues and architecture

    - the aesthetic of running film slightly too fast in the projector (or slightly too slow in the camera) for an old-timey effect

    • Isn’t the frame rate of film something like that?

      The industry decided on 24 FPS as something of an average of the multiple existing company standards and it was fast enough to provide smooth motion, avoid flicker, and not use too much film ($$$).

      Over time it became “the film look”. One hundred-ish years later, we still record TV shows and movies in it when we want them to look “good” as opposed to “fake” like a soap opera.

      And it’s all happenstance. Nothing stopped the movie industry from moving to something higher at any point except inertia. With TV being 60i, it would have made plenty of sense to move film to 30p so it could be shown on TV better once that became a thing.

      But by then it was enshrined.

    • Another example: pixel art in games.

      Now, don't get me wrong, I'm a fan of pixel art and retro games.

      But this reminds me of when people complained that the latest Monkey Island didn't use pixel art, and Ron Gilbert had to explain that the original "The Secret of Monkey Island" wasn't "a pixel art game" either; it was a "state of the art game (for that time)", and it was never his intention to make retro games.

      Many classic games had pixel art by accident; it was the most feasible technology at the time.

    • Great examples. My mind jumps straight to audio:

      - the pops and hiss of analog vinyl records, deliberately added by digital hip-hop artists

      - electric guitar distortion pedals designed to mimic the sound of overheated tube amps or speaker cones torn from being blown out

  • I work in VFX, and we had a lecture from one of the art designers who worked with some Formula 1 teams on the color design for cars. It was really interesting how much work goes into making the car look "iconic" while also highlighting sponsors, etc.

    But to your point, back in the PAL/NTSC analog days, the physical color of the cars was set so that, when viewed on an analog broadcast, the color would be correct (very similar to film scanning).

    He worked for a different team but brought in a small piece of Ferrari bodywork, and it was more of a day-glo red-orange than the delicious red we all think of as Ferrari.

In some projects I work on I've added a WHY.md at the root that explains what's scaffolding and what's load bearing, essentially. I can't say it's been effective at preventing the problem you outlined, but at least it's cathartic.

Isn't the entire point of "reinventing the wheel" to address this exact problem?

This is one of the tradeoffs of maintaining backwards compatibility and stewardship -- you are required to keep track of each "cause" of that backwards compatibility. And since the number of "causes" can quickly become innumerable, that's usually what prompts people to reinvent the wheel.

And when I say reinvent the wheel, I am NOT describing what is effectively a software port. I am talking about going back to square one and building the framework from the ground up, considering ONLY the needs of the task at hand. It's the most effective way to prune these needless requirements.

It seems pretty common in software: engineers not following the spec. Another thing that happens is the pivot: you realize the scaffolding is what everyone wants and sell that instead. The scaffold becomes the building, and also the product.

"Cargo cult"? As in, "Looks like the genius artists at Pixar made everything extra green, so let's continue doing this, since it's surely genius."