Comment by jrockway

5 years ago

If you tried to turn the sensor data back into light, there would not be enough information to do so accurately. Everything is built around human perception of color. When light hits your eye, it produces a "tristimulus value" for your brain. (The tristimulus is produced by the S-, M-, and L-sensitive cones, named for the short, medium, and long wavelengths they respond to; these roughly correspond to blue, green, and red, which is why those are the colors we use as primaries. But you could use other colors, and you could use more than 3 if you wanted to. There is no law of the universe that splits colors into red/green/blue parts... that's just a quirk of human anatomy.)
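Here's a toy sketch of how much information gets thrown away -- the Gaussian cone curves are made up for illustration (the real S/M/L curves, the CIE "cone fundamentals", have different shapes), but the principle is the same: each cone type integrates the whole spectrum against its own sensitivity curve and reports a single number.

```python
import numpy as np

# Hypothetical cone sensitivities modeled as Gaussians (not the real
# cone fundamentals). Each cone integrates the spectrum against its own
# curve, so a full spectrum collapses to just three numbers.
wavelengths = np.arange(380, 781)  # visible range, in nm

def cone(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

S, M, L = cone(440), cone(540), cone(570)

def tristimulus(spectrum):
    """Collapse a 401-sample spectrum into the 3 numbers the eye keeps."""
    return np.array([np.trapz(c * spectrum, wavelengths) for c in (S, M, L)])

green_laser = cone(532, width_nm=2.0)  # a narrowband light source
print(tristimulus(green_laser))        # the brain only ever sees these 3 values
```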

The goal of a digital camera is to be sensitive to colors in the same way that your eyes are. If that tristimulus can be recorded and played back, your brain won't know the difference. The spectrum your monitor emits when you view a photograph could be totally unrelated to the spectrum of the original scene, but your brain is only looking for a tristimulus: as long as the same tristimulus is produced, you can't tell the photograph from the real thing.
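Extending the toy model above (again with made-up Gaussian cones and made-up narrowband monitor primaries), here's a sketch of that trick -- a "metameric match", where two totally different spectra produce the same tristimulus. Reproducing the scene's tristimulus with three primaries is just a 3x3 linear solve:

```python
import numpy as np

# Metamerism under a toy Gaussian cone model: a broad scene spectrum and
# a mix of three narrow "monitor primaries" are completely different
# spectra, yet they produce the same S/M/L triple.
wavelengths = np.arange(380, 781)

def gaussian(peak_nm, width_nm):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

cones = np.stack([gaussian(p, 40.0) for p in (440, 540, 570)])  # S, M, L

def tristimulus(spectrum):
    return np.trapz(cones * spectrum, wavelengths, axis=1)

scene = gaussian(560, 120.0)  # broad, smooth spectrum from the real scene
primaries = np.stack([gaussian(p, 10.0) for p in (450, 530, 620)])  # "B","G","R"

# Pick primary intensities that reproduce the scene's tristimulus:
# one linear equation per cone type, so it's a 3x3 solve.
A = np.array([tristimulus(p) for p in primaries]).T
weights = np.linalg.solve(A, tristimulus(scene))
monitor = weights @ primaries

print(tristimulus(scene))    # same three numbers...
print(tristimulus(monitor))  # ...from a totally different spectrum
```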

(Fun fact -- there are colors you can see that don't correspond to any single wavelength of light. Your M cones sit between S and L on the spectrum, so no single wavelength can strongly stimulate both S and L without also stimulating M -- but plenty of things are magenta.)
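You can check that in the same toy Gaussian model (still hypothetical curves, not the real ones): sweep every single wavelength and none of them gets S and L to respond more than M, but a blue+red mixture does.

```python
import numpy as np

# Sanity-checking the magenta claim with toy Gaussian cones: "magenta"
# means S and L both respond strongly while M stays low. No single
# wavelength can do that, because M sits between S and L.
wavelengths = np.arange(380, 781)

def gaussian(peak_nm, width_nm=40.0):
    return np.exp(-0.5 * ((wavelengths - peak_nm) / width_nm) ** 2)

S, M, L = gaussian(440), gaussian(540), gaussian(570)

# Monochromatic light at wavelength w gives cone responses (S[w], M[w], L[w]).
# Is there any w where S and L both beat M?
print(any(S[i] > M[i] and L[i] > M[i] for i in range(len(wavelengths))))
# -> False: M always wins on one side or the other.

# A blue+red mixture, though, lights up S and L while leaving M low.
blue, red = 440 - 380, 640 - 380  # indices into the wavelength grid
mix = [S[blue] + S[red], M[blue] + M[red], L[blue] + L[red]]
print([f"{v:.2f}" for v in mix])  # S and L both exceed M -> magenta
```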

TL;DR: computerized color is basically hacking your brain, and it works pretty well!