Comment by The_Colonel

5 years ago

I have a question for the more knowledgeable:

Color filters on the sensor split the light into three wavelength ranges - red, green, blue. Then photosites measure intensity of light, which means that sensor only knows that the incoming photon is "roughly green", but doesn't actually recognize its precise wavelength.

So e.g. when the light is pure orange, it falls down into red wavelength range and is counted as red.

Based on this I would expect that cameras would often produce pretty incorrect colors, but they are usually pretty good (after correcting for stuff like white balance).

You are right, this is a challenge. The wavelength of the light cannot be measured directly, only inferred from the intensity of the pixels with the different color filters. On the other hand, most reproductions of photos don't reproduce the original frequencies either. A computer screen has red, green and blue dots, which produce light at the corresponding wavelengths. So if you have orange light, you get a signal on the green and red pixels, and the green and red dots on your screen will light up, which in turn will be detected by the sensors for red and green light in your eyes. Nowhere in the chain, not even in your eye, is there a sensor for "orange" directly; it is just a mixture of the red and green sensitivities.

It is important to note that neither the sensor pixels nor your eyes have completely separate reactions to a wavelength. The sensitivity ranges strongly overlap. So for hues of green with rather long wavelengths, you already get some reaction on the red pixels, which gets stronger as you move towards orange, where both red and green pixels respond, until it gets more and more red and less green. The exact absorption curves of the sensor's color filters matter here; that is one reason different manufacturers have slightly different color rendition. On top of that comes calibration: when converting the raw image into a proper RGB image, one can further balance the response. For that, color calibration targets are used, which have something like 24 patches of different colors. Taking a photo of this target, the calibration software can correct both for the light illuminating the target and for the color response of your camera.
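
To make that calibration step concrete, here is a minimal sketch of fitting a 3x3 correction matrix by least squares, assuming you already have the averaged raw RGB values for the patches and the target's published reference values. All numbers and patch names below are made up, and real workflows use the full 24 patches, often in a device-independent space such as XYZ:

```python
import numpy as np

# Illustrative only: raw camera RGB, averaged over each patch of a
# calibration target, for three of the 24 patches (rows = patches).
raw_patches = np.array([
    [0.31, 0.20, 0.15],   # e.g. a "dark skin" patch
    [0.55, 0.40, 0.33],   # e.g. a "light skin" patch
    [0.22, 0.30, 0.45],   # e.g. a "blue sky" patch
    # ... the remaining patches would follow here ...
])

# Published reference RGB values for the same patches, same order
# (again, made-up numbers for the sketch).
reference = np.array([
    [0.40, 0.22, 0.16],
    [0.66, 0.47, 0.39],
    [0.26, 0.33, 0.50],
])

# Fit a 3x3 color-correction matrix so that raw @ ccm.T ≈ reference,
# in the least-squares sense.
ccm, *_ = np.linalg.lstsq(raw_patches, reference, rcond=None)
ccm = ccm.T

def correct(rgb):
    """Map a raw RGB triple through the fitted correction matrix."""
    return ccm @ np.asarray(rgb)

print(correct([0.31, 0.20, 0.15]))  # lands close to the first reference row
```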

A common reason for red-green colorblindness is that the affected persons have the sensitivities of the red and green receptors overlapping too strongly, so they lose the ability to differentiate. A green light creates almost as strong a signal in the "red" cells. One way to improve color vision for those people is glasses which increase that separation by absorbing the frequencies between the red and green colors.

  • > The wavelength of the light cannot be measured directly, only inferred by the intensity of the pixels with the different color filters.

    Well, it depends on how you interpret "directly", but you can get pretty far with a spectrometer, i.e. a device that splits light and measures its intensity spatially to collect a spectrum. It's not impossible, though, to build a camera based on that principle; you just need to sample the light in an array to make pixels.

    • I was talking here about typical photo cameras. Of course you can measure the wavelength of the light with other devices like spectrometers. I was specifically talking about camera sensors which have separate filters, usually in 3 colors, in front of the pixels. The sensors made by Sigma, formerly Foveon, use a different principle. They determine the wavelength by measuring how deep in the silicon the photons generate electrons; the depth depends on the wavelength of the light (a rough sketch of that idea follows below). However, it is more difficult to get a precise color response that way, as you cannot just use predefined color filters.
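
A rough, self-contained sketch of that depth idea: photon absorption in silicon follows a Beer-Lambert decay, and the absorption length grows with wavelength, so the charge collected in each of three stacked layers encodes the color. The absorption lengths and layer depths below are illustrative ballpark figures, not real Foveon parameters.

```python
import math

# Illustrative absorption lengths in silicon (micrometres); real values
# differ, but longer wavelengths do penetrate deeper.
absorption_length_um = {"blue_450nm": 0.4, "green_550nm": 1.5, "red_650nm": 3.5}

# Three stacked photodiode layers, each collecting the charge generated
# between its top and bottom depth (micrometres below the surface).
layers = [(0.0, 0.3), (0.3, 1.0), (1.0, 3.0)]

for name, length in absorption_length_um.items():
    # Beer-Lambert: fraction of photons absorbed within each layer.
    fractions = [math.exp(-top / length) - math.exp(-bottom / length)
                 for top, bottom in layers]
    print(name, [round(f, 2) for f in fractions])
# Blue deposits most of its charge in the top layer, red reaches the
# deepest layer, so the ratio between the layers encodes the wavelength.
```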

They work essentially the same way as a screen. In additive (RGB) terms, yellow is red + green, and orange sits between red and yellow, so orange is red + _some_ green. The in-camera processing will render the image based on the sensor input and the known properties of the filter (think of it as a color profile, a mapping between what the sensors read and the color). And the processing includes color interpolation for each pixel, as each pixel (photosite) only has one color filter, but the resulting image pixel has all three colors; these are calculated from the neighboring photosites.
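
To make the interpolation step concrete, here is a naive demosaicing sketch that assumes an RGGB Bayer layout and simply box-averages the nearest photosites of each color. Real cameras use edge-aware algorithms, so this is only a toy:

```python
import numpy as np

def naive_demosaic_rggb(mosaic):
    """Fill in the two missing channels at each photosite by averaging
    the nearest photosites that carry that channel (toy RGGB demosaic)."""
    h, w = mosaic.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Which photosite carries which filter in an RGGB layout.
    masks = {
        "R": (ys % 2 == 0) & (xs % 2 == 0),
        "G": (ys % 2) != (xs % 2),
        "B": (ys % 2 == 1) & (xs % 2 == 1),
    }
    out = np.zeros((h, w, 3))
    for c, (name, mask) in enumerate(masks.items()):
        known = np.where(mask, mosaic, 0.0)
        weight = mask.astype(float)
        # Sum of known samples in a 3x3 window, divided by how many
        # samples of this channel fall in that window.
        window_sum = sum(np.roll(np.roll(known, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        window_cnt = sum(np.roll(np.roll(weight, dy, 0), dx, 1)
                         for dy in (-1, 0, 1) for dx in (-1, 0, 1))
        out[..., c] = window_sum / np.maximum(window_cnt, 1)
    return out

# Uniform orange-ish light: strong signal on red photosites, weaker on
# green, almost none on blue.
mosaic = np.tile([[1.0, 0.5], [0.5, 0.05]], (4, 4))  # R,G / G,B responses
print(naive_demosaic_rggb(mosaic)[2, 2])  # roughly [1.0, 0.5, 0.05]
```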

Different sensors/cameras have different filters, and combined with the manufacturer-specific post-processing this gives different cameras/manufacturers a different color rendition and feel.

It is because our eyes are also "incorrect". The cameras, processing and displays were engineered to match our eyes' expectations. If some alien race with a sufficiently different physiology looked at our pictures, they could tell that something is off. (And it wouldn't even mean that their eyes are more correct, just that their model is different.)

> So e.g. when the light is pure orange, it falls down into red wavelength range and is counted as red.

If you only work with one photon at a time, then yes, you don't know if the photon passing through the filter was red, or orange, or with smaller probability even green or blue. But when you have trillions of photons passing through, you can "see" the difference by the relative intensity of light.

Remember that at the quantum level, things don't happen deterministically; you have to consider the probability that a given outcome occurs. So the photon has a certain probability to hit the filter and get absorbed, a certain probability to pass through the entire depth of the filter, a certain probability to hit the sensor without generating an event, a certain probability to hit the sensor and initiate a chemical reaction (for film or biological eyes) or an electron cascade (for CCD sensors), a certain probability to quantum tunnel to the other side of the universe...

So getting back to your question, when pure orange photons hit a red filter, many of them will make it through the filter, but not as many as if they were pure red photons. When pure orange photons hit a green filter, some of them will make it through, but not as many as if they were pure green photons. So if your brain knows what the "white point" of a given environment is, relative to that white color it'll see a specific combination of "some red, some green" as orange. (Of course, if known-to-be-white objects are already orange--like when you put on amber ski goggles--your brain will eventually adjust and recalibrate to perceive that color as something else... perception is tricky!)
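
To see the "relative intensity" point in aggregate, here is a toy Monte Carlo sketch with made-up pass probabilities; the numbers don't correspond to any real filter:

```python
import random

# Made-up probabilities that a single photon of each light makes it
# through each color filter.
pass_prob = {
    "red_light":    {"red_filter": 0.85, "green_filter": 0.05},
    "orange_light": {"red_filter": 0.70, "green_filter": 0.30},
}

random.seed(0)
n_photons = 100_000
for light, probs in pass_prob.items():
    counts = {
        f: sum(random.random() < p for _ in range(n_photons))
        for f, p in probs.items()
    }
    ratio = counts["green_filter"] / counts["red_filter"]
    print(light, counts, f"green/red ratio = {ratio:.2f}")
# A single photon cannot tell you which light it came from, but the
# aggregate counts can: the green/red ratio comes out around 0.06 for
# pure red light and around 0.43 for orange light.
```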

>> So e.g. when the light is pure orange, it falls down into red wavelength range and is counted as red

'Pure' orange light isn't red. It has a wavelength of 590–620 nm. Red is 625–740 nm and green is 495–570 nm, so 'orange' sits between red and green. The sensor filters each allow a range of wavelengths through, so the green filter is triggered as well as the red one. In RGB terms orange is 255, 127, 0, i.e. a strong red component and a smaller green component.
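
To put rough numbers on that overlap, here is a small sketch that models each filter's transmission as a Gaussian bump over wavelength. The peak positions and width are made up for illustration; real filter curves are broader and asymmetric:

```python
import math

# Rough peak wavelengths (nm) for the three filters, plus a common width.
filters = {"red": 610, "green": 540, "blue": 465}
width = 45.0  # standard deviation in nm, chosen only for illustration

def response(wavelength_nm):
    """Relative response of each filter to monochromatic light."""
    return {
        name: math.exp(-((wavelength_nm - peak) ** 2) / (2 * width ** 2))
        for name, peak in filters.items()
    }

print(response(600))  # roughly red 0.98, green 0.41, blue 0.01
```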

White balance is computed downstream from the sensor, and is used to correct for the colour cast that a coloured light source creates on objects, most noticeably on white ones. The human visual system auto-compensates for this, but cameras require special processing, sometimes done using presets for different types of light (sunlight, shade, tungsten, etc.).
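
For illustration, here is a minimal "gray world" auto-white-balance sketch, one of the simplest heuristics and not what any particular camera or preset actually implements: it assumes the scene averages out to neutral gray and scales each channel so the per-channel means match.

```python
import numpy as np

def gray_world_wb(image):
    """image: float array of shape (H, W, 3) in linear RGB."""
    means = image.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / means          # boost channels that came out weak
    return np.clip(image * gains, 0.0, 1.0)

# A scene lit by warm (orange-ish) light: red runs hot, blue comes out weak.
scene = np.random.default_rng(0).uniform(0, 1, (4, 4, 3)) * [1.0, 0.8, 0.5]
balanced = gray_world_wb(scene)
print(scene.reshape(-1, 3).mean(axis=0))     # unbalanced channel means
print(balanced.reshape(-1, 3).mean(axis=0))  # roughly equal after correction
```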

Your eyes cannot see orange directly, only as a combination of different stimulation intensities of the receptors for the base colors.

So as long as the filters in the camera have roughly the same transmission curves as the sensitivity curves of your color receptors, all is good.

However, to animals (or hypothetical aliens) with other color receptors, the images produced by photo prints and screens would look quite weird, with colors all wrong.

Filters allow rather large families of wavelengths through. You don't get just red, you get a fraction of green as well. Because your eyes only have three types of color photoreceptors, they can be fooled by playing back a similarly broad family of wavelengths.

Not an expert but: the orange color code is FFA500. It has red and green. I expect that when orange hits the sensor, it will register as X amount of red and less than X green?

  • That is in the RGB color model. In the physical world, there are (almost) infinitely many different spectra that could be perceived as the same "orange". It is honestly pretty amazing how well human color vision works despite that.

  • That is correct. The sensitivity of the "red" and "green" pixels overlaps in the orange light frequencies.
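
Decoding that hex code bears the intuition out (plain arithmetic, nothing camera-specific):

```python
# Split "FFA500" into its red, green and blue bytes.
r, g, b = (int("FFA500"[i:i + 2], 16) for i in range(0, 6, 2))
print(r, g, b)          # 255 165 0: full red, partial green, no blue
print(round(g / r, 2))  # 0.65: green at roughly two thirds of red
```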

If you tried to turn the sensor data back into light, there would not be enough information to do so accurately. Everything is built around human perception of color. When light hits your eye, it produces a "tristimulus value" for your brain. (The tristimulus is produced by the "S"-, "M"-, and "L"-sensitive cones; these roughly correspond to blue, green, and red, which is why those are the colors we use as base colors. But you could use other colors, and you could use more than 3 if you wanted to. There is no law of the universe that splits colors into red/green/blue parts... that's just a quirk of human anatomy.)

The goal of a digital camera is to be sensitive to colors in the same way that your eyes are. If that tristimulus can be recorded and played back, your brain won't know the difference. The colors your monitor emits when viewing a photograph could be totally unrelated to what was in the original scene, but since your brain is just looking for a tristimulus, as long as that same tristimulus is produced, you won't be able to tell the difference.
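
To sketch that "same tristimulus, different spectrum" point, here is a toy example using crude Gaussian stand-ins for the cone sensitivities; none of the curves or numbers are real physiological data:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)  # wavelength axis in nm

def bump(peak, width):
    """A Gaussian bump over the wavelength axis (toy stand-in for a curve)."""
    return np.exp(-((wl - peak) ** 2) / (2 * width ** 2))

cones = np.stack([bump(565, 50), bump(540, 45), bump(445, 30)])  # L, M, S rows

spectral_yellow = bump(580, 5)                        # narrow line near 580 nm
primaries = np.stack([bump(620, 15), bump(545, 15)])  # a "red" and a "green" lamp

target = cones @ spectral_yellow        # L, M, S excitation from the yellow
lamp_responses = cones @ primaries.T    # each lamp's L, M, S excitation
weights, *_ = np.linalg.lstsq(lamp_responses, target, rcond=None)

mix = weights @ primaries               # red+green spectrum at those intensities
print(target)       # L and M dominate, S is essentially zero
print(cones @ mix)  # L and M nearly identical, S still near zero: a metamer pair
```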

(Fun fact -- there are colors you can see that don't correspond to any single wavelength of light. No single wavelength stimulates your S and L cones without also stimulating the M cones in between, yet plenty of things are magenta.)

TL;DR: computerized color is basically hacking your brain, and it works pretty well!