Comment by surfingdino

5 days ago

A pixel is a sample: a collection of values of the Red, Green, and Blue components of light captured at a particular location in a (typically rectangular) area. Pixels have no physical dimensions. A camera sensor has no pixels; it has photosites (four colour-sensitive elements per rectangular area).

And what’s the difference between a photosite and a pixel? Sounds like a difference made up to correct other people.

  • A photosite is a set of four photosensitive electronic sensors that register levels of the RGB components of light: https://www.cambridgeincolour.com/tutorials/camera-sensors.h... The camera sensor turns the data captured by a single photosite into a single data structure (a pixel), a tuple of as many discrete values as there are components in a given colour space (three for RGB).

    • If you want to be pedantic, you shouldn’t say that a photosite has 4 sensors; depending on the color filter array you can have other numbers, like 9 or 36, too.

      And the difference is pure pedantry, because each photosite corresponds to a pixel in the image (unless we’re talking about lens correction?). It’s like making up a new word for monitor pixels because those are little lights (for OLED) while the pixel is just a tuple of numbers. I don’t see how calling the sensor-grid elements "pixels" could be misunderstood.


    • I didn't think a single photosite was directly converted to a single pixel; there are quite a number of different demosaicing algorithms (a rough sketch of the simplest, bilinear one appears after this thread).

      Edit: Upon doing some more reading, it sounds like a photosite, or sensel, isn't a group of sensors but a single sensor, which can pick up r, g, b, .. light - "each individual photosite, remember, records only one colour – red, green or blue" - https://www.canon-europe.com/pro/infobank/image-sensors-expl...

      I couldn't seem to find a particular name for the repeating RGGB/.. unit that a Bayer filter is an array of.
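
To make the photosite-vs-pixel distinction concrete, here is a minimal numpy sketch of the idea discussed above. It is the simplest bilinear scheme, not any camera's actual pipeline, and the function names and the random test image are made up for illustration: each photosite in an RGGB Bayer mosaic records a single value, and demosaicing interpolates the two missing components so that every output pixel ends up as a tuple of three values.

    import numpy as np

    def bayer_mosaic(rgb):
        """Simulate an RGGB sensor: each photosite keeps one colour sample."""
        h, w, _ = rgb.shape
        mosaic = np.zeros((h, w))
        mosaic[0::2, 0::2] = rgb[0::2, 0::2, 0]  # R photosites
        mosaic[0::2, 1::2] = rgb[0::2, 1::2, 1]  # G photosites (red rows)
        mosaic[1::2, 0::2] = rgb[1::2, 0::2, 1]  # G photosites (blue rows)
        mosaic[1::2, 1::2] = rgb[1::2, 1::2, 2]  # B photosites
        return mosaic

    def conv3(img, kernel):
        """3x3 weighted sum over each neighbourhood, plain numpy, edge-padded."""
        h, w = img.shape
        padded = np.pad(img, 1, mode="edge")
        out = np.zeros_like(img)
        for dy in range(3):
            for dx in range(3):
                out += kernel[dy, dx] * padded[dy:dy + h, dx:dx + w]
        return out

    def demosaic_bilinear(mosaic):
        """Interpolate the two missing colour components at every photosite."""
        h, w = mosaic.shape
        r_mask = np.zeros((h, w))
        r_mask[0::2, 0::2] = 1
        b_mask = np.zeros((h, w))
        b_mask[1::2, 1::2] = 1
        g_mask = np.ones((h, w)) - r_mask - b_mask
        weights = np.array([[1., 2., 1.], [2., 4., 2.], [1., 2., 1.]])
        out = np.zeros((h, w, 3))
        for c, mask in enumerate((r_mask, g_mask, b_mask)):
            # Known samples stay as-is; missing ones become weighted neighbour averages.
            out[:, :, c] = conv3(mosaic * mask, weights) / conv3(mask, weights)
        return out

    scene = np.random.rand(8, 8, 3)                  # stand-in for incoming light
    pixels = demosaic_bilinear(bayer_mosaic(scene))  # one RGB tuple per photosite
    print(pixels[3, 3])  # one pixel: three values (B measured here, R and G interpolated)

Because the sensor measures only one component per photosite, roughly two thirds of the colour values in the final image are interpolated rather than measured, which is why the photosite-to-pixel mapping is not as direct as it first sounds.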