Comment by justin_

5 days ago

> The camera integrates incoming light into a tiny square [...] giving a brightness (and with the Bayer filter in front of the sensor, a color) for the pixel

This is where I was trying to go. The pixel, the result at the end of all that, is the single value (which may be a color with multiple components, sure). The physical reality of the sensor having an area and generating a charge is not relevant to the signal processing that happens after that. Smith's point is that this sample is best understood as a point rather than a rectangle, which makes sense given that he was working on image processing in software, removed from displays and sensors.

It’s a single value, but it’s an integral over the square, not a point sample. If I shine a perfectly focused laser very close to the corner of one sensor pixel, I’ll still get a brightness value for that pixel. If it were a true point sample, only the brightness at that single point would contribute to the output.
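
A quick way to see the difference, in toy form (a minimal numpy sketch; the 4×4 pixel grid, 16× oversampling factor, and spot position are made-up illustration values):

```python
import numpy as np

# Toy scene: a 4x4 sensor-pixel grid, each pixel oversampled 16x16,
# with one bright "laser" spot just inside the corner of pixel (1, 1).
OS = 16                               # oversampling factor (arbitrary)
scene = np.zeros((4 * OS, 4 * OS))
scene[1 * OS + 1, 1 * OS + 1] = 1.0   # spot near the pixel's corner

# Sensor model: integrate (here, average) light over each pixel's area.
integrated = scene.reshape(4, OS, 4, OS).mean(axis=(1, 3))

# Point-sample model: read the scene only at each pixel's center.
points = scene[OS // 2::OS, OS // 2::OS]

print(integrated[1, 1])   # > 0: the integrating pixel still sees the spot
print(points[1, 1])       # 0.0: a center point sample misses it entirely
```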

And depending on your application, you absolutely need to account for sensor properties like pixel pitch and the color filter array. They affect moiré behavior and can create artifacts.

I’m not saying you can’t think of a pixel as a point sample, but correcting other people who say it’s a little square is just wrong.

  • > They affect moiré behavior and can create artifacts.

    Yes. The spacing between point samples determines the sampling frequency, a fundamental characteristic of any DSP signal (see the sketch below).
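
    Concretely (a small numpy sketch; the 4 µm pitch and scene frequency are made-up numbers): the pitch sets the spatial sampling rate, and any pattern finer than Nyquist folds down to a lower frequency, which is the moiré behavior mentioned above.

    ```python
    import numpy as np

    pitch = 4.0                  # hypothetical pixel pitch, micrometres
    fs = 1.0 / pitch             # spatial sampling frequency, cycles/um
    nyquist = fs / 2             # finest pattern captured without aliasing

    # A scene pattern finer than Nyquist...
    f_scene = 0.35               # cycles/um, well above nyquist (0.125)
    x = np.arange(200) * pitch   # sample positions on the pixel grid
    samples = np.sin(2 * np.pi * f_scene * x)

    # ...shows up at a folded (aliased) frequency instead.
    spectrum = np.abs(np.fft.rfft(samples))
    f_measured = spectrum.argmax() * fs / len(samples)
    print(nyquist, f_measured)   # 0.125, 0.1: the pattern has aliased
    ```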

It's never a point sample. Light is integrated over a finite area to form a single color sample. During Bayer demosaicking, contributions from neighbouring pixels are integrated to form samples of the complementary color channels.
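
To make the neighbour-contribution part concrete, here's a toy version (a minimal numpy sketch, assuming an RGGB layout and plain bilinear interpolation; real pipelines use more sophisticated demosaicking):

```python
import numpy as np

# Toy 4x4 raw Bayer mosaic, RGGB layout: each photosite recorded only
# one channel, integrated over its own small area of the sensor.
raw = np.arange(16, dtype=float).reshape(4, 4)

# Which photosites carry which channel in an RGGB pattern.
R = np.zeros_like(raw); R[0::2, 0::2] = 1
B = np.zeros_like(raw); B[1::2, 1::2] = 1
G = 1 - R - B

def demosaic_channel(raw, mask):
    """Bilinear demosaic: estimate the channel at every pixel by
    averaging whichever of its 3x3 neighbours actually carry it."""
    h, w = raw.shape
    v = np.pad(raw * mask, 1)      # known channel values, zero elsewhere
    m = np.pad(mask, 1)            # where those values exist
    total = np.zeros_like(raw)
    count = np.zeros_like(raw)
    for dy in range(3):            # sum each 3x3 neighbourhood by shifting
        for dx in range(3):
            total += v[dy:dy + h, dx:dx + w]
            count += m[dy:dy + h, dx:dx + w]
    return np.where(mask.astype(bool), raw, total / count)

red_full = demosaic_channel(raw, R)
# At the green photosite (0, 1), the red sample is built entirely from
# neighbouring red photosites: a sample of a complementary channel.
print(red_full[0, 1])                   # 1.0, average of raw[0, 0] and raw[0, 2]
print(demosaic_channel(raw, G)[0, 0])   # 2.5, green estimated at a red photosite
```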

  • > Light is integrated over a finite area to form a single color sample. During Bayer demosaicking, contributions from neighbouring pixels are integrated to form samples of the complementary color channels.

    Integrated into a single color sample indeed. After all the integration, demosaicking, and filtering, a single sample is calculated. That’s the pixel. I think that’s where the confusion is coming from. To Smith, the “pixel” is the sample that lives in the computer.

    The actual realization of the image sensors and their filters is not encoded in a typical image format, nor used in typical high-level image-processing pipelines. For abstract representations of images, the “pixel” abstraction is used.

    The initial reply to this chain focused on how camera sensors capture information about light, and yes, those sensors take up space and operate over time. But the pixel, the piece of data in the computer, is just a point among many.

    • > Integrated into a single color sample indeed. After all the integration, demosaicking, and filtering, a single sample is calculated. That’s the pixel. I think that’s where the confusion is coming from. To Smith, the “pixel” is the sample that lives in the computer.

      > But the pixel, the piece of data in the computer, is just a point among many.

      Sure, but saying that this is the pixel, and dismissing all other forms as not "true" pixels, is arbitrary. The physically realized pixels (including printer dots) are equally valid forms of pixels. If anything, it would be impossible for humans to perceive pixels at all without interacting with those physical realizations.

    • IMO Smith misapplied the term "pixel" to mean "sample". A pixel is a physical object; a sample is a logical value that corresponds in some way to the state of that physical object (either as an input from a sensor pixel or an output to a display pixel) and is what signal processing actually operates on. Samples aren't squares; pixels (usually) are.