Comment by Veedrac
5 days ago
A pixel is simply not a point sample. A camera does not take point-sample snapshots; it integrates the light falling over little rectangular areas. A modern display does not reconstruct an image the way a DAC reconstructs sound; it renders little rectangles of light, generally with visible XY edges.
The paper's claim applies at least somewhat sensibly to CRTs, but one mustn't imagine that the voltage interpolation and shadow masking a CRT does correspond meaningfully to how modern displays work... and even for CRTs it was never actually correct to claim that pixels were point samples.
It is pretty reasonable in the modern day to say that an idealized pixel is a little square. A lot of graphics operates under this simplifying assumption, and in practice it works better than most alternatives.
> A camera does not take point-sample snapshots; it integrates the light falling over little rectangular areas.
Integrates this information into what? :)
> A modern display does not reconstruct an image the way a DAC reconstructs sound
Sure, but some software may apply resampling over the original signal for the purposes of upscaling, for example. "Pixels as samples" makes more sense in that context.
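As a minimal sketch of that reading (plain numpy; linear interpolation chosen purely for brevity, real resamplers use better kernels), upscaling by treating pixel values as point samples:

```python
import numpy as np

# Treat each pixel value as a point sample at integer coordinates 0..n-1,
# reconstruct a signal between them (linear interpolation here), and
# resample on a 4x finer grid to upscale.
samples = np.array([0.0, 1.0, 0.5, 0.25])
src_x = np.arange(len(samples))                    # sample positions (points)
dst_x = np.linspace(0, len(samples) - 1, 4 * len(samples) - 3)
upscaled = np.interp(dst_x, src_x, samples)        # evaluate between the points
print(upscaled)
```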
> It is pretty reasonable in the modern day to say that an idealized pixel is a little square.
I do agree with this actually. A "pixel" in popular terminology is a rectangular subdivision of an image, leading us right back to TFA. The term "pixel art" makes sense with this definition.
Perhaps we need better names for these things. Is the "pixel" the name for the sample, or is it the name of the square-ish thing that you reconstruct from image data when you're ready to send to a display?
> Integrates this information into what? :)
Into electric charge? I don’t understand the question, and it sounds like the question is supposed to lead readers somewhere.
The camera integrates the light incoming on a tiny square into an electric charge and then reads out the charge (at least for a CCD), giving a brightness (and, with the Bayer filter in front of the sensor, a color) for the pixel. So it's a measurement over the tiny square, not a point sample.
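A toy numerical version of that distinction, for what it's worth (the scene function and pixel geometry are invented; Bayer filtering, noise, and readout are ignored):

```python
import numpy as np

# A continuous "scene": brightness as a function of position (x, y).
def scene(x, y):
    return 0.5 + 0.5 * np.sin(2 * np.pi * x) * np.cos(2 * np.pi * y)

def sensor_pixel(cx, cy, width, n=32):
    # Average the scene over a width x width square centered at (cx, cy),
    # standing in for the charge a sensor well accumulates over its area.
    offsets = ((np.arange(n) + 0.5) / n - 0.5) * width
    xs, ys = np.meshgrid(cx + offsets, cy + offsets)
    return scene(xs, ys).mean()

point  = scene(0.25, 0.25)               # a point sample at the pixel center
square = sensor_pixel(0.25, 0.25, 0.1)   # the area measurement a CCD performs
print(point, square)                     # differ when the scene varies within the square
```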
> The camera integrates the light incoming on a tiny square [...] giving a brightness (and, with the Bayer filter in front of the sensor, a color) for the pixel
This is where I was trying to go. The pixel, the result at the end of all that, is a single value (which may be a color with multiple components, sure). The physical reality of the sensor having an area and generating a charge is not relevant to the signal processing that happens afterward. Smith's claim is that this sample is best understood as a point rather than a rectangle, which makes sense given that he was working on image processing in software, a step removed from displays and sensors.
A slightly tangential comment: integrating a continuous image over squares tiling the image plane might be best viewed as applying a box filter to the continuous image, resulting in another continuous image, then sampling it point-wise at the center of each square.
It turns out that when you view things that way, pixels as points continues to make sense.
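A quick numerical restatement of that view (1-D, unit-width pixels; midpoint averaging as a crude stand-in for exact integration):

```python
import numpy as np

# A continuous 1-D "image".
f = lambda x: np.sin(x) + 0.3 * np.sin(5 * x)

def box_filtered(x, width=1.0, n=10_000):
    # f convolved with a unit-area box kernel: the result is itself a
    # continuous image, defined at every x, not just at pixel centers.
    ts = np.linspace(x - width / 2, x + width / 2, n)
    return f(ts).mean()

# Averaging f over unit squares centered on the integer grid ...
pixel_values = np.array([box_filtered(c) for c in range(5)])

# ... is exactly point-sampling the box-filtered image at those centers,
# and that filtered image also exists between the centers:
print(pixel_values)
print(box_filtered(1.5))   # a perfectly good value halfway between pixels
```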
The representation of pixels on the screen is not necessarily normative for the definition of the pixel. Indeed, since different display devices use different representations, as you point out, it can't really be. You have to look at the source of the information. Is it a hit mask for a game? Then they are squares. Is it a heatmap of some analytical function? Then they are points. And so on.
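For the hit-mask case, a minimal sketch (mask and coordinate convention invented for the example) where pixel (i, j) is the unit square it covers:

```python
import numpy as np

# Hit mask: pixel (i, j) covers the unit square [j, j+1) x [i, i+1),
# so a continuous point (x, y) "hits" wherever the mask is set.
mask = np.array([[0, 1],
                 [1, 0]], dtype=bool)

def hits(x, y):
    i, j = int(np.floor(y)), int(np.floor(x))      # square containing the point
    in_bounds = 0 <= i < mask.shape[0] and 0 <= j < mask.shape[1]
    return in_bounds and bool(mask[i, j])

print(hits(1.5, 0.5))   # True: inside the set square at row 0, column 1
print(hits(0.5, 0.5))   # False: that square is empty
```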
DACs do a zero-order hold, which is equivalent to a pixel as a square.
They contain zero-order holds, but they also contain reconstruction filters. Ideal reconstruction filters have infinite width, which makes them clearly different in kind. And most DACs nowadays are delta-sigma, so the zero-order hold is applied to a binary value at a much higher frequency than the source signal. The dissimilarities matter.
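A rough numpy sketch of that difference (the Hann-windowed sinc stands in for a real reconstruction filter; delta-sigma modulation is omitted entirely):

```python
import numpy as np

rate_up = 16                                  # oversampling factor
t = np.arange(32)
x = np.sin(2 * np.pi * 3 * t / 32)            # a 3-cycle tone, 32 samples

# Zero-order hold: each sample held flat -- the "little square" of audio.
zoh = np.repeat(x, rate_up)

# Windowed-sinc reconstruction: every sample contributes a wide, smooth
# kernel, so the output is not piecewise constant. (Ideal reconstruction
# uses an infinitely wide sinc; this one is truncated and Hann-windowed.)
taps = 8 * rate_up + 1
n = np.arange(taps) - taps // 2
kernel = np.sinc(n / rate_up) * np.hanning(taps)
impulses = np.zeros(len(x) * rate_up)
impulses[::rate_up] = x
smooth = np.convolve(impulses, kernel, mode="same")

print(np.round(zoh[12:20], 2))     # flat steps at each sample value
print(np.round(smooth[12:20], 2))  # curves smoothly through the same region
```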