Comment by grandempire
5 days ago
Ok, but that’s not what a digital image is. Images are designed to be invariant across camera capture and display hardware. The panel driver should interpret the DSP representation into an appropriate electronic pixel output.
Yeah but the article is about a pixel, which has different meanings. Making blanket statements is not helpful in resolving definitions.
Truth is, a pixel is both a sample and a transducer. And in transduction, a pixel is both an integrator and an emitter.
I’ll quote my other comment:
> If you are looking to understand how your operating system will display images, or how your graphics drivers work, or how photoshop will edit them, or what digital cameras aim to produce, then it’s the point sample definition.
Sometimes yes. Sometimes no! There are certainly situations where a pixel will be scaled, displayed, edited or otherwise treated as a little square.
Well, "what a digital image is" is a sequence of numbers. There's no single correct way to interpret the numbers; it depends on what you want to accomplish. If your digital image is a representation of, say, the dead components in an array of sensors, the signal-processing interpretation of samples may not be useful for figuring out which sensors you should replace.
> There's no single correct way to interpret the numbers
They are just bits in a computer. But there is a correct way to interpret them in a particular context. For example, 32 bits can be meaningless, or they can have a well-defined interpretation as a two's complement integer.
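To make that concrete, a minimal sketch (Python; the bit pattern below is made up for illustration) of the same 32 bits read three different ways:

```python
import struct

# A made-up 32-bit pattern; by itself it means nothing.
raw = bytes.fromhex("c0490fdb")

# The same four bytes under three different, equally well-defined interpretations:
as_unsigned = struct.unpack(">I", raw)[0]   # unsigned 32-bit integer
as_signed   = struct.unpack(">i", raw)[0]   # two's complement integer
as_float    = struct.unpack(">f", raw)[0]   # IEEE 754 single precision

print(as_unsigned)  # 3226013659
print(as_signed)    # -1068953637
print(as_float)     # -3.1415927...
```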
If you are looking to understand how an operating system will display images, or how graphics drivers work, or how Photoshop will edit them, or what digital cameras produce, then it's the point-sample definition.
Cameras don't take point samples. That's an approximation, just as inaccurate as the rectangle approximation (see the sketch below).
And for pixel art, the intent is usually far from point samples of a smoothly varying color field.
Multiple interpretations matter in different contexts, even within the computer.
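To illustrate the point about cameras, a toy 1D sketch (NumPy; the scene and pixel count are made up) contrasting idealised point samples with averaging the light over each pixel's footprint, which is closer to what a sensor does:

```python
import numpy as np

def scene(x):
    """A made-up continuous scene: a hard edge at x = 0.37."""
    return np.where(x > 0.37, 1.0, 0.0)

n = 5  # five "pixels" across the scene

# Idealised point samples at the pixel centres:
centres = (np.arange(n) + 0.5) / n
print(scene(centres))              # [0. 0. 1. 1. 1.]

# Closer to a real sensor: average the light over each pixel's footprint.
taps = np.linspace(0, 1, n * 1000).reshape(n, 1000)
print(scene(taps).mean(axis=1))    # ~[0. 0.15 1. 1. 1.]
```

The pixel straddling the edge records a partial value that a point sample at its centre misses entirely.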
We commonly use hardware like LCDs and printers that render a sharp transition between pixels, without the Gibbs phenomenon. CRT scanlines were close to an actual 1D signal (though not directly controlled by the pixels, which video cards still tried to make square-ish), but AFAIK we've never had a display that reconstructs the continuous 2D signal we assume in image processing.
In signal processing you have a finite number of samples of an infinitely precise continuous signal, but in image processing you have a discrete representation mapped to a discrete output. It's continuous only when you choose to model it that way. Discrete → continuous → discrete conversion is a useful tool in some cases, but it's not the whole story.
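As a rough 1D sketch (NumPy; the sample values are made up), here is that round trip next to its "little square" counterpart:

```python
import numpy as np

# A row of pixel values: just a discrete sequence of numbers.
samples = np.array([0.0, 0.2, 0.9, 0.4, 0.1])

def resample_linear(values, new_length):
    """Discrete → continuous → discrete: reconstruct with a linear (tent)
    filter, then sample the reconstruction at the new positions."""
    old_pos = np.arange(len(values))
    new_pos = np.linspace(0, len(values) - 1, new_length)
    return np.interp(new_pos, old_pos, values)

def resample_nearest(values, new_length):
    """The "little square" view: each output just copies its nearest input."""
    new_pos = np.linspace(0, len(values) - 1, new_length)
    return values[np.round(new_pos).astype(int)]

print(resample_linear(samples, 9))
print(resample_nearest(samples, 9))
```

Whether the smooth or the blocky result is "right" depends entirely on what the numbers were meant to represent.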
There are images designed for very specific hardware, like sprites for CRT monitors, or font glyphs rendered for LCD subpixels. More generally, nearly all bitmap graphics assume that pixel alignment is meaningful (and that was true even in the CRT era, before the pixel grid could be aligned with the display's subpixels). Boxes and line widths, especially in GUIs, tend to be designed as integer multiples of pixels. Fonts have (or had) hinting for aligning to the pixel grid.
Lack of grid alignment, the equivalent of a phase shift that wouldn't matter in pure signal processing, is visually quite noticeable at resolutions where the hardware pixels are little squares to the naked eye.
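A minimal sketch of that effect (NumPy; the pixel values are made up): a half-pixel phase shift applied to a one-pixel-wide line under linear reconstruction.

```python
import numpy as np

# A one-pixel-wide line, as a row of pixel values.
row = np.array([0, 0, 0, 1, 0, 0, 0], dtype=float)

# Sample the linearly reconstructed row at positions shifted by half a pixel.
positions = np.arange(len(row)) + 0.5
shifted = np.interp(positions, np.arange(len(row)), row)

print(row)      # [0. 0. 0. 1. 0. 0. 0.]   crisp when aligned to the grid
print(shifted)  # [0. 0. 0.5 0.5 0. 0. 0.] the same line, smeared over two pixels
```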
I think you are saying that there are other kinds of displays besides typical monitors, and that those displays show different kinds of images. I don't disagree.
I'm saying "digital images" are captured by and created for hardware that has the "little squares". This defines what their pixels really are. Pixels in these digital images actually represent discrete units, and not infinitesimal samples of waveforms.
Since the pixels never were a waveform, never were sampled from such a signal (even light in camera sensors isn't sampled along these axes), and don't get displayed as a 2D waveform, the pixels-as-points model from the article at the top of this thread is just an arbitrary abstract model, not an accurate representation of what pixels are.
Well, the camera sensor captures a greater dynamic range than the display or print media, or perhaps even your eyes, so something has to give. If you have ever worked with a linear file without gamma correction, you will understand what I mean.
And that full dynamic range is in the image's point samples, ready to be remapped for a physical output.
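For concreteness, a minimal sketch of one such remapping (NumPy; the linear sample values are made up), using the standard sRGB transfer function:

```python
import numpy as np

def linear_to_srgb(v):
    """Map linear-light values in [0, 1] to sRGB-encoded values in [0, 1]
    using the standard sRGB transfer function."""
    v = np.clip(v, 0.0, 1.0)
    return np.where(v <= 0.0031308,
                    12.92 * v,
                    1.055 * np.power(v, 1 / 2.4) - 0.055)

linear_samples = np.array([0.001, 0.01, 0.18, 0.5, 1.0])  # 0.18 ≈ middle grey
print(linear_to_srgb(linear_samples))  # middle grey lands around 0.46
```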
That's only for images coming directly from a camera. If the images were generated some other way, the idea that a pixel is a little square is sometimes fine (for example, pixel art).