Comment by jlokier
2 days ago
> If the input image is only simple lines whose coverage can be correctly computed (don't know how to do this for curves?) then what's missing?
Computing pixel coverage accurately isn't enough for the best results. Using it as the alpha channel for blending foreground over background colour is the same thing as sampling a box filter applied to the underlying continuous vector image.
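To make the equivalence concrete, here's a minimal sketch (names are my own, not from any particular renderer): coverage-as-alpha is just linear interpolation between background and foreground, which is exactly what a box filter produces when every sample inside the pixel's footprint is weighted equally.

```python
def blend(fg, bg, coverage):
    # Coverage-as-alpha blend, assuming fg/bg are linear-light values
    # and coverage is the fraction of the pixel the shape covers.
    # Equivalent to box filtering: samples inside the shape contribute
    # fg, samples outside contribute bg, all with uniform weight.
    return fg * coverage + bg * (1.0 - coverage)

# A shape covering a quarter of the pixel, white over black:
quarter = blend(1.0, 0.0, 0.25)
```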
But often a box filter isn't ideal.
Pixels on the physical screen have a shape and non-uniform intensity across their surface.
RGB sub-pixels (or other colour basis) are often at different positions, and the perceptual luminance differs between sub-pixels in addition to the non-uniform intensity.
If you don't want to tune rendering for a particular display, a non-box filter can still sometimes improve results.
An alternative is to compute the 2D integral of a filter kernel over the coverage area for each pixel. If the kernel has separate R, G, B components, to account for sub-pixel geometry, then you may also need a further step that optimises perceptual luminance while minimising colour fringing on detailed geometry.
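A sketch of that integral, approximated numerically by supersampling (the Gaussian kernel, its width, and the sample count are illustrative choices, not anything prescribed; a tent or Mitchell kernel would slot in the same way, and a real renderer would integrate the kernel analytically over the covered region):

```python
import math

def gaussian(x, y, sigma=0.5):
    # Hypothetical 2D filter kernel, centred on the pixel.
    return math.exp(-(x * x + y * y) / (2.0 * sigma * sigma))

def filtered_coverage(inside, px, py, radius=1.0, n=8):
    # Approximate the 2D integral of the kernel over the covered region:
    # supersample a (2*radius)^2 footprint around the pixel centre,
    # weight each sample by the kernel, and normalise so a fully
    # covered pixel yields 1.0.
    total = weighted = 0.0
    for i in range(n):
        for j in range(n):
            # Sample positions spanning [-radius, +radius) about the centre.
            dx = (i + 0.5) / n * 2.0 * radius - radius
            dy = (j + 0.5) / n * 2.0 * radius - radius
            w = gaussian(dx, dy)
            total += w
            if inside(px + dx, py + dy):
                weighted += w
    return weighted / total

# A straight edge through the pixel centre (half-plane x >= 0) gives
# 0.5 for any symmetric kernel, same as box filtering; the kernels
# only disagree away from this symmetric case.
edge = filtered_coverage(lambda x, y: x >= 0.0, 0.0, 0.0)
```

Note the footprint extends beyond the pixel itself (radius 1.0 here), which is the usual reason non-box filters look smoother: neighbouring geometry contributes a little to each pixel.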
Gamma correction helps, and fortunately that's easily combined with coverage. For example, slowly rolling titles/credits will shimmer less at the edges if gamma is applied correctly.
However, these days with Retina/HiDPI-style displays, these issues are reduced.
For example, macOS removed sub-pixel anti-aliasing from text rendering in recent years, because they expect you to use a Retina display, and they've decided regular whole-pixel coverage anti-aliasing is good enough on those.