Light sources in video games and such. If you have a light source with a very large falloff range illuminating a large area, you'll have noticeable steps in the gradient.
Ordered dithering is a very cheap solution to this.
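The idea is just to add a tiled threshold matrix to the light value before quantizing, so the banding steps break up into a fixed pattern. A minimal sketch in Python/NumPy, assuming a recursively built Bayer matrix and an arbitrary 8-level output (the level count and resolution here are illustrative, not from any particular engine):

```python
import numpy as np

def bayer(n):
    """2**n x 2**n ordered-dither index matrix, built recursively."""
    m = np.array([[0, 2],
                  [3, 1]])
    for _ in range(n - 1):
        m = np.block([[4 * m + 0, 4 * m + 2],
                      [4 * m + 3, 4 * m + 1]])
    return m

B = bayer(2)                     # 4x4 matrix, a permutation of 0..15
thresh = (B + 0.5) / B.size      # per-pixel thresholds in (0, 1)

# A smooth light falloff quantized to 8 output levels:
falloff = np.linspace(1.0, 0.0, 64)[None, :].repeat(4, axis=0)
plain    = np.floor(falloff * 7 + 0.5)                        # visible bands
dithered = np.floor(falloff * 7 + np.tile(thresh, (1, 16)))   # bands broken up
```

Because each pixel only needs its own coordinates and the tiny tiled matrix, this maps directly onto a fragment shader with no neighbor reads.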
Wouldn't random noise be a more appropriate solution in that case?
Truly random per-frame noise looks bad and grainy (imho), but various noise functions work well, yes.
Many implementations just sample some noise texture, possibly because that's cheaper - but hardware is so fast nowadays that even sampling some non-trivial noise function many times per pixel hardly registers.
A deferred 2.5D renderer I wrote a while ago just does this screen-wide on the entire framebuffer in a post-process step, and that pretty much hides all banding already.
You might call this random noise (though it's static). It's enough if you're operating in high precision framebuffers for most rendering steps, and will do a decent job at hiding banding when things are downsampled to whatever the screen supports.
If you can't afford to just rgba16float everything, you might have to be smarter about what you do at the individual-light level, probably by using some fancier noise and/or making sure overlapping lights don't amplify the noise.
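The post-process step described above can be sketched roughly like this: a static, per-pixel noise offset baked once (a seeded RNG standing in for a noise texture), applied when resolving the high-precision framebuffer down to 8 bits. This is a Python/NumPy mock-up of the idea; the seed, resolution, and gradient are made up:

```python
import numpy as np

rng = np.random.default_rng(1234)        # fixed seed: static noise, no per-frame shimmer
H, W = 4, 64
noise = rng.random((H, W)) - 0.5         # one offset per pixel, in [-0.5, 0.5)

def resolve_to_8bit(hdr):
    """Quantize a high-precision framebuffer (values in [0, 1], e.g. from an
    rgba16float target) to 8 bits per channel; the +/- half-LSB noise turns
    hard band edges into an even speckle."""
    return np.clip(np.floor(hdr * 255.0 + noise + 0.5), 0, 255).astype(np.uint8)

# A shallow gradient that would band badly at 8 bits:
hdr = np.linspace(0.0, 4.0 / 255.0, W)[None, :].repeat(H, axis=0)
out = resolve_to_8bit(hdr)
```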
Shallow color gradients (e.g. blue sky or anime) result in visible banding on 8bpc displays, which is a large fraction of displays. Ordered dithering is GPU-friendly, so it's useful to reduce higher-bpc content to those display formats without introducing banding.
In video games or graphics, dithering can be an alternative to transparency, and it's more performant too. I see this a lot on handheld consoles.
As screen resolution and density increase, dithering could even replace transparency outright, as long as you don't look too closely.
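This trick is often called screen-door (or stipple) transparency: compare each fragment's alpha against a tiled Bayer threshold and drop the fragment if it loses, so no blending or depth sorting is needed. A small Python/NumPy sketch of the resulting mask (in a real shader this would be a per-fragment `discard`; the matrix and sizes are the usual 4x4 Bayer pattern, everything else is illustrative):

```python
import numpy as np

BAYER4 = np.array([[ 0,  8,  2, 10],
                   [12,  4, 14,  6],
                   [ 3, 11,  1,  9],
                   [15,  7, 13,  5]])
THRESH = (BAYER4 + 0.5) / 16.0           # thresholds in (0, 1)

def screen_door_mask(alpha, h, w):
    """True = draw the fragment fully opaque, False = discard it.
    At alpha = 0.5, exactly half the pixels in each 4x4 tile survive."""
    ys, xs = np.mgrid[0:h, 0:w]
    return alpha >= THRESH[ys % 4, xs % 4]

mask = screen_door_mask(0.5, 8, 8)
```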
Any time you want a sequence to be deterministic, seem familiar, and also have a roughly even mix of element types. Think shuffling playlists, ordering search results, etc.
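The "even mix" part is essentially one-dimensional error diffusion: each category accumulates credit at its proportional rate, and you emit from whichever category is furthest ahead. A toy sketch of that idea (the function name and the weighted-round-robin scheme are my own illustration, not a standard API):

```python
def even_interleave(groups):
    """Deterministically interleave items from several groups so each group
    appears at a roughly even rate proportional to its size."""
    total = sum(len(g) for g in groups.values())
    credit = {k: 0.0 for k in groups}    # accumulated "error" per group
    idx = {k: 0 for k in groups}
    out = []
    for _ in range(total):
        for k in groups:
            credit[k] += len(groups[k]) / total
        # Emit from the non-exhausted group with the most accumulated credit.
        k = max((k for k in groups if idx[k] < len(groups[k])),
                key=lambda k: credit[k])
        out.append(groups[k][idx[k]])
        idx[k] += 1
        credit[k] -= 1.0
    return out

playlist = even_interleave({"rock": ["r1", "r2", "r3"], "jazz": ["j1"]})
```

The single jazz track lands in the middle of the rock tracks rather than being bunched at either end, and the result is the same every run.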
Lots of sensors these days will give you 10 or 12 bits of data per color channel. You may want ordered dithering when previewing on an 8-bit display.
Yep, e-ink is a good practical use. In fact, any system with a black-and-white display uses ordered dithering when it wants to draw images.
I would think that it would only be beneficial on devices that don't maintain a full frame rendering buffer or if they wanted to do partial updates.
If the full frame is maintained with more values, then quite a lot of approaches like Floyd-Steinberg optimize well enough to be integrated with a full-frame update.
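For reference, a minimal (and deliberately unoptimized) full-frame Floyd-Steinberg pass looks like the sketch below. It needs the whole frame in higher precision, which is exactly the constraint being discussed: quantization error from each pixel is pushed onto neighbors that haven't been visited yet.

```python
import numpy as np

def floyd_steinberg(img, levels=2):
    """Error-diffusion dither of a grayscale frame with values in [0, 1].
    Classic 7/16, 3/16, 5/16, 1/16 weight distribution."""
    out = img.astype(float).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            old = out[y, x]
            new = min(max(round(old * (levels - 1)), 0), levels - 1) / (levels - 1)
            out[y, x] = new
            err = old - new
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return out

img = np.tile(np.linspace(0.0, 1.0, 16), (16, 1))
dithered_frame = floyd_steinberg(img)   # every pixel ends up 0.0 or 1.0
```

The serial left-to-right, top-to-bottom dependency is why it wants a full frame buffer, in contrast to ordered dithering where every pixel is independent.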