Comment by Terretta
3 years ago
The author tested for this by redoing the experiment with the detail clipped into the highlights, so it was completely gone, and the model still added the detail back.
> To further drive home my point, I blurred the moon even further and clipped the highlights, which means the area which is above 216 in brightness gets clipped to pure white - there's no detail there, just a white blob - https://imgur.com/9XMgt06
> I zoomed in on the monitor showing that image and, guess what, again you see slapped on detail, even in the parts I explicitly clipped (made completely 100% white): https://imgur.com/9kichAp
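For anyone who wants to reproduce the clipping step the author describes, a minimal sketch in Python (NumPy + Pillow; the filenames are hypothetical stand-ins, not from the post):

```python
# Reproduce the clipping described above: everything brighter than 216
# becomes pure white, so no detail survives in those regions.
import numpy as np
from PIL import Image

img = np.array(Image.open("moon_blurred.png").convert("L"))

clipped = img.copy()
clipped[clipped > 216] = 255  # 255 == pure white (FFFFFF in RGB terms)

Image.fromarray(clipped).save("moon_clipped.png")
```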
While I think this is a great test, I'm not really sure what that second picture is supposed to be showing. Kinda seems like they used the wrong picture entirely.
The second image is a video; it shows them zooming in and the image switching from the white blob to detail.
Ah! Thank you! I wasn't getting the controls for some reason.
Given how small the pure-white areas are, tbh I'm not sure I'd consider that as having "added detail". It has texture that matches the rest of the moon, but that's about as far as I'd be comfortable claiming... and that seems fine, basically an un-blurring artifact like you see in tons of sharpening algorithms.
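To illustrate the "un-blurring" idea: blur spreads light from a clipped region into its unclipped neighbors, so deconvolution can pull plausible-looking texture back into a small blown-out patch without any external data. A rough sketch using Richardson-Lucy deconvolution from scikit-image (the Gaussian PSF is an assumption, and the filenames are hypothetical):

```python
# Rough sketch: deconvolve a blurred, clipped image so texture from the
# unclipped surroundings bleeds back into a small saturated patch.
import numpy as np
from skimage import io, img_as_float
from skimage.restoration import richardson_lucy

img = img_as_float(io.imread("moon_clipped.png", as_gray=True))

# Assumed point-spread function: a small normalized Gaussian kernel.
x = np.arange(-7, 8)
g = np.exp(-(x**2) / (2 * 2.0**2))
psf = np.outer(g, g)
psf /= psf.sum()

deblurred = richardson_lucy(img, psf, 30)  # 30 iterations

io.imsave("moon_deblurred.png", (deblurred * 255).astype(np.uint8))
```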
I do think this "clip the data, look for impossible details" is a very good experiment and one that seems likely to bear fruit, since it's something cameras "expect" to encounter. I just don't think this instance is all that convincing.
---
And to be clear, I absolutely believe Samsung is faking it, and hiding behind marketing jargon. The outputs are pretty ridiculous. They may not be "photoshopping on a texture", but training an AI on moon pictures and asking it to add those details to images is... well, the same thing Photoshop has features for. It makes no difference - it's not maximizing the data available, it's injecting external data, and they deserve to be slammed for that.
I watched the video, and in this case the "recovered" detail looks natural to me. The original case does look like some kind of moon-specific processing, but this one, with clipped highlights, seems achievable with classical CV.
What? Clipped means gone - the pixel is FFFFFF - how can CV look at an FFFFFF pixel, surrounded by FFFFFF pixels, and get a moon-looking pixel out?
Because the nearby pixels are not clipped.
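A minimal sketch of the idea with classical CV (OpenCV inpainting; filenames are hypothetical): mask everything at 255 and fill it in from the surrounding unclipped pixels.

```python
# Sketch: reconstruct clipped (FFFFFF) pixels from their unclipped
# neighbors using classical inpainting - no moon-specific model needed.
import cv2
import numpy as np

img = cv2.imread("moon_clipped.png", cv2.IMREAD_GRAYSCALE)

# Mask of fully saturated pixels (the "white blob").
mask = (img == 255).astype(np.uint8) * 255

# Telea inpainting propagates intensity inward from the mask boundary.
recovered = cv2.inpaint(img, mask, 3, cv2.INPAINT_TELEA)

cv2.imwrite("moon_recovered.png", recovered)
```

Whether the result looks like the real moon is another question, but it will have texture consistent with the unclipped surroundings, which is all the video shows.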