Comment by kevincox
3 years ago
I'm not very convinced. It looks like they are doing some processing, maybe specific to the moon, but it looks more like some form of sharpening or contrast boosting than adding detail. In all of the examples it seems that there is information in the original (dark spots) that is getting boosted.
It would be interesting to see this tried on a source image that isn't the moon. Just white with a few dark spots. Does it actually add in completely new craters, or only where there are existing smudges? Or do something like half a moon photo with the other half white — does it add craters to the white side?
The OP tried to do this by changing the contrast, but I failed to see any craters appearing where there weren't already dark spots in the source photo.
It does seem strange that the OP starts from an image of the moon, and that they don't provide a still shot of the test where they modified the brightness levels to cause clipping. It doesn't really "drive the point home" as claimed.
Of course the answer to these may be that you need something moon-like enough to trigger the moon optimizations. But if that is the answer, it would be interesting to see something right at the threshold, where the processing either snaps in and out or two very similar images produce wildly different results.
> In all of the examples it seems that there is information in the original (dark spots) that are getting boosted.
That was the point of the Gaussian blur. Because the source image of the moon was blurred before the photo was taken, there was no fine detail left to boost. The Gaussian blur destroys fine detail, by design.
The image enhancement applied by the Samsung phone is adding detail where there was none originally — not just detail that might be buried in the optics somewhere, but detail that was not there at all. It involves a computational model of what the target (here the moon) "should" look like, and it fills in guesses for the blurry parts.
No, Gaussian blur does not "destroy" fine detail. It is still there; it's just that the "volume" of it is turned down. You can put all the fine detail back by applying the inverse filter.
Gaussian blur is reversible with deconvolution; however, that is almost certainly not what is happening here, as it's fairly computationally expensive.
It may not be expensive if it’s being approximately done by a neural network that implicitly learned to do it when sharpening images.
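To make the reversibility point concrete: here is a minimal numpy sketch (not Samsung's pipeline — the image, sigma, and regularization constant are all made up for illustration) that blurs a toy "moon" with a Gaussian kernel and then largely undoes it with a regularized inverse filter in the frequency domain:

```python
import numpy as np

def gaussian_kernel(size, sigma):
    # centered 2-D Gaussian kernel, normalized to sum to 1
    ax = np.arange(size) - size // 2
    xx, yy = np.meshgrid(ax, ax)
    k = np.exp(-(xx**2 + yy**2) / (2 * sigma**2))
    return k / k.sum()

def blur(img, psf):
    # circular convolution via FFT: blurring multiplies the spectra
    return np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(np.fft.ifftshift(psf))))

def deconvolve(blurred, psf, eps=1e-3):
    # regularized inverse filter: divide the spectra; eps keeps the
    # division stable where the Gaussian's spectrum is near zero
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.fft.fft2(blurred)
    F = G * np.conj(H) / (np.abs(H) ** 2 + eps)
    return np.real(np.fft.ifft2(F))

# toy "moon": a flat bright disc of ones with one sharp dark crater
img = np.ones((64, 64))
img[30:34, 30:34] = 0.0

psf = gaussian_kernel(64, sigma=1.5)
blurred = blur(img, psf)
restored = deconvolve(blurred, psf)
```

The restored image is much closer to the original than the blurred one, which is the commenter's point: the blur attenuates fine detail rather than erasing it outright. The catch is the `eps` term — frequencies the Gaussian has pushed below the noise floor cannot be recovered, so in a real photo (with sensor noise and quantization) the "turned down volume" is genuinely lost past some cutoff.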
But I don't see it adding any detail over the blur. It really just looks like it is boosting contrast a bit on top of the blur. I don't see anything like ridges or defined features appearing.