Comment by pixelesque

3 hours ago

It's possible to save RAW files (mostly) unprocessed with iPhones, either via built-in functionality (on Pro models) or via apps like Halide.

But the aggressiveness of the de-noising in the native JPG/HEIF images is really unfortunate if you want to view them on a screen larger than the phone's. The amount of detail lost (other than in areas like people's faces, where the phone knows to specialise) can be very considerable.

I'd really like a way to dial that aggressiveness down a fair bit, even at the cost of more noise/grain and larger files (the extra noise compresses less well).

Another thing is the amount of lens flare you can get when shooting towards the sun for sunsets/sunrises, or at other large, bright light sources. With very small lens elements, it's understandable from a physics perspective that suppressing reflections and inter-reflections is very difficult on such a small surface area (even with special coatings to reduce the Fresnel reflection ratios). But if you care about image quality and want to view images on a screen larger than the phone that took them, larger-format cameras still have some benefit despite the inconvenience of their size and weight (looks at 5D Mk IV on shelf).
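To put the coating point in rough numbers, here's a back-of-the-envelope sketch (my own illustration; the refractive indices are typical textbook values, not from the comment). An uncoated air-glass surface reflects about 4% at normal incidence, and even an ideal single quarter-wave MgF2 coating only brings that to roughly 1.3% at its design wavelength. Multiply by the many surfaces in a multi-element lens and there's plenty of stray light left to bounce around:

```python
# Fresnel reflectance at normal incidence: R = ((n1 - n2) / (n1 + n2))**2
def fresnel_r(n1: float, n2: float) -> float:
    return ((n1 - n2) / (n1 + n2)) ** 2

n_air, n_glass = 1.0, 1.52  # typical crown glass
r_uncoated = fresnel_r(n_air, n_glass)

# Ideal single quarter-wave anti-reflection coating at its design wavelength:
# R = ((n_air * n_glass - n_coat**2) / (n_air * n_glass + n_coat**2))**2
n_coat = 1.38  # MgF2
r_coated = ((n_air * n_glass - n_coat**2) / (n_air * n_glass + n_coat**2)) ** 2

print(f"uncoated: {r_uncoated:.1%}, coated: {r_coated:.1%}")  # ~4.3% vs ~1.3%
```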

I wish there were a middle ground between what the Android/Pixel camera saves as raw and the in-camera JPEG. Sometimes I have a few quibbles with the JPEG, and what I'd like to do is edit the raw file, but starting from something close to the JPEG. Unfortunately, the starting point you get from raw is hideous, and it's never clear how to begin. I don't think I've ever got an acceptable result trying to edit raw photos from my Pixel.

  • For Android, you can sort of get some of this with Snapseed. I occasionally use it, and it's "ok". I'm more frustrated by the fact that my preferred RAW editor (DxO) doesn't handle Android's DNG files. For me, at least, editing raw images on a phone screen is just not tolerable.

  • In other words, you want either your camera app to pick the initial tweaks so you can continue in an external editor (not going to happen; RAW editing software is incompatible by design), or your editing software to pick initial tweaks that "look good" (that depends on your software). In RAW mode, Google Camera's output is photometrically correct, even when it stacks multiple frames or denoises them. That's the only approach that makes sense; any other RAW camera app, or an actual dedicated camera, does the same.

It's strange that in the age of AI, denoisers are still so bad. It's basically impossible to photograph snowfall in winter, because the denoiser will remove 90% of the snowflakes. Machine learning models are already used to denoise ray-traced graphics with substantially improved results, so why aren't cameras using ML denoisers yet, at least for still images? Or do they already use them, and the quality is still bad for some other reason?

  • (As someone who worked closely with pathtracing renderers and de-noisers, I think I can answer this :) )

    It's mostly because de-noisers in the VFX/CG ray-tracing/path-tracing space almost always rely on extra outputs/AOVs, things like 'albedo' (diffuse reflectance), normals, world position, etc., to help guide them.

    So they can often 'cheat' a bit and know where the edges of things are (because, say, the object-ID AOV changes; minus pixel filtering, which complicates things a bit).

    They can also 'cheat' in other ways, by mixing back in some of the diffuse texture detail that the denoiser might have removed from the 'albedo' AOV channel.

    Cameras don't really have anything to guide them, so they have to guess. And they often seem to use very primitive methods, like bilateral filters (or at least something that looks very similar), which don't work very well.

    Portrait modes on phones can use depth sensors to help a bit, if the phone has them, but that doesn't really work for things like hair strands; it's mostly useful for the depth-based blurring that fakes shallow depth of field.

    • Yeah, but surely ML models would at least work better than analytic algorithms. After all, when looking at a noisy picture, our brains are pretty good at distinguishing detail from noise, so it's not clear to me why an ML model couldn't reach denoising performance similar to the human brain's, even if it doesn't match the "cheating" denoisers used in ray tracing.

      2 replies →
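The bilateral-filter point above is easy to sketch. This toy version (my own illustration, not any vendor's actual pipeline) weights each neighbour by spatial distance and by intensity similarity: strong edges survive because dissimilar pixels get near-zero weight, but any detail whose contrast sits below `sigma_range` is treated exactly like noise and averaged away, which is the failure mode being described:

```python
import numpy as np

def bilateral_filter(img, radius=2, sigma_space=2.0, sigma_range=0.1):
    """Naive bilateral filter on a 2D grayscale image with values in [0, 1]."""
    h, w = img.shape
    pad = np.pad(img, radius, mode="reflect")
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    spatial = np.exp(-(ys**2 + xs**2) / (2 * sigma_space**2))  # distance weight
    out = np.empty_like(img)
    for y in range(h):
        for x in range(w):
            patch = pad[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            # intensity-similarity weight: dissimilar pixels barely contribute
            rng_w = np.exp(-((patch - img[y, x]) ** 2) / (2 * sigma_range**2))
            weights = spatial * rng_w
            out[y, x] = np.sum(weights * patch) / np.sum(weights)
    return out

# Noisy step edge: the 0.6-contrast edge is preserved, the low-level noise is not.
rng = np.random.default_rng(0)
img = np.where(np.arange(32) < 16, 0.2, 0.8)[None, :].repeat(32, axis=0)
noisy = img + rng.normal(0.0, 0.05, img.shape)
smooth = bilateral_filter(noisy)
```

A snowflake against a dark sky is a high-frequency, low-support detail with no guiding AOV to vouch for it, so once the filter (or the tuning around it) is aggressive enough, it goes the way of the noise.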

  • Are we still talking about smartphone cameras? If so, the apps already rely heavily on much more advanced computational photography than your average photo editor can manage, including but not limited to ML denoisers. The problem is that such apps are typically optimized for the "average case" and are as automated as possible, so they either remove snow, rain, and haze intentionally, or lose small moving particles as a result of stacking. That said, snow and rain are usually possible to capture in the apps that attempt to determine the scene type, or that have specific modes for it.
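The stacking point can be demonstrated with a toy sketch (my own, not any camera app's actual pipeline). Averaging eight frames of a static scene cuts sensor noise by roughly sqrt(8), but a snowflake that appears in only one frame is attenuated by a factor of 8 as well, fading almost into the background:

```python
import numpy as np

rng = np.random.default_rng(1)
scene = np.full((64, 64), 0.3)  # static background
frames = []
for _ in range(8):
    frame = scene + rng.normal(0.0, 0.05, scene.shape)  # per-frame sensor noise
    # a few "snowflakes": bright pixels at new positions in every frame
    ys = rng.integers(0, 64, size=5)
    xs = rng.integers(0, 64, size=5)
    frame[ys, xs] = 1.0
    frames.append(frame)

stacked = np.mean(frames, axis=0)
# A flake present in one frame now contributes (1.0 + 7 * ~0.3) / 8, i.e. about
# 0.39 against a 0.3 background: barely above the remaining noise floor.
```

The same averaging that makes the static scene cleaner is exactly what erases anything that moves between frames.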