Researchers develop a camera that can focus on different distances at once

4 days ago (engineering.cmu.edu)

It's a neat new idea to selectively adjust the focus distance for different regions of the scene!

- Processing: while there is no post-processing, it needs scene depth information, which requires precomputation, segmentation, and depth estimation. So it's not a one-shot technique, and quality depends on the computational depth estimates being good.

- No free lunch: the optical setup has to trade away some light for this cool effect to work. Beyond the limitations of the prototype, how much loss is expected in theory? And how does this compare to a regular camera simply stopped down to a smaller aperture? f/36 seems an excessive choice for that comparison (see the rough sketch at the end of this comment).

- Resolution: what resolution has actually been achieved? (Maybe not the sensor's full 12 megapixels? For practical or theoretical reasons?) And what depth range can the prototype capture? One test scene is a "photo of the Paris Arc de Triomphe displayed on a screen"; the real depth range is suspiciously omitted.

- How does the bokeh look when out of focus? At the edge of an object? Introducing weird or unnatural artifacts would seriously limit acceptance.

Don't get me wrong - nice technique! But for my taste, the paper omits some fundamental properties.
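
On the light-loss point, a rough back-of-the-envelope comparison (my own sketch, not from the paper): exposure scales with aperture area, i.e. as 1/N^2 for f-number N, so a conventional lens stopped down to f/36 for deep focus pays dearly in light.

    # Relative light gathered at two f-numbers; exposure scales as
    # 1/N^2 (aperture area), ignoring vignetting and transmission losses.
    def relative_light(n_wide: float, n_narrow: float) -> float:
        """How many times more light the wider aperture gathers."""
        return (n_narrow / n_wide) ** 2

    print(relative_light(2.8, 36.0))  # f/2.8 vs f/36: ~165x more light
    print(relative_light(8.0, 36.0))  # even f/8 gathers ~20x more than f/36

So even if the new optics lost, say, two stops (4x), they would still come out far ahead of the f/36 baseline, which is why the choice of comparison aperture matters.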

Isn't this the Lytro camera?

  • I believe the Lytro camera was a plenoptic, or light field, camera. Light field cameras capture information about the intensity together with the direction of light emanating from a scene. Conventional cameras record only light intensity at various wavelengths.

    While conventional cameras capture a single high-resolution focal plane and light field cameras sacrifice resolution to "re-focus" via software after the fact, the CMU Split-Lohmann camera provides a middle ground, using an adaptive computational lens to physically focus every part of the image independently. This allows it to capture a "deep-focus" image where objects at multiple distances are sharp simultaneously, maintaining the high resolution of a conventional camera while achieving the depth flexibility of a light field camera without the blur or data loss.
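
    To make the "re-focus via software" point concrete, here is a toy version of the classic shift-and-add refocusing for light fields (in the spirit of the plenoptic literature, not the CMU method; the 4D array layout and the alpha parameter are my own assumptions):

      import numpy as np

      def refocus(lightfield: np.ndarray, alpha: float) -> np.ndarray:
          """Shift-and-add refocusing of a 4D light field L[u, v, y, x].

          (u, v) index the sub-aperture views, (y, x) the pixels. Each
          view is shifted in proportion to its offset from the aperture
          center and alpha (which picks the virtual focal plane), then
          all views are averaged.
          """
          U, V, H, W = lightfield.shape
          out = np.zeros((H, W))
          for u in range(U):
              for v in range(V):
                  du, dv = u - (U - 1) / 2, v - (V - 1) / 2
                  shift = (int(round(alpha * du)), int(round(alpha * dv)))
                  out += np.roll(lightfield[u, v], shift, axis=(0, 1))
          return out / (U * V)

    The resolution loss falls out of this directly: the sensor's pixels are split among the U x V views, so each refocused image has only a fraction of the native resolution - the trade-off the Split-Lohmann design avoids.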

    Something I find interesting is that while holograms and the CMU camera both manipulate the "phase" of light, they do so for opposite reasons: a hologram records phase to recreate a 3D volume, whereas the CMU camera modulates phase to fix a 2D image.
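
    On the phase point: as I understand it, the "Lohmann" in the name refers to the Lohmann/Alvarez varifocal lens, where two cubic phase plates shifted laterally against each other add up to a quadratic (lens-like) phase whose focal power is proportional to the shift. The algebra is easy to check numerically (a toy sketch; the strength a and shift d are arbitrary):

      import numpy as np

      a, d = 0.5, 0.2               # cubic-plate strength, lateral shift
      x = np.linspace(-1, 1, 1001)

      # Two cubic phase plates, displaced by +d and -d.
      plate1 = a * (x + d) ** 3
      plate2 = -a * (x - d) ** 3

      # Their sum is exactly quadratic: 6*a*d*x^2 + 2*a*d^3, i.e. a
      # lens whose focal power scales linearly with the shift d.
      assert np.allclose(plate1 + plate2, 6 * a * d * x**2 + 2 * a * d**3)

    The "split", as far as I can tell, is that the two plates are separated so that a modulator in between can apply a different effective shift, and hence a different focal power, to each region of the image.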

  • I remember Lytro. There was a lot of fanfare behind that company and then they fizzled. They had a lauded CEO/founder and their website demonstrated clearly how the post-focus worked. It felt like they were going to be the next camera revolution. Their rise and demise story would make a good Isaacson-style documentary.

The paper has some more useful examples:

https://imaging.cs.cmu.edu/svaf/static/pdfs/Spatially_Varyin...

As soon as I saw the headline, I began thinking about microphotography: no more blurry microbes! I could get excited for something like this.

I wonder if this camera might somehow record depth information, or be modified to do such a thing.

That would make it really useful, maybe replacing camera+lidar.
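
For what it's worth, a focus-tunable camera can already recover depth the classic depth-from-focus way: sweep the focus, score per-pixel sharpness, take the argmax. A toy sketch (my own, assuming a pre-captured focal stack with known focus distances):

    import numpy as np

    def depth_from_focus(stack: np.ndarray, distances: np.ndarray) -> np.ndarray:
        """Per-pixel depth from a focal stack S[k, y, x].

        Uses the Laplacian magnitude as a sharpness score and assigns
        each pixel the focus distance at which it is sharpest.
        """
        scores = []
        for img in stack:
            # Discrete Laplacian: in-focus pixels respond strongly.
            lap = (np.roll(img, 1, 0) + np.roll(img, -1, 0)
                   + np.roll(img, 1, 1) + np.roll(img, -1, 1) - 4.0 * img)
            scores.append(np.abs(lap))
        best = np.argmax(np.stack(scores), axis=0)  # sharpest slice per pixel
        return distances[best]                      # map slice index -> distance

Lidar would still win on textureless surfaces, though, where a sharpness score has nothing to lock onto.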

  • It even requires depth information:

    While this method has no post-processing, it requires a pre-processing step to pre-capture the scene, segment it, estimate depth, and compute the depth map.
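
    Roughly, that pre-processing could look like the following (a sketch of the pipeline as I read it; capture_preview, segment, and estimate_depth are hypothetical stand-ins, not the authors' code):

      import numpy as np

      def build_focus_map(capture_preview, segment, estimate_depth) -> np.ndarray:
          """Sketch of the pre-processing: preview capture -> per-pixel focus map.

          All three callables are hypothetical stand-ins for whatever the
          real system uses (e.g. a monocular depth network for depth).
          """
          preview = capture_preview()        # cheap pre-capture of the scene
          regions = segment(preview)         # label map, one id per object
          depth = estimate_depth(preview)    # per-pixel depth estimate
          focus_map = np.zeros_like(depth)
          # One focus distance per segmented object: median depth of its pixels.
          for region_id in np.unique(regions):
              mask = regions == region_id
              focus_map[mask] = np.median(depth[mask])
          return focus_map                   # drives the spatially varying focus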