Comment by dTal

5 years ago

A large lens captures information over an area, and so to a certain extent can "see around" out of focus objects. A selective blur of a fully focused scene captured from a single viewpoint (i.e. a small lens) can only approximate this effect, because it simply doesn't have access to the same information. Even with a perfect depth map, you still don't know what's behind occluded objects.

If instead of resolving points of light on the image sensor you use a group of pixels to resolve an entire tiny image, you can effectively also see around things. You end up with many small sections of the large image, each captured from a slightly different angle: the section at the far left of the frame sees a different angle than the section at the far right. This is exactly what the Lytro camera did, and it's why you can take the picture first and focus later. Of course, you sacrifice overall image resolution quite severely:

* https://www.researchgate.net/figure/a-b-The-first-and-second...
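The "focus later" trick described above can be sketched as shift-and-add refocusing: shift each sub-aperture view by an amount proportional to its position on the aperture, then average. Points on the chosen focal plane align across views and stay sharp; everything else blurs. A minimal sketch, assuming a hypothetical layout where the light field has already been decoded into per-view images (the function name, array shapes, and `alpha` parameterization are illustrative, not Lytro's actual pipeline):

```python
import numpy as np

def refocus(subaperture_views, view_coords, alpha):
    """Synthetic refocusing by shift-and-add over sub-aperture views.

    subaperture_views: (N, H, W) array, one small image per viewpoint
        (hypothetical decoded layout, for illustration).
    view_coords: (N, 2) array of each view's (u, v) offset on the
        aperture plane, in pixels of disparity per unit alpha.
    alpha: refocus parameter; 0 keeps the nominal focal plane,
        other values move it nearer or farther.
    """
    n, h, w = subaperture_views.shape
    out = np.zeros((h, w), dtype=np.float64)
    for img, (u, v) in zip(subaperture_views, view_coords):
        # Shift each view opposite to its aperture offset, scaled by
        # alpha, then accumulate: features on the chosen plane align.
        dy, dx = int(round(alpha * v)), int(round(alpha * u))
        out += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    return out / n
```

The resolution sacrifice mentioned above is visible in the shapes: N views of H×W pixels consume N·H·W sensor pixels but yield only an H×W refocused output.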

  • Well yes, but you're still going to need a sensor as large as the aperture of the lens you want to simulate, which makes it a non-starter for phones.