Comment by foo_barrio
5 years ago
If, instead of resolving individual points of light on the image sensor, you use a group of pixels to resolve an entire tiny image, you can effectively also see around things. You end up with a picture made of many small sections of the large image, each captured from a slightly different angle: the tiny image on the far left sees a different angle than the one on the far right. This is exactly what the Lytro camera did, and it's why you can take the picture first and focus later. Of course, you sacrifice overall image resolution quite severely (a rough sketch of the refocusing idea follows the link below):
* https://www.researchgate.net/figure/a-b-The-first-and-second...
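Roughly, the "focus later" trick boils down to shifting each per-microlens view in proportion to its offset from the aperture centre and averaging them all. Here's a minimal numpy sketch of that shift-and-sum idea; the function name, the [u, v, y, x] array layout, and the alpha parameter are just my illustration, not Lytro's actual pipeline:

```python
import numpy as np

def refocus(lightfield, alpha):
    """Synthetic refocusing by shift-and-sum (a rough sketch, not Lytro's real code).

    lightfield: 4D array indexed as [u, v, y, x] -- one (y, x) sub-image
                per (u, v) viewpoint behind the microlens array.
    alpha:      relative position of the synthetic focal plane; 1.0 keeps
                the captured focus, other values refocus nearer or farther.
    """
    U, V, H, W = lightfield.shape
    out = np.zeros((H, W))
    for u in range(U):
        for v in range(V):
            # Each viewpoint is shifted in proportion to its offset from the
            # aperture centre, then all shifted views are averaged together.
            du = (u - U / 2) * (1 - 1 / alpha)
            dv = (v - V / 2) * (1 - 1 / alpha)
            out += np.roll(np.roll(lightfield[u, v], int(round(du)), axis=0),
                           int(round(dv)), axis=1)
    return out / (U * V)
```

Objects whose views line up after the shift come out sharp; everything else gets averaged into blur, which is the synthetic defocus.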
Well, yes, but you're still going to need a sensor as large as the aperture of the lens you want to simulate, which makes it a non-starter for phones.
Yeah, this type of image recording has huge compromises, but I personally find it really cool.
So do I! If I had a bit more photography budget I'd try them out.
Also, I'm excited about cameras with dual/quad-pixel AF, which are kind of a hybrid between light field and traditional cameras. I wonder what kind of sorcery one would be able to do with the light field data in those cameras!
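For context, each dual-pixel photosite gives you two samples that look through opposite halves of the lens aperture, so the pair of sub-images behaves like a tiny two-view light field. A toy sketch of pulling a coarse disparity (defocus) cue out of such a pair, assuming the two sub-images could be read out as left/right grayscale arrays; the function name and the simple block matching are my own illustration, not any vendor's algorithm:

```python
import numpy as np

def dual_pixel_disparity(left, right, max_shift=4, patch=8):
    """Coarse per-patch disparity between dual-pixel sub-images (toy sketch).

    left, right: 2D grayscale sub-images formed by the left/right halves of
                 the photodiode pairs -- two views through opposite halves
                 of the aperture. Larger |disparity| means farther from the
                 focal plane, which is the cue phase-detect AF uses.
    """
    H, W = left.shape
    disp = np.zeros((H // patch, W // patch))
    for i in range(0, H - patch, patch):
        for j in range(0, W - patch, patch):
            ref = left[i:i + patch, j:j + patch]
            best_cost, best_shift = np.inf, 0
            # Search a small horizontal shift range for the best patch match.
            for s in range(-max_shift, max_shift + 1):
                jj = j + s
                if jj < 0 or jj + patch > W:
                    continue
                cost = np.sum((ref - right[i:i + patch, jj:jj + patch]) ** 2)
                if cost < best_cost:
                    best_cost, best_shift = cost, s
            disp[i // patch, j // patch] = best_shift
    return disp
```

The baseline between the two views is only half the aperture, so the depth cue is weak compared to a Lytro-style microlens array, but it's essentially the same principle in miniature.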