
Comment by ttoinou

1 day ago

But if you had an algorithm that changed the directions of rays, wouldn't the resulting image implicitly correspond to a different camera position (closer or farther away)? Unless you do some kind of psychedelic deformation.

Anyway, I'd say you're technically correct, but you might miss some angles and end up with holes in the resulting image. Then again, with Gaussian splats and AI we could fill in those holes easily now.

In practice, you might struggle to do it well, but in principle, it could be a gigantic image sensor with no lens but a collimator on each pixel. You can angle the collimators to collect rays that would otherwise end up at the far-away camera.
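A minimal sketch (my own, not from the thread) of why selecting nearly parallel rays reproduces the far-away camera's look: with a pinhole model, an object at depth z behind the focus plane images at height f·h/(d+z). If you scale the focal length with the camera distance d so the foreground stays the same size, the size ratio between near and far objects approaches 1 as d grows, which is exactly the telephoto "compression" the angled collimators would mimic.

```python
# Pinhole projection sketch: image height of an object of height h at
# extra depth z, seen from a camera at distance d with focal length f.
def image_height(h, z, d, f):
    return f * h / (d + z)

for d in [1.0, 10.0, 1000.0]:
    f = d  # scale focal length so the object at z = 0 keeps its on-sensor size
    near = image_height(1.0, 0.0, d, f)  # object on the focus plane
    far = image_height(1.0, 5.0, d, f)   # object 5 units further away
    # far/near = d/(d+5): approaches 1 as d grows, i.e. rays become
    # parallel and the depth "compression" of a distant camera appears
    print(f"d={d}: near={near:.3f}, far/near={far / near:.3f}")
```

In the limit d → ∞ the rays are parallel (an orthographic view), which is the ray bundle the per-pixel collimators would be angled to collect.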

Also, satellites photographing the Earth do it by moving the camera, and they can produce compression effects beyond what you'd get just because of their distance.

  • For satellites, are you talking about photographing the same patch of the Earth's surface from different angles as the satellite orbits?