Comment by pbronez

1 day ago

The linked paper describes a pipeline that starts with “point cloud from SfM” so they’re assuming away this problem at the moment.

Is it possible to handle SfM out of band? For example, by precisely measuring the location and orientation of the camera?

The paper’s pipeline includes a stage that identifies the in-focus area of an image. Perhaps you could use that to partition the input images: use only the in-focus areas for SfM, perhaps supplemented by out-of-band pose information, then leverage the whole image for training the splat. A rough sketch of that partitioning is below.
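If you wanted to prototype that without the paper's own focus-detection stage, a crude stand-in is a per-block sharpness mask (variance of the Laplacian). This is just a sketch: the block size and threshold are made-up values you would have to tune per scene, and feeding the masks to SfM assumes your tool accepts per-image masks (COLMAP's feature extractor can, detecting features only where the mask is white).

```python
import cv2
import numpy as np

def in_focus_mask(image_path, block=64, threshold=100.0):
    """Crude focus map: mark blocks whose variance-of-Laplacian exceeds
    a threshold as 'in focus'. block and threshold are guesses to tune."""
    gray = cv2.imread(image_path, cv2.IMREAD_GRAYSCALE)
    h, w = gray.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    for y in range(0, h, block):
        for x in range(0, w, block):
            patch = gray[y:y + block, x:x + block].astype(np.float64)
            if cv2.Laplacian(patch, cv2.CV_64F).var() > threshold:
                mask[y:y + block, x:x + block] = 255  # white = usable for SfM
    return mask

# Hypothetical usage: write the mask next to the image so the SfM step
# (e.g. COLMAP via --ImageReader.mask_path) only extracts features there,
# while splat training still sees the full frames.
# cv2.imwrite("masks/IMG_0001.jpg.png", in_focus_mask("images/IMG_0001.jpg"))
```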

Overall this seems like a slow journey to building end-to-end model pipelines. We’ve seen that in a few other domains, such as translation. It’s interesting to see when specialized algorithms are appropriate and when a unified neural pipeline works better. I think the main determinant is how much benefit there is to sharing information between stages.

You can definitely feed camera intrinsics (lens, sensor size, …) and extrinsics (position, rotation, …) into the SfM. While the intrinsics are very useful, the extrinsics are not actually that helpful. There is no way to measure the rotation accurately enough to get subpixel precision. The position can be useful as an initial guess, but I found it more hassle than it was worth. If the images track well and have enough overlap, you can get exact tracking out of them without dealing with extrinsics. If they don't track well, extrinsics won't save you. That was at least my experience.
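For the intrinsics you don't even need a calibration target to get a usable starting point; a rough pinhole model from the lens and sensor specs is usually enough to seed the solver, which then refines it. A minimal sketch, assuming a centered principal point and no distortion (the 35 mm full-frame numbers are just a hypothetical example):

```python
import numpy as np

def pinhole_intrinsics(focal_mm, sensor_w_mm, sensor_h_mm, width_px, height_px):
    """Approximate pinhole intrinsics from lens focal length and sensor size.
    Assumes principal point at the image center and no distortion; an SfM
    solver would normally refine these values further."""
    fx = focal_mm * width_px / sensor_w_mm    # focal length in pixels (x)
    fy = focal_mm * height_px / sensor_h_mm   # focal length in pixels (y)
    cx, cy = width_px / 2.0, height_px / 2.0  # assumed principal point
    return np.array([[fx, 0.0, cx],
                     [0.0, fy, cy],
                     [0.0, 0.0, 1.0]])

# Hypothetical example: 35 mm lens on a full-frame (36 x 24 mm) sensor, 6000 x 4000 px
K = pinhole_intrinsics(35.0, 36.0, 24.0, 6000, 4000)
```

Passing something like this as the initial camera model (and letting the SfM refine it) worked far better for me than trying to pin down extrinsics up front.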