Comment by markisus

2 days ago

I had to make a lot of concessions to make this work in real time. There is no way that I know of to replicate the fidelity of the "actual" Gaussian splatting training process within the 33ms frame budget.

However, I have not baked the size or orientation into the system. Those are "chosen" by the neural net based on the input RGBD frames. The view-dependent effects are also "chosen" by the neural net, but not through an explicit radiance field. If you run the application and zoom in, you will be able to see the splats of different sizes pointing in different directions. The system has limited ability to re-adjust the positions and sizes due to the compute budget, which leads to the pixelated effect.
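To give a rough idea of what "the net chooses the splat parameters" could mean, here is a minimal hypothetical sketch (not LiveSplat's actual code, and all names are made up): a small convolutional head that maps an RGBD frame to per-pixel Gaussian parameters in a single forward pass, so size, orientation, opacity, and color are predicted rather than optimized per frame.

    # Hypothetical sketch, not the actual LiveSplat architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SplatHead(nn.Module):
        def __init__(self, hidden=32):
            super().__init__()
            self.backbone = nn.Sequential(
                nn.Conv2d(4, hidden, 3, padding=1), nn.ReLU(),
                nn.Conv2d(hidden, hidden, 3, padding=1), nn.ReLU(),
            )
            # Per-pixel outputs: 3 scale + 4 rotation (quaternion) + 1 opacity + 3 color
            self.params = nn.Conv2d(hidden, 3 + 4 + 1 + 3, 1)

        def forward(self, rgbd):                      # rgbd: (B, 4, H, W)
            feats = self.backbone(rgbd)
            out = self.params(feats)
            scale = torch.exp(out[:, 0:3])            # positive per-axis scales
            rot = F.normalize(out[:, 3:7], dim=1)     # unit quaternion -> orientation
            opacity = torch.sigmoid(out[:, 7:8])
            color = torch.sigmoid(out[:, 8:11])
            return scale, rot, opacity, color

    # Splat positions would come from unprojecting the depth channel with the
    # camera intrinsics; a rasterizer then renders the resulting Gaussians.

The point of the sketch is only that a feed-forward prediction like this fits in a fixed frame budget, whereas per-scene optimization of the same parameters does not.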

I've uploaded a screenshot from LiveSplat where I zoomed in a lot on a piece of fabric. You can see that there is actually a lot of diversity in the shape, orientation, and opacity of the Gaussians produced [1].

[1] https://imgur.com/a/QXxCakM