Comment by echelon

1 day ago

OP, this is incredible. I worry that people might see a "glitchy 3D video" and not understand the significance of this.

This is getting unreal. They're becoming fast and high fidelity. Once we get better editing capabilities and can shape the Gaussian fields, this will become the prevailing means of creating and distributing media.

Turning any source into something 4D volumetric that you can easily mold like clay, relight, reshape. A fully interactable and playable 4D canvas.

Imagine if the work being done with diffusion models could read and write from Gaussian fields instead of just pixels. It could look like anything: real life, Ghibli, Pixar, whatever.

I can't imagine where this tech will be in five years.

Thanks so much! Even when I was putting together the demo video I was getting a little self-critical about the visual glitches. But I agree the tech will get better over time. I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.

  • > I imagine we will be able to have virtual front row seats at any live event, and many other applications we haven't thought of yet.

    100%. And style-transfer it into steampunk or H.R. Giger or cartoons or anime. Or dream up new fantasy worlds instantaneously. Explore them, play them, shape them like Minecraft-becomes-holodeck. With physics and tactile responses.

    I'm so excited for everything happening in graphics right now.

    Keep it up! You're at the forefront!

I know enough about 3D rendering to know that Gaussian splatting's one of the Big New Things in high-performance rendering, so I understand that this is a big deal -- but I can't quantify why, or how big a deal it is.

Could you or someone else wise in the ways of graphics give me a layperson's rundown of how this works, why it's considered so important, and what the technical challenges are given that an RGB+D(epth?) stream is the input?

  • Gaussian Splatting allows you to create a photorealistic representation of an environment from just a collection of images. Philosophically, this is a form of geometric scene understanding from raw pixels, which has been a holy grail of computer vision since the beginning.

    Usually creating a Gaussian splat representation takes a long time and uses an iterative gradient-based optimization procedure. Using RGBD helps me sidestep this optimization, as much of the geometry is already present in the depth channel and so it enables the real-time aspect of my technique.
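    To make that concrete for anyone following along: below is a minimal sketch (my own hypothetical names, plain pinhole camera model, numpy only) of the idea that depth already gives you geometry for free. Each RGBD pixel is back-projected into a 3D point, and each point seeds one isotropic Gaussian sized to roughly cover a pixel at its depth. A real pipeline does much more (multi-camera registration, covariance estimation, hole filling), so treat this as an illustration of the principle, not the actual method.

    ```python
    import numpy as np

    def backproject_rgbd(rgb, depth, fx, fy, cx, cy):
        """Lift an RGBD frame into a colored 3D point cloud using the
        pinhole model: X = (u - cx) * Z / fx, Y = (v - cy) * Z / fy."""
        h, w = depth.shape
        u, v = np.meshgrid(np.arange(w), np.arange(h))
        x = (u - cx) * depth / fx
        y = (v - cy) * depth / fy
        points = np.stack([x, y, depth], axis=-1).reshape(-1, 3)
        colors = rgb.reshape(-1, 3)
        valid = points[:, 2] > 0  # drop pixels with no depth reading
        return points[valid], colors[valid]

    def init_gaussians(points, colors, fx):
        """Seed one isotropic Gaussian per point. Scaling each splat by
        z / fx makes it roughly one pixel wide at its own depth, so the
        splats tile the image without an optimization loop."""
        scales = points[:, 2:3] / fx
        return {"means": points, "colors": colors,
                "scales": np.repeat(scales, 3, axis=1)}
    ```

    The contrast with classic 3DGS is that there, the means, scales, and colors all start from a sparse SfM point cloud and are refined by thousands of gradient steps against the input images; here the depth channel hands you plausible values directly, which is what makes real-time rates plausible.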

    When you say "big deal", I imagine you are also asking about business or societal implications. I can't really speak on those, but I'm open to licensing this IP to any companies which know about big business applications :)

    • So, is there some amount of gradient-based optimization going on here? I see RGBD input, transmission, RGBD output. But, other than multi-camera registration, it's difficult to determine what processing took place between input and transmission. What makes this different from RGBD camera visualizations from 10 years ago?


    • Thanks! That makes a lot of sense, I might dig into this after work some more.

      By "big deal," I meant more for people specializing in computer graphics, computer vision, or even narrower subfields of either of those two -- a big deal from an academic interest perspective.

      Sure, this might also have implications in society and business, but I'm a nerd, and I appreciate a good nerding out over something cool, niche, and technically impressive.