Comment by yieldcrv
7 hours ago
so basically, despite the higher resource requirements (something like 10TB of data for 30 minutes of footage), the compositing is so much faster and more flexible. those assets can be deleted or moved to long-term cloud storage very quickly, and the project can move on
fascinating
I normally wouldn't have read this and watched the video, but my Claude sessions were already executing a plan
the tl;dr is that all the actors were scanned into a 3D point cloud system and then "NeRF"'d which means to extrapolate any missing data about their transposed 3D model
this was then more easily placed into the video than trying to composite and place 2D actors layer by layer
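to make the "easier to place" point concrete, here's a toy sketch (entirely my own, not the studio's pipeline; orthographic projection and plain numpy instead of the real perspective-projected GPU rasterizer): once the actor exists as renderable 3D data, compositing is just "render from the shot camera, alpha-blend over the plate"

    # Toy sketch (mine, not the article's): composite a 3D "splat" asset
    # over a background plate. Orthographic projection and plain numpy for
    # readability; real 3DGS uses perspective projection and a tiled GPU
    # rasterizer.
    from dataclasses import dataclass
    import numpy as np

    @dataclass
    class Gaussian3D:
        mean: np.ndarray    # (3,) position; x/y in pixels, z is depth
        cov2d: np.ndarray   # (2, 2) projected screen-space covariance
        color: np.ndarray   # (3,) RGB in [0, 1]
        opacity: float      # peak alpha in [0, 1]

    def composite_over_plate(gaussians, plate):
        """Alpha-blend splats over an (H, W, 3) float plate, back to front."""
        out = plate.copy()
        h, w, _ = plate.shape
        ys, xs = np.mgrid[0:h, 0:w].astype(float)
        # painter's algorithm: draw far splats first so near ones land on top
        for g in sorted(gaussians, key=lambda g: g.mean[2], reverse=True):
            inv = np.linalg.inv(g.cov2d)
            dx, dy = xs - g.mean[0], ys - g.mean[1]
            # Mahalanobis distance -> elliptical Gaussian footprint
            m = inv[0, 0] * dx**2 + 2 * inv[0, 1] * dx * dy + inv[1, 1] * dy**2
            alpha = (g.opacity * np.exp(-0.5 * m))[..., None]
            out = out * (1 - alpha) + g.color * alpha
        return out

    plate = np.zeros((64, 64, 3))  # stand-in for the filmed background
    actor = [Gaussian3D(np.array([32.0, 32.0, 5.0]),
                        np.array([[40.0, 10.0], [10.0, 20.0]]),
                        np.array([0.9, 0.6, 0.3]), 0.8)]
    frame = composite_over_plate(actor, plate)

no green screens or per-layer rotoscoping in that loop: the blend falls out of depth ordering and per-splat opacity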
Gaussian splatting is not NeRF (neural radiance field), but it is a type of radiance field method, and it supports novel view synthesis. The difference is that Gaussian splatting uses an explicit representation (a point cloud of 3D Gaussians you can manipulate directly), whereas NeRF stores the scene implicitly in a neural network's weights and has to be queried by inference.
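To make the distinction concrete, here's a rough toy sketch (my own, with made-up names and values, not the actual 3DGS or NeRF code):

    # Explicit (Gaussian splatting): the scene is literally a bag of
    # Gaussian parameters you can index, edit, or delete directly.
    import numpy as np

    n = 1000
    splats = {
        "means":     np.random.rand(n, 3),                 # positions
        "covs":      np.tile(np.eye(3) * 0.01, (n, 1, 1)), # shapes/orientations
        "colors":    np.random.rand(n, 3),                 # real 3DGS stores spherical harmonics
        "opacities": np.random.rand(n),
    }
    splats["means"][0] += 0.1  # moving a splat is a direct data edit

    # Implicit (NeRF): the scene lives in trained network weights, so you
    # can only read it out by querying the model point by point.
    def nerf_query(position, view_dir):
        """Stand-in for a trained MLP: (xyz, view dir) -> (rgb, density)."""
        rgb = np.clip(np.sin(position * 3.0) * 0.5 + 0.5, 0.0, 1.0)  # fake values
        density = float(np.exp(-position @ position))                # fake values
        return rgb, density

    rgb, density = nerf_query(np.array([0.1, 0.2, 0.3]), np.array([0.0, 0.0, 1.0]))

That explicitness is why splats are easy to edit, composite, and rasterize quickly, while editing a NeRF generally means retraining or other workarounds.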
It's not a type of radiance field.
It’s literally in the name of the Gaussian splatting paper: "3D Gaussian Splatting for Real-Time Radiance Field Rendering"
https://repo-sam.inria.fr/fungraph/3d-gaussian-splatting/
> and then "NeRF"'d which means to extrapolate any missing data about their transposed 3D model
Not sure if it's you or the original article, but that's a slightly misleading summary of NeRFs.
I'm all for the better summary