Comment by darhodester
10 hours ago
I think it's because they already had proven capture hardware, harvest, and processing workflows.
But yes, you can easily use iPhones for this now.
Looks great by the way. I was wondering if there's a file format for volumetric video captures.
https://developer.apple.com/av-foundation/
https://developer.apple.com/documentation/spatial/
Edit: As I'm digging, this seems to be focused on stereoscopic video as opposed to actual point clouds. It appears applications like Cinematic mode use a monocular depth map, while the LiDAR outputs raw point cloud data.
A LiDAR point cloud from a single point of view is a monocular depth map. Unless the LiDAR in question is, like, using supernova-level gamma rays or neutrino generators for the laser part to get density and albedo volumetric data for its whole distance range.
With current technology, you just can't see the back of a thing by knowing the shape of the front side.
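To make that concrete, here's a minimal sketch (assuming a plain pinhole camera model; the intrinsics and depth values are made-up numbers, not real iPhone specs) of why the two are the same thing: unprojecting each depth pixel along its camera ray gives you the point cloud, and nothing in it describes occluded back surfaces.

```python
import numpy as np

def depth_to_point_cloud(depth, fx, fy, cx, cy):
    """Unproject a depth map (meters, HxW) into an Nx3 point cloud.

    Pinhole model: for pixel (u, v) with depth d,
        X = (u - cx) * d / fx,  Y = (v - cy) * d / fy,  Z = d.
    Every point lies on a ray from the camera, so the cloud carries
    exactly the information in the depth map: front surfaces only.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop invalid (zero-depth) pixels

# Hypothetical intrinsics at a LiDAR-ish 256x192 resolution:
depth = np.random.uniform(0.5, 5.0, size=(192, 256)).astype(np.float32)
cloud = depth_to_point_cloud(depth, fx=210.0, fy=210.0, cx=128.0, cy=96.0)
print(cloud.shape)  # (49152, 3): one point per pixel, no back surfaces
```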
Some companies have a proprietary file format for compressed 4D Gaussian splatting. For example: https://www.gracia.ai and https://www.4dv.ai.
Check this project, for example: https://zju3dv.github.io/freetimegs/
Unfortunately, these formats are currently locked behind cloud processing, so adoption is rather low.
Before Gaussian splatting, textured mesh caches would be used for volumetric video (e.g. Alembic geometry).
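For anyone curious what a mesh cache amounts to in practice: it's just geometry re-sampled every frame. Here's a toy sketch that writes one OBJ per frame with a fake deformation; real pipelines bake into Alembic (a binary scene-graph cache) rather than loose OBJs, and the file naming and deformation here are purely illustrative.

```python
import numpy as np

def write_obj(path, verts, faces):
    """Write a minimal OBJ: 'v x y z' lines, then 1-indexed 'f a b c' lines."""
    with open(path, "w") as f:
        for x, y, z in verts:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in faces:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")

# Toy deforming quad (two triangles). A production mesh cache would also
# carry UVs and per-frame textures; this only shows the geometry sequence.
faces = np.array([[0, 1, 2], [0, 2, 3]])
base = np.array([[0, 0, 0], [1, 0, 0], [1, 1, 0], [0, 1, 0]], dtype=float)

for frame in range(24):  # one geometry sample per frame
    t = frame / 24.0
    verts = base + [0.0, 0.0, 0.2 * np.sin(2 * np.pi * t)]  # fake deformation
    write_obj(f"mesh_cache.{frame:04d}.obj", verts, faces)
```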
Recording point clouds over time is what I mean, I guess. I'm not going to pretend to understand video compression, but could the motion-tracking aspect be done in 3D the same way it's done in 2D?
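The 2D trick you're describing is motion compensation: predict each frame from the previous one and store only a quantized residual. Here's a hedged sketch of the same idea for point clouds, under the big simplifying assumption that every frame has the same points in the same order (which real captures don't guarantee); the quantization step is an arbitrary choice.

```python
import numpy as np

def encode_sequence(frames, step=1e-3):
    """Delta-encode a point cloud sequence (list of Nx3 arrays with
    consistent point ordering): a keyframe plus quantized per-point
    motion residuals, the 3D analogue of 2D motion compensation."""
    key = frames[0]
    residuals = []
    prev = key.copy()
    for f in frames[1:]:
        q = np.round((f - prev) / step).astype(np.int16)  # quantized deltas
        residuals.append(q)
        prev = prev + q * step  # track the *decoded* frame, not the source,
                                # so quantization error never accumulates
    return key, residuals

def decode_sequence(key, residuals, step=1e-3):
    out = [key.copy()]
    for q in residuals:
        out.append(out[-1] + q * step)
    return out

# Toy sequence: 1000 points drifting slowly. The deltas are tiny, so the
# int16 residuals compress far better than raw float32 positions would.
rng = np.random.default_rng(0)
frames = [rng.uniform(-1, 1, (1000, 3))]
for _ in range(10):
    frames.append(frames[-1] + rng.normal(0, 0.002, (1000, 3)))

key, res = encode_sequence(frames)
decoded = decode_sequence(key, res)
print(np.max(np.abs(decoded[-1] - frames[-1])))  # within quantization error
```

The hard part in practice is exactly that assumption: establishing point correspondences across frames. My rough understanding is that the 4D Gaussian approaches above sidestep it by attaching motion to each primitive instead.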