Comment by sbierwagen
10 hours ago
From the article:
>Evercoast deployed a 56 camera RGB-D array
Do you know which depth cameras they used?
We (Evercoast) used 56 RealSense D455s. Our software can run with any camera input, from depth cameras to machine vision to cinema REDs, but for this, RealSense did the job. The higher-end the camera, the more expensive and time-consuming everything is. We have a cloud platform to scale rendering, but it's still more costly overall (time and money) to use high-res. We've worked hard to make even low-res data look awesome. And if you look at the aesthetic of the video (90s MTV), we didn't need 4K/6K/8K renders.
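For anyone wanting to try a similar capture at small scale: a minimal sketch of grabbing one aligned RGB-D frame from a single D455 via the official pyrealsense2 bindings. The stream modes are illustrative, and nothing here reflects Evercoast's actual multi-camera sync or processing pipeline.

```python
# Minimal sketch: one aligned RGB-D frame from a single RealSense D455.
# Stream modes are illustrative; the D455 supports 1280x720 depth at 30 fps.
import numpy as np
import pyrealsense2 as rs

pipeline = rs.pipeline()
config = rs.config()
config.enable_stream(rs.stream.depth, 1280, 720, rs.format.z16, 30)
config.enable_stream(rs.stream.color, 1280, 720, rs.format.bgr8, 30)
pipeline.start(config)

# Align the depth image to the color sensor so each RGB pixel has a depth value.
align = rs.align(rs.stream.color)

try:
    frames = align.process(pipeline.wait_for_frames())
    depth = np.asanyarray(frames.get_depth_frame().get_data())  # uint16 depth units
    color = np.asanyarray(frames.get_color_frame().get_data())  # uint8 BGR image
finally:
    pipeline.stop()
```

Scaling this to a 56-camera array would additionally require hardware sync and per-camera extrinsic calibration, which this sketch does not cover.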
You may have explained this elsewhere, but if not: what kind of post-processing did you do to upscale or refine the RealSense video?
Can you add any interesting details on the benchmarking done against the RED camera rig?
Aha: https://www.red.com/stories/evercoast-komodo-rig
So likely RealSense D455.
I was not involved in the capture process with Evercoast, but I believe I heard somewhere that they used RealSense cameras.
I recommend asking https://www.linkedin.com/in/benschwartzxr/ for accuracy.
Azure Kinect
Couldn't you just use iPhone Pros for this? I developed an app specifically for photogrammetry capture using AR and the depth sensor, since it seemed like a cheap alternative.
EDIT: I realize a phone is not on the same level as a RED camera, but I saw iPhones as a massively cheaper option than the alternatives in the field I worked in.
A$AP Rocky has a fervent fanbase that's been anticipating this album, so I'm assuming whatever record label he's signed to gave him the budget.
And when I think back to another iconic hip-hop video (iconic for that genre) that used practical effects and military helicopters chasing speedboats in the waters off Santa Monica... I bet they had change to spare.
Is there any reason to think https://thebaffler.com/salvos/the-problem-with-music doesn't apply here?
A single camera only captures the side of the object facing it. Knowing how far away the camera-facing side of a Rubik's Cube is helps if you're making educated guesses (novel view synthesis), but it won't solve the problem of actually photographing the back side.
A cube has six sides, which means you need a minimum of six iPhones around an object to capture all of them and then freely move around it. You might as well look at open-source alternatives rather than relying on Apple surprise boxes for that.
Of course, if your subject is static, such as a building, you can wave a single iPhone around it for a result comparable to more expensive rigs.
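To make the coverage argument concrete, here is a minimal numpy sketch that back-projects each camera's depth map into a shared world frame and concatenates the results. The pinhole intrinsics (fx, fy, cx, cy) and 4x4 cam-to-world matrices are assumed to come from a prior calibration; all names are illustrative.

```python
# Each depth camera only yields points for surfaces it can see; fusing several
# calibrated viewpoints is what produces a model you can freely move around.
import numpy as np

def backproject(depth_m, fx, fy, cx, cy):
    """HxW depth map (meters) -> Nx3 points in camera coordinates."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    pts = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]  # drop pixels with no depth reading

def fuse(views):
    """views: iterable of (depth_map, intrinsics dict, 4x4 cam-to-world)."""
    clouds = []
    for depth, K, T in views:
        pts = backproject(depth, K["fx"], K["fy"], K["cx"], K["cy"])
        pts_h = np.c_[pts, np.ones(len(pts))]  # homogeneous coordinates
        clouds.append((pts_h @ T.T)[:, :3])    # transform into world frame
    return np.vstack(clouds)  # one cloud covering all visible sides
```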
I think it's because they already had proven capture hardware plus harvesting and processing workflows.
But yes, you can easily use iPhones for this now.
Looks great, by the way. I was wondering: is there a file format for volumetric video captures?
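On the file-format question: there is no single dominant standard yet. Captures are often stored as a per-frame geometry sequence (e.g., PLY or OBJ files), and MPEG has standardized video-based point cloud compression (V-PCC) for this use case. Below is a minimal sketch of the naive sequence-of-PLYs approach using Open3D, where `frames` is a hypothetical iterable of Nx3 point arrays.

```python
# Minimal sketch: store volumetric video as one PLY point cloud per frame.
# `frames` is a hypothetical stand-in for whatever produces Nx3 arrays.
import os
import open3d as o3d

def save_sequence(frames, out_dir="capture"):
    os.makedirs(out_dir, exist_ok=True)
    for i, pts in enumerate(frames):
        pcd = o3d.geometry.PointCloud()
        pcd.points = o3d.utility.Vector3dVector(pts)
        o3d.io.write_point_cloud(os.path.join(out_dir, f"frame_{i:05d}.ply"), pcd)
```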
Why would they go for the cheapest option?
It was more the point that the technology is now much cheaper. The company I worked for had completely missed that while trying to develop in-house solutions.