Plays decently smooth on my M4 Max. It's probably still a long way from being a production-ready replacement for meshed environments, but I could imagine a hybrid mode where certain elements like grass and shrubbery are drawn with gaussians, perhaps with support for basic procedural animation. Great work with the playable demo!
Endless fields of grass and other things where you can make copies of a single base thing and just pass in some parameters like position, color, type, etc. are cheap to render. Making them sway or react to a body also isn't a problem.
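A minimal sketch of the instancing idea described above (all names and field choices are hypothetical, not from any particular engine): one base blade mesh is drawn many times, and only a small per-copy parameter struct varies.

```typescript
// Hypothetical per-instance parameters for instanced grass.
// A renderer would upload this array once as an instance buffer
// and issue a single instanced draw call for the whole field.
interface GrassInstance {
  position: [number, number, number];
  tint: number;      // packed RGB color variation
  type: number;      // which base blade variant to use
  swayPhase: number; // phase offset for a cheap wind sway in the vertex shader
}

// Scatter n instances over a square field using an injected RNG,
// so placement stays deterministic and testable.
function scatterGrass(n: number, fieldSize: number, rng: () => number): GrassInstance[] {
  const instances: GrassInstance[] = [];
  for (let i = 0; i < n; i++) {
    instances.push({
      position: [rng() * fieldSize, 0, rng() * fieldSize],
      tint: Math.floor(rng() * 0xffffff),
      type: Math.floor(rng() * 4),
      swayPhase: rng() * Math.PI * 2,
    });
  }
  return instances;
}
```

The sway itself then costs nothing on the CPU: the vertex shader offsets each blade by something like `sin(time + swayPhase)`, so reacting to wind or a passing body is just another per-instance parameter.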
It runs kinda well on the best computer in the world, yeah? :)
I have another data point: my ten year old ThinkPad. I get about 10 FPS. Lowering the quality doesn't seem to increase performance.
But I am amazed by what I am seeing, and amazed it runs at all!
I think it won't be long before the whole world is mapped, and "playable".
> I think it won't be long before the whole world is mapped, and "playable".
People already don't want to use VR, why would they get into/allow scanning with even less immediate value?
I agree with the spirit though, I just think rendering the world is gonna happen from a few generations of iteration on world modeling tech like World Labs/Marble.
tbh I haven't approached optimisation for this yet; I'm pretty sure it's possible to improve it further. It runs on my 2020 iPhone, though not super smoothly.
For me, the biggest issue this solves is the blank canvas paralysis problem. Artists are visual thinkers and need a little nudge in the right (art) direction. This is a great way to fill that blank sheet of paper with something that they can take and run with.
Editing Gaussian splats is still a pain in the ass from the artist's perspective. Even if you can create a good-enough first try using scanned data or generative AI, you just end up with a rough draft that you cannot polish in any way. Existing mesh-based tools allow you to edit the geometry relatively easily, since meshes are a higher-level discrete representation rather than just a point-cloud data structure.
I mean... Just take a source photo and overpaint it? Splats don't really get you closer to a workable model than concept art does.
> but I could imagine a hybrid mode where certain elements like grass and shrubbery are drawn with gaussians
Those tend to move in the wind. Animations don't work well with splats. Or with any data structure except polygon meshes.
Edit:
Responding in the parent because HN says I'm "posting too fast" and should "slow down".
> Have you seen https://www.4dv.ai/
Yeah, but this seems to be just a 3D GS video (captured from several different camera angles), similar to how an ordinary 2D video is just a series of still frames. For 3D games this would be unsuitable since animations often have to be generated on the fly based on game physics. Even for pre-baked animations the memory cost of loading each frame individually would be too inefficient. For polygon meshes you have just a single static mesh that is deformed over time.
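A back-of-envelope calculation illustrates the memory argument above. All the numbers here are illustrative assumptions, not measurements from 4DV or any real asset:

```typescript
// Rough comparison: storing every frame of a splat "video"
// vs. one static skinned mesh plus per-frame bone matrices.
const BYTES_PER_SPLAT = 32;   // assumed: quantized position, scale, rotation, color
const BYTES_PER_VERTEX = 32;  // assumed: position, normal, UV, bone weights
const splats = 500_000;       // a typical dense character-scale splat cloud
const vertices = 50_000;      // a typical game character mesh
const bones = 64;
const frames = 30 * 60;       // one minute at 30 fps

// Per-frame splats: every frame is a full copy of the point cloud.
const splatVideoBytes = splats * BYTES_PER_SPLAT * frames;

// Skinned mesh: one mesh, plus a 4x4 float matrix per bone per frame.
const skinnedMeshBytes = vertices * BYTES_PER_VERTEX + bones * 16 * 4 * frames;

const ratio = splatVideoBytes / skinnedMeshBytes; // thousands of times larger
```

Under these assumptions the per-frame splat storage comes out several thousand times larger than the mesh-plus-skeleton representation, which is why deforming a single static representation on the fly matters for games.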
> Dreams managed to animate splats on the PS4. Admittedly, not quite the same type of splats, but there is probably a middle ground here where it can be made to work
I'm pretty sure Dreams only allowed animations as translations and rotations, not something that approximates soft skeletal animations. And even translations and rotations would be problematic since 3D GS scenes rely on baked lighting which would then result in objects no longer fitting the scene.
Have you seen https://www.4dv.ai/
> Animations don't work well with splats
Dreams managed to animate splats on the PS4. Admittedly, not quite the same type of splats, but there is probably a middle ground here where it can be made to work
Thanks! Yeah, hybrid is a way forward; dynamic stuff is not easy.
I’m trying to understand from the video why this is better; it looks like normal high-resolution textures with precooked shadow maps.
It has no dynamic lighting or effects, which makes the video look like a high quality game from 2006.
Dynamic objects are still a largely unsolved problem; I just tried to approach it in this demo. Also, this particular place doesn't have reflective surfaces, but the technology supports it - check for example this splat https://superspl.at/scene/ff1d0393 or this one https://superspl.at/scene/6c822f84
This is better, but I think that a demo with more reflections and radiosity would be much more impressive
Playing in Brave on my Moto G Power 2024 was low-FPS, but as soon as I pressed the shoot button my whole screen turned into purple pixel-distortion lines and I had to restart my phone.
Question for those making splats... how do you get such large environments? I've been playing around with them a bit, and I'm finding I'm running out of memory with surprisingly little built, even on an RTX 6000. Any tips or ideas would be awesome!
I saw a comment by the author that they did it as 9 scans with Reality Capture, which were presumably then combined with some post-processing.
Look at step 2 about streamed splats.
Sort of unfortunate that one ends up putting normal meshed characters that clash with the photorealistic splat environment
Probably for the best as, well, they are being pumped with lead.
Ideally it should be 4DGS, but we are far away from that - real actor scans, etc. But somebody will do it later, I am sure.
>Going 3D at that time in history meant that the quality of the graphic would take a huge hit, as well as the rendering speed, and fewer people would be able to run it because it would require a high end computer, so it was just not worth it.
>Using 2D pre-rendered sprites means that the artists can use as many polygons, rich textures and lighting techniques as they want in 3D Studio Max, and tweak them until the sprites look perfect, and that's exactly what the user sees. You just could not approach anywhere near that quality with 3D graphics at the time. Of course things are a lot different now!
>That was during the time that The Sims was also in development. One reason The Sims was successful is that it did not try to be full 3D, and ran well on low-end computers (the old computer that little sister inherits from big brother when he upgrades to a gaming machine). It used a hybrid 2D/3D system of z-buffered sprites, with an orthographic projection constrained to four rotations, three zooms, and only the characters were rendered with polygons into the pre-rendered z-buffered scene, using DirectX's software renderer.
>I developed the character animation system and content creation tools for The Sims, and when the EA executives were reviewing the technology to decide if they should buy Maxis, to justify our approach I bought them a copy of Scott McCloud's book Understanding Comics, which explained a concept called "masking" --
"Back in the day" people were afraid that pupils would create CS (beta 6.5) maps of their schools. Gaussian Splatting would have been very convenient for that :-)
You would need a professional artist, several days, and expensive professional equipment. It's not an easy task.
Really cool. Out of curiosity, what's the per-frame cost of rendering the splat scene compared to an equivalent triangle-mesh approximation? I've been wondering when (if ever) splatting becomes the default for web-delivered 3D content vs. just a research/SIGGRAPH-paper toy. Browser support and file-size feel like the two big walls.
I think splats are good for browsers, as it's just one render call with a relatively simple shader - compare that to a modern traditional rendering pipeline with its thousands of different passes plus post-processing, shadows, etc. Effective splat sorting is a different story in browsers, though.
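To make the sorting problem concrete: splats are alpha-blended, so they must be drawn back-to-front along the view direction, and the order changes every time the camera moves. A common trick in browser viewers is a counting sort on quantized depth rather than a comparison sort, since it is O(n) and allocation-free once buffers exist. A minimal sketch (my own illustration, not any particular viewer's code):

```typescript
// Returns splat indices ordered back-to-front (largest depth first).
// Depths are quantized into buckets, counted, and written out with a
// reverse prefix sum - a single-pass counting sort.
function sortSplatsByDepth(depths: Float32Array, buckets = 65536): Uint32Array {
  const n = depths.length;
  let min = Infinity, max = -Infinity;
  for (let i = 0; i < n; i++) {
    if (depths[i] < min) min = depths[i];
    if (depths[i] > max) max = depths[i];
  }
  const scale = max > min ? (buckets - 1) / (max - min) : 0;
  const counts = new Uint32Array(buckets);
  const keys = new Uint32Array(n);
  for (let i = 0; i < n; i++) {
    keys[i] = ((depths[i] - min) * scale) | 0; // quantize depth to a bucket
    counts[keys[i]]++;
  }
  // Reverse prefix sum so farther buckets get earlier output slots.
  let offset = 0;
  for (let b = buckets - 1; b >= 0; b--) {
    const c = counts[b];
    counts[b] = offset;
    offset += c;
  }
  const order = new Uint32Array(n);
  for (let i = 0; i < n; i++) order[counts[keys[i]]++] = i;
  return order;
}
```

In a real viewer this runs in a worker (or on the GPU) against depths projected along the current view axis, and the resulting index buffer is handed to the instanced draw call.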
This is a really neat bridge between “looks cool” and “feels like you’re there”. Inferring real life properties like lighting is a cool trick and just the beginning I’m sure. I’m excited to explore new and dynamic worlds and bring the AAA experience closer to something you can build yourself.
This is so cool. I have an Insta360 camera lying around. Is it possible to use it to create a Gaussian splat map with open-source software?
I heard Insta360 is good for this.
Strikingly similar to the never completed Unrecord from 3 years ago: https://www.youtube.com/watch?v=IK76q13Aqt0
There's another game, Bodycam, released on Steam - https://store.steampowered.com/app/2406770/Bodycam/ - but it's Unreal Engine 5. This demo runs in the browser on my old 2020 iPhone.
This tech reminds me of that Source Code(2011) movie for some reason.
There's a game mode in Cyberpunk 2077 like this too.
Oh yea.. I remember that now...
The mission "The Information", based on a tech named "Braindance". Thanks for reminding me of this - it was a crazy experience the first time I played.
This also now reminds me of Total Recall(2012) movie, especially that "Rekall" scene.
Really cool looking stuff.
I wonder how computationally expensive it would be to add ray tracing for the non-splat surfaces (not even mutually).
I've seen demos with splats and raytracing - this one, for example: https://gaussiantracer.github.io/
How practical would it be to include LIDAR in the initial real-world environmental scan to get (or at least seed/constrain with real data samples) an even better collision mesh?
The XGrid Portal camera has LiDAR, and this scan was made with it, as I remember.
I'm looking forward to seeing what will happen when gaussian splatting can be combined with DLSS 5. Gaussian splatting has a lot of potential in video games yet to be realised.
This for some reason reminded me of the "Killerspiele" debate [1] we had in Germany after a dramatic school shooting. The shooter had previously built a map of the school in Counter-Strike. With this, it's not a long stretch to having a realistic map of a school... which would have given him a better rating than the one he got for his map: "I'd like to see the school that actually has lighting like this." [2] Hopefully this tech will never be used for something like this.
[1] https://de.wikipedia.org/wiki/Killerspiel [2] https://www.spiegel.de/netzwelt/web/schuelerhobby-mapping-me...