Compute shaders, which can draw points faster than the native rendering pipeline. Although I have to admit that WebGPU implements things so poorly and restrictively that this benefit ends up being fairly small. Storage buffers, which come along with compute shaders, are still fantastic from a dev-convenience point of view, since they allow implementing vertex pulling, which is much nicer to work with than vertex buffers.
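For reference, a minimal sketch of what vertex pulling looks like in WebGPU (the buffer layout, entry point names, and pipeline setup here are illustrative assumptions, not code from any particular renderer): instead of declaring vertex attributes, the vertex shader indexes a storage buffer with the built-in vertex index, so the pipeline needs no vertex buffer layout at all.

```ts
// Hypothetical vertex-pulling setup: positions live in a storage buffer that the
// vertex shader indexes by vertex_index, so no vertex buffer layout is declared.
const shaderCode = /* wgsl */ `
struct Vertex { pos: vec3<f32>, pad: f32 };  // 16-byte stride

@group(0) @binding(0) var<storage, read> vertices: array<Vertex>;

@vertex
fn vs_main(@builtin(vertex_index) vi: u32) -> @builtin(position) vec4<f32> {
  // "Pull" the vertex out of the storage buffer instead of using vertex attributes.
  return vec4<f32>(vertices[vi].pos, 1.0);
}

@fragment
fn fs_main() -> @location(0) vec4<f32> {
  return vec4<f32>(1.0, 1.0, 1.0, 1.0);
}`;

function makePointPipeline(device: GPUDevice, format: GPUTextureFormat): GPURenderPipeline {
  const module = device.createShaderModule({ code: shaderCode });
  return device.createRenderPipeline({
    layout: "auto",
    vertex: { module, entryPoint: "vs_main" },  // note: no `buffers` array here
    fragment: { module, entryPoint: "fs_main", targets: [{ format }] },
    primitive: { topology: "point-list" },
  });
}
```

At draw time you'd bind the storage buffer in a bind group at binding 0 and call `draw(vertexCount)`; the shader does the indexing that a vertex buffer layout would otherwise describe.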
For Gaussian splatting, WebGPU is great since it allows implementing sorting via compute shaders. WebGL-based implementations sort on the CPU, which means the "correct" front-to-back blending order lags a few frames behind.
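As a hedged sketch of how that sort can live on the GPU: one compute dispatch per (k, j) step of a bitonic network over the splat depth keys, swapping the splat indices alongside them. The buffer names, bindings, and the assumption of a power-of-two element count are mine, not taken from any particular splatting implementation.

```ts
// Illustrative bitonic compare-and-swap step: depth keys and splat indices are
// permuted together. One dispatch is issued per (k, j) step of the network;
// the element count is assumed to be padded to a power of two.
const sortStepWGSL = /* wgsl */ `
struct Params { k: u32, j: u32 };

@group(0) @binding(0) var<uniform> params: Params;
@group(0) @binding(1) var<storage, read_write> keys: array<u32>;   // quantized view-space depths
@group(0) @binding(2) var<storage, read_write> order: array<u32>;  // splat indices, permuted with keys

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) gid: vec3<u32>) {
  let i = gid.x;
  let partner = i ^ params.j;
  if (partner <= i || partner >= arrayLength(&keys)) { return; }

  let ascending = (i & params.k) == 0u;
  let a = keys[i];
  let b = keys[partner];
  // In an ascending run swap when a > b, in a descending run when a < b.
  if (select(a < b, a > b, ascending)) {
    keys[i] = b;       keys[partner] = a;
    let t = order[i];  order[i] = order[partner];  order[partner] = t;
  }
}`;

// Host side: walk the bitonic network, one dispatch per (k, j) step.
// `dispatchStep` is assumed to upload {k, j} to the uniform buffer and then
// call dispatchWorkgroups(Math.ceil(n / 256)) with the pipeline above bound.
function encodeSortPasses(n: number, dispatchStep: (k: number, j: number) => void): void {
  for (let k = 2; k <= n; k *= 2) {
    for (let j = k / 2; j >= 1; j /= 2) {
      dispatchStep(k, j);
    }
  }
}
```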
But yeah, when you put it like that, it would have been much better if they had simply added compute shaders to WebGL, because other than that there really is no point to WebGPU.
They did; Intel had the majority of the work done, and then Google sabotaged the effort because WebGPU was right around the corner.
https://registry.khronos.org/webgl/specs/latest/2.0-compute/
https://github.com/9ballsyndrome/WebGL_Compute_shader/issues...
Access to slightly more recent GPU features (e.g. WebGL2 is stuck on a feature set that was mainstream ca. 2008, while WebGPU is on a feature set that was mainstream ca. 2015).
All of these new features could easily have been added to WebGL. There was no need to create a fundamentally different API just for compute shaders.
GL programming only feels 'natural' if you've been following GL development closely since the late 1990s and have learned to accept all the design compromises made for the sake of backward compatibility. If you come from other 3D APIs and have never touched GL before, it's one "WTF were they thinking" after another (just look at VAOs as an example of a really poorly designed GL feature).
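To make the VAO complaint concrete, here is a small hypothetical WebGL2 snippet (the buffer names are made up) showing the implicit state capture that surprises people coming from other APIs: the VAO records the ELEMENT_ARRAY_BUFFER binding and whatever ARRAY_BUFFER happened to be bound when vertexAttribPointer was called, but not the ARRAY_BUFFER binding itself.

```ts
// Hypothetical WebGL2 snippet illustrating the VAO state-capture rules.
const gl = document.createElement("canvas").getContext("webgl2")!;

const positions = gl.createBuffer()!;
const indices = gl.createBuffer()!;
const vao = gl.createVertexArray()!;

gl.bindVertexArray(vao);

// The ARRAY_BUFFER binding is *not* part of VAO state...
gl.bindBuffer(gl.ARRAY_BUFFER, positions);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array([0, 0, 0, 1, 0, 0, 0, 1, 0]), gl.STATIC_DRAW);
// ...but this call silently bakes the *currently bound* ARRAY_BUFFER into attribute 0.
gl.vertexAttribPointer(0, 3, gl.FLOAT, false, 0, 0);
gl.enableVertexAttribArray(0);

// The ELEMENT_ARRAY_BUFFER binding, in contrast, *is* captured by the VAO,
// so binding an index buffer while no VAO is bound goes nowhere useful.
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indices);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array([0, 1, 2]), gl.STATIC_DRAW);

gl.bindVertexArray(null);
```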
While I would have designed a few things differently in WebGPU (especially around the binding model), it's still a much better API than WebGL2 from every angle.
The limited feature set of WebGPU is mostly down to Vulkan 1.0 drivers on Android devices, I guess, but unfortunately there's no realistic way to design a web 3D API that ignores shitty Android phones.