Comment by akomtu
18 hours ago
You don't need webgpu for that. It's a standard vertex shader -> fragment shader pass with the blending mode set to addition.
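For concreteness, a minimal sketch of such a pass in WebGL2: points drawn with additive blending so overdraw accumulates intensity. All names here (the canvas lookup, attribute names, the 0.01 per-point contribution) are illustrative, not from the comment.

```ts
// Hedged sketch: 1M points drawn as GL_POINTS with additive blending.
const canvas = document.querySelector("canvas")!;
const gl = canvas.getContext("webgl2")!;

const vsSrc = `#version 300 es
in vec2 a_pos;                        // point position in clip space
void main() {
  gl_Position = vec4(a_pos, 0.0, 1.0);
  gl_PointSize = 1.0;                 // one pixel per point
}`;

const fsSrc = `#version 300 es
precision highp float;
out vec4 o_color;
void main() {
  o_color = vec4(0.01, 0.01, 0.01, 1.0);  // small constant contribution per point
}`;

function compile(type: number, src: string): WebGLShader {
  const s = gl.createShader(type)!;
  gl.shaderSource(s, src);
  gl.compileShader(s);
  return s;
}
const program = gl.createProgram()!;
gl.attachShader(program, compile(gl.VERTEX_SHADER, vsSrc));
gl.attachShader(program, compile(gl.FRAGMENT_SHADER, fsSrc));
gl.linkProgram(program);

const points = new Float32Array(2_000_000);    // 1M (x, y) pairs, filled elsewhere
const buf = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, buf);
gl.bufferData(gl.ARRAY_BUFFER, points, gl.STATIC_DRAW);
const loc = gl.getAttribLocation(program, "a_pos");
gl.enableVertexAttribArray(loc);
gl.vertexAttribPointer(loc, 2, gl.FLOAT, false, 0, 0);

gl.enable(gl.BLEND);
gl.blendFunc(gl.ONE, gl.ONE);                   // additive: dst = dst + src
gl.useProgram(program);
gl.drawArrays(gl.POINTS, 0, points.length / 2); // overdraw accumulates intensity
```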
Drawing lots of single pixels with alpha blending is probably one of the least efficient ways to use the rasterizer though. A good compute shader implementation would be substantially faster.
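For concreteness, a rough sketch of what such a compute path could look like in WGSL: one invocation per point, with an atomicAdd into a per-pixel counter buffer, bypassing the rasterizer. The binding layout and names are assumptions, not anything from the thread.

```ts
// Hypothetical WGSL kernel: bin points into width*height counters.
// Dispatch with ceil(count / 256) workgroups.
const computeWgsl = /* wgsl */ `
struct Params { width: u32, height: u32, count: u32 }

@group(0) @binding(0) var<uniform> params: Params;
@group(0) @binding(1) var<storage, read> points: array<vec2f>;            // normalized [0,1) coords
@group(0) @binding(2) var<storage, read_write> bins: array<atomic<u32>>;  // width*height counters

@compute @workgroup_size(256)
fn main(@builtin(global_invocation_id) id: vec3u) {
  if (id.x >= params.count) { return; }
  let p = points[id.x];
  let x = min(u32(p.x * f32(params.width)),  params.width  - 1u);
  let y = min(u32(p.y * f32(params.height)), params.height - 1u);
  atomicAdd(&bins[y * params.width + x], 1u);   // count points per pixel bin
}`;
```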
At 1M points it hardly makes a difference. Besides, a 1 point -> 1 pixel mapping is good enough for a demo, but in practice it will produce nasty aliasing artifacts because real datasets aren't aligned with pixel coordinates. So you have to draw each point as at least a 2x2 square with precise shading, and we're back to the rasterizer pipeline. Edit: what actually needs to be computed is the integral of the point dataset over each square pixel, and that depends on the shape of each point, even when it's smaller than a pixel.
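To illustrate the "integral over each square pixel" point in the edit above: in the simplest case where each point has a 1x1-pixel box footprint, that integral reduces to bilinear weights spread over the four covered pixels. A CPU-side sketch with made-up names:

```ts
// Illustrative box-filter splat: distribute one point's unit mass over the
// four pixels its 1x1 footprint overlaps.
function splat(bins: Float32Array, width: number, height: number, px: number, py: number) {
  const x0 = Math.floor(px - 0.5), y0 = Math.floor(py - 0.5);
  const fx = px - 0.5 - x0, fy = py - 0.5 - y0;       // fractional offset of the box
  const w = [ (1 - fx) * (1 - fy), fx * (1 - fy),     // area of the box falling
              (1 - fx) * fy,       fx * fy ];          // into each of the 4 pixels
  const idx = [ [x0, y0], [x0 + 1, y0], [x0, y0 + 1], [x0 + 1, y0 + 1] ];
  idx.forEach(([x, y], i) => {
    if (x >= 0 && y >= 0 && x < width && y < height) bins[y * width + x] += w[i];
  });
}
```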
Aren't we at petaflops now with GPUs? 1M or even 1G points should be no issue if it renders to a framebuffer and doesn't go through mountains of JS framework rubbish followed by mountains of GTK/Qt/.NET rubbish.
That works if more overdraw = more intensity is all you care about, and it may well be good enough for many kinds of charts. But with heat map plots one usually wants a proper mapping from some intensity domain to a color map, plus a legend with a color gradient that tells you which color represents which value. That requires binning, counting per bin, and determining the min and max values.
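A rough sketch of that mapping step, assuming the bin counts already exist and some color ramp is available (the viridis parameter is a stand-in, not anything from the thread):

```ts
// Normalize bin counts by min/max, map through a color ramp, return RGBA pixels.
// The legend then labels t=0 with the min count and t=1 with the max count.
function toColors(bins: Uint32Array, viridis: (t: number) => [number, number, number]) {
  let min = Infinity, max = 0;
  for (const c of bins) { if (c < min) min = c; if (c > max) max = c; }
  const out = new Uint8ClampedArray(bins.length * 4);
  for (let i = 0; i < bins.length; i++) {
    const t = max > min ? (bins[i] - min) / (max - min) : 0;  // intensity in [0,1]
    const [r, g, b] = viridis(t);                             // 0-255 components assumed
    out.set([r, g, b, 255], i * 4);
  }
  return out;
}
```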
Emm.. no, you just do one render pass into a temp framebuffer with a single red channel, then another fragment shader maps it to an RGB palette.
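A minimal sketch of what that second pass could look like, assuming the accumulation pass wrote counts into the red channel of a texture and a 1xN palette texture encodes the color map; the uniform names, the normalization by u_maxValue, and the omitted full-screen-quad vertex shader are all assumptions:

```ts
// Hypothetical palette-mapping fragment shader for the second pass.
const paletteFs = `#version 300 es
precision highp float;
uniform sampler2D u_accum;     // single-channel accumulation from pass 1
uniform sampler2D u_palette;   // 1xN color ramp texture
uniform float u_maxValue;      // maximum accumulated value, computed/estimated separately
in vec2 v_uv;
out vec4 o_color;
void main() {
  float v = texture(u_accum, v_uv).r;
  float t = clamp(v / u_maxValue, 0.0, 1.0);
  o_color = texture(u_palette, vec2(t, 0.5));
}`;
```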
Wait, does additive blending let you draw to temp framebuffers with high precision and without clamping? Even so, you'd still need to know the maximum value in the temp framebuffer.
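For reference, in WebGL2 the unclamped-precision part hinges on optional extensions: rendering to a float texture needs EXT_color_buffer_float, and blending into a 32-bit float target additionally needs EXT_float_blend. A small capability-check sketch:

```ts
// Probe whether unclamped additive blending into a float framebuffer is available.
const gl2 = document.createElement("canvas").getContext("webgl2")!;
const canRenderFloat  = gl2.getExtension("EXT_color_buffer_float") !== null;
const canBlendFloat32 = gl2.getExtension("EXT_float_blend") !== null;
console.log({ canRenderFloat, canBlendFloat32 });
```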