Comment by MindSpunk

19 hours ago

Drawing lots of single pixels with alpha blending is probably one of the least efficient ways to use the rasterizer though. A good compute shader implementation would be substantially faster.
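
Roughly what that compute path could look like, sketched in CUDA; the kernel name `splat_points`, the `Point` layout, and the resolve-to-color pass are illustrative assumptions, not from any specific implementation:

```cuda
// Hypothetical compute-style splat: one thread per point, accumulating
// into a float density buffer with atomics instead of going through the
// fixed-function blend unit. Names and layouts are illustrative.
#include <cuda_runtime.h>

struct Point { float x, y; };   // position in pixel coordinates

__global__ void splat_points(const Point* pts, int n,
                             float* accum,     // w*h accumulation buffer
                             int w, int h, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    int px = (int)pts[i].x;
    int py = (int)pts[i].y;
    if (px < 0 || px >= w || py < 0 || py >= h) return;

    // Additive accumulation; a separate resolve pass maps density to color.
    atomicAdd(&accum[py * w + px], alpha);
}
```

In practice you would presumably also tile or sort points to reduce atomic contention on hot pixels, which is where much of the win over fixed-function alpha blending would come from.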

At 1M points it hardly makes a difference. Besides, a 1 point -> 1 pixel mapping is good enough for a demo, but in practice it produces nasty aliasing artifacts because real datasets aren't aligned with pixel coordinates. So you have to draw each point as at least a 2x2 square with precise shading, and we are back to the rasterizer pipeline. Edit: what actually needs to be computed is the integral of the point dataset over each square pixel, and that depends on the shape of each point, even when it's smaller than a pixel.
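
A sketch of that integral for the simplest case, a square point footprint no wider than one pixel, where a box filter reduces it to the exact overlap area between the point's square and each of the (at most four) pixels it touches. CUDA again, with all names illustrative:

```cuda
// Weight each touched pixel by the exact overlap area between the
// point's square footprint (side s, assumed <= 1 pixel) and that pixel.
// `Point` is the same illustrative struct as in the sketch above.
struct Point { float x, y; };

__device__ float overlap_1d(float lo, float hi, float cell)
{
    // Length of the intersection between [lo, hi] and [cell, cell + 1].
    return fmaxf(fminf(hi, cell + 1.0f) - fmaxf(lo, cell), 0.0f);
}

__global__ void splat_points_box(const Point* pts, int n,
                                 float* accum, int w, int h,
                                 float s, float alpha)
{
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= n) return;

    float x0 = pts[i].x - 0.5f * s, x1 = x0 + s;
    float y0 = pts[i].y - 0.5f * s, y1 = y0 + s;

    // A footprint <= 1 pixel wide touches at most a 2x2 block of pixels.
    for (int py = (int)floorf(y0); py <= (int)floorf(y1); ++py)
        for (int px = (int)floorf(x0); px <= (int)floorf(x1); ++px) {
            if (px < 0 || px >= w || py < 0 || py >= h) continue;
            float area = overlap_1d(x0, x1, (float)px)
                       * overlap_1d(y0, y1, (float)py);
            // Normalize so a fully covered point still contributes alpha.
            atomicAdd(&accum[py * w + px], alpha * area / (s * s));
        }
}
```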

  • No difference for human visuals, for discrete data, or for "continuous" f32 data?

  • Aren't we at petaflops now with GPUs? 1M or even 1G points should be no issue if it renders to a framebuffer and doesn't go through mountains of JS framework rubbish followed by mountains of GTK/Qt/.NET rubbish.

    • Not true. Fill rate and memory speed are still huge bottlenecks. The issue is not “rubbish” but memory speed. It is almost always memory in some form: cache, RAM, disk, etc.

      There is this misconception that using JS or C# to tell a GPU what to do is somehow slower than Rust. It only is if you're crunching the data on the CPU; moving memory to the GPU and telling the GPU to crunch it is virtually identical (see the back-of-envelope sketch below).

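For scale on the memory-speed point above, a back-of-envelope sketch; the numbers (1G points, 8-byte positions, ~1 TB/s of device bandwidth) are assumptions, not measurements from the thread:

```cuda
// Back-of-envelope, host side only (all numbers are assumptions, not
// measurements): how long does it take just to stream point positions?
#include <stdio.h>

int main(void)
{
    double points    = 1e9;   // 1G points
    double bytes_pp  = 8.0;   // two 32-bit floats per position
    double bandwidth = 1e12;  // ~1 TB/s of device memory bandwidth
    double ms = points * bytes_pp / bandwidth * 1e3;
    printf("%.1f ms per frame just to read positions\n", ms);  // -> 8.0 ms
    return 0;
}
```

Even before any blending or shading, streaming the positions alone costs several milliseconds per frame, so the floor is set by memory traffic, not by which language issues the commands.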