
Comment by ukoki

5 hours ago

Not OP, but I'm currently making a city-builder computer game with a large procedurally generated world. The terrain height at any point in the world is defined by a function that takes a small number of constant parameters plus the horizontal position in the world, and returns the height of the terrain at that position.
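To make the shape of that concrete, here's a minimal sketch of such a function in Rust. The struct fields and the layered-sine math are my own illustrative assumptions, not the commenter's actual implementation (a real game would more likely use fractal noise), but it shows the pattern: a few constant parameters plus an (x, z) position in, one height out, fully deterministic.

```rust
// Hypothetical constant parameters for the height function (assumed names).
struct TerrainParams {
    amplitude: f32, // overall vertical scale
    frequency: f32, // horizontal scale of the largest features
    octaves: u32,   // number of progressively finer detail layers
}

// Height at horizontal world position (x, z): pure function of the
// parameters and the position, so it can be evaluated anywhere, in any order.
fn terrain_height(p: &TerrainParams, x: f32, z: f32) -> f32 {
    let mut height = 0.0;
    let mut amp = p.amplitude;
    let mut freq = p.frequency;
    for _ in 0..p.octaves {
        // Each octave adds finer, lower-amplitude ripples on top of the last.
        height += amp * ((x * freq).sin() * (z * freq).cos());
        amp *= 0.5;
        freq *= 2.0;
    }
    height
}

fn main() {
    let params = TerrainParams { amplitude: 10.0, frequency: 0.01, octaves: 4 };
    let h = terrain_height(&params, 512.0, 256.0);
    println!("height at (512, 256) = {h}");
}
```

Because the function is pure, the same call on the GPU (ported to a shader) and on the CPU should agree up to floating-point differences, which is exactly the property the rest of the comment is about.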

I need the heights on the GPU so I can deform the terrain meshes to match. I need the heights on the CPU so I can tell where the player is clicking on the terrain and where to place things.

Rather than generating a heightmap on the CPU and passing a large heightmap texture to the GPU, I have implemented identical height-generating functions in Rust (CPU) and in WebGL shader code (GPU). As you might imagine, it's very easy for these to diverge, so I have to maintain a large set of tests verifying that the generated heights are identical between the two implementations.

Being able to write this implementation once and run it on both the CPU and the GPU would give me much better guarantees that the results are the same. (Although, because of architecture differences and floating-point handling, the results will never match perfectly; I just need them to be within an acceptable tolerance.)
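A tolerance-based cross-check like the one described might look roughly like this in Rust. Both functions here are hypothetical stand-ins: `height_rust` plays the CPU implementation, and `height_glsl_emulated` stands in for the value read back from the GPU path, written with a different association order since that's exactly the kind of thing that makes floating-point results drift.

```rust
// Stand-in for the CPU (Rust) height implementation.
fn height_rust(x: f32, z: f32) -> f32 {
    ((x * 0.01).sin() * (z * 0.01).cos()) * 10.0
}

// Stand-in for the GPU result: same math, different association order,
// so the rounding can differ in the last bits.
fn height_glsl_emulated(x: f32, z: f32) -> f32 {
    (x * 0.01).sin() * ((z * 0.01).cos() * 10.0)
}

// Pass if the two implementations agree within `tol` at this position.
fn heights_match(x: f32, z: f32, tol: f32) -> bool {
    (height_rust(x, z) - height_glsl_emulated(x, z)).abs() <= tol
}

fn main() {
    // Sample a grid of world positions and verify agreement within tolerance.
    let tol = 1e-4;
    for xi in 0..8 {
        for zi in 0..8 {
            let (x, z) = (xi as f32 * 100.0, zi as f32 * 100.0);
            assert!(heights_match(x, z, tol), "divergence at ({x}, {z})");
        }
    }
    println!("all sampled heights within tolerance {tol}");
}
```

In the real setup the GPU side would be read back from a render target or compute buffer rather than emulated, but the comparison logic is the same: sample positions, compare, and fail on any difference beyond tolerance.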

That's a good application, but likely not one requiring a full standard library on the GPU? Procedurally generating data on the GPU isn't uncommon, AFAIK. It wasn't when I was dabbling in GPGPU stuff ~10 years ago.

If you wrote it in OpenCL, or via Intel's libraries, or via Torch or ArrayFire or whatever, you could dispatch it to both CPU and GPU at will.