
Comment by EarlKing

4 days ago

Well, if the user isn't going to be sharing the GPU with another task, then you could push things back to install time. In other words: at install time you benchmark the relevant shaders, rewrite as necessary, recompile, and save the results. Now the user has a version of your shaders optimized for their particular configuration. Since installation times are already somewhat lengthy, you can be reasonably certain that no one will miss the extra minute or two needed to run benchmarks, especially if it results in installing optimized code.
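
As a rough sketch of what that install-time tuning loop could look like (everything here is illustrative: `variants` maps names to candidate kernels, `run` executes one, and the JSON cache stands in for whatever "save the results" means in your build):

    import json
    import time
    from pathlib import Path

    CACHE = Path("tuned_shaders.json")

    def pick_fastest(variants, run, warmup=3, iters=20):
        # Benchmark each candidate and return the name of the fastest one.
        best_name, best_time = None, float("inf")
        for name, kernel in variants.items():
            for _ in range(warmup):          # warm up caches / driver state
                run(kernel)
            start = time.perf_counter()
            for _ in range(iters):
                run(kernel)
            elapsed = (time.perf_counter() - start) / iters
            if elapsed < best_time:
                best_name, best_time = name, elapsed
        return best_name

    def tune_at_install(variants, run):
        # Run once at install time; later launches just read the cached pick.
        if CACHE.exists():
            return json.loads(CACHE.read_text())["variant"]
        winner = pick_fastest(variants, run)
        CACHE.write_text(json.dumps({"variant": winner}))
        return winner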

I'm coming from the neural network world rather than the shader world, but I'd say you're absolutely right!

Right now, NNs and their workloads are changing quickly enough that people tend to prefer runtime optimization (like the dynamic/JIT compilation provided by Torch's compiler), but when you're confident you understand the workload and have the know-how, you can do static compilation (e.g. with ONNX or TensorRT).
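
A minimal PyTorch sketch of the two routes (the toy model is made up; torch.compile and torch.onnx.export are the real entry points, and something like TensorRT would consume the exported file in a separate offline step):

    import torch

    model = torch.nn.Sequential(
        torch.nn.Linear(128, 256),
        torch.nn.ReLU(),
        torch.nn.Linear(256, 10),
    ).eval()
    example = torch.randn(1, 128)

    # Runtime/JIT route: kernels get specialized on the first call,
    # on the machine that will actually run them.
    compiled = torch.compile(model)
    with torch.no_grad():
        compiled(example)  # first call triggers compilation

    # Static route: export once, then hand the artifact to an
    # inference engine (ONNX Runtime, TensorRT, ...) ahead of time.
    torch.onnx.export(model, (example,), "model.onnx")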

I work on a serverless infrastructure product that gets used for NN inference on GPUs, so we're very interested in ways to amortize as much of that compilation and configuration work as possible. Maybe someday we'll even have something like what Redshift has in its query engine: pre-compiled binaries cached across users.
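
A minimal sketch of what the key for such a shared artifact cache might look like, assuming a compiled kernel is only valid for a matching model, GPU, and toolchain (all names here are illustrative, not our actual implementation):

    import hashlib
    import torch

    def artifact_cache_key(model_bytes: bytes) -> str:
        # Content-address the compiled artifact by everything that
        # could invalidate it: the model plus the GPU and toolchain.
        h = hashlib.sha256()
        h.update(model_bytes)
        h.update(torch.cuda.get_device_name(0).encode())  # e.g. "NVIDIA A100-SXM4-40GB"
        h.update((torch.version.cuda or "cpu").encode())  # CUDA toolkit version
        h.update(torch.__version__.encode())
        return h.hexdigest()

Two users with the same GPU, driver stack, and model would then compute the same key and could share one pre-compiled binary.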

This reminds me of the dreaded Vulkan shader compilation dialog that pops up when you try to play some games after a driver update.

  • People complain a lot about shader compilation, but compiling shaders at start-up is much nicer than a game skipping that ahead-of-time work and compiling them mid-play, right when those shaders are first needed.