Comment by Grosvenor
1 year ago
What I want is a Linear Algebra interface, as Gilbert Strang taught it. I'll "program" in LinAlg, and a JIT can compile it to whatever wonky way your HW requires.
I don't want to know about the HW at all; the higher-level my code, the more opportunities the JIT has to optimize it.
What I really want is something like Mathematica that can JIT to GPU.
As another commenter mentioned, all the APIs assume you're targeting a discrete GPU at the end of a slow bus, without shared memory. I would kill for an APU that could freely allocate memory for GPU or CPU and change ownership with the speed of a page fault or kernel transition.
> What I really want is something like Mathematica that can JIT to GPU.
https://juliagpu.org/
https://github.com/jax-ml/jax
To expand on this link: this is probably the closest you're going to get to 'I'll "program" in LinAlg, and a JIT can compile it to whatever wonky way your HW requires' right now. JAX implements a good portion of the NumPy interface - which is the most common interface for linear algebra-heavy code in Python - so you can often just write NumPy code, but with `jax.numpy` instead of `numpy`, then wrap it in `jax.jit` to have it run on the GPU.
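To make that concrete, here's a minimal sketch of the pattern being described (the `normal_equations` function and the specific matrices are my own illustrative choices, not from the thread): ordinary NumPy-style linear algebra, wrapped in `jax.jit`. On a machine with a GPU/TPU backend installed, the same code is compiled for that device; on a CPU-only install it falls back to the CPU backend.

```python
import jax
import jax.numpy as jnp  # drop-in replacement for most of numpy

@jax.jit  # XLA compiles this for whatever backend is available (CPU/GPU/TPU)
def normal_equations(A, b):
    # Least-squares solution via the normal equations: (A^T A) x = A^T b
    return jnp.linalg.solve(A.T @ A, A.T @ b)

A = jnp.array([[1.0, 0.0],
               [0.0, 2.0],
               [1.0, 1.0]])
b = jnp.array([1.0, 2.0, 2.0])
x = normal_equations(A, b)  # solves to x ≈ [1.0, 1.0]
```

The point of the comment above is exactly this: nothing in the function mentions the hardware, and the JIT decides how to lower the matrix ops to the target.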
I was about to say that it is literally just Jax.
It genuinely deserves to exist alongside pytorch. It's not just Google's latest framework that you're forced to use to target TPUs.
Like, PyTorch? And the new Mac minis have 512 GB of unified memory.