Comment by LightMachine
2 years ago
TBH, I'm somewhat of a Bitcoin maximalist too, which is kinda ironic since I hold no BTC; I guess the network's lack of expressivity counters the great appreciation I have for its stable monetary policy and true decentralization. That's why, at HOC, I only accepted external funding under the condition that Kindelia may not have a pre-mine - and I'm very glad to have found VCs that share this vision. (Kindelia is one of our internal projects, and is essentially Bitcoin with HVM-based contracts.)
To address your questions:
Yes, we have plans for GPU backends. In fact, I even wrote a working prototype some time ago! It reduces λ-terms on the GPU using the same rules as HVM, with all the locks and atomics in place, and it seems to achieve a near-ideal speedup, even with thousands of NVIDIA cores:
https://gist.github.com/VictorTaelin/e924d92119eab8b1f57719a...
That said, it is still just a single-file prototype. Sadly, we couldn't include a GPU backend in this funding round, but it is definitely something we'll invest in down the line, especially if we manage to grow as a company. Imagine writing pure Haskell functions and having them run on thousands of GPU cores with no effort? Pure functional shaders, physics engines...
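To give a rough idea of the shape of that approach (this is a minimal sketch I'm typing out here, not the gist above; the Pair struct, the reduce_pass kernel and the single "erase both nodes" rule are simplified and invented for illustration), each thread claims one pending redex with an atomic compare-and-swap and then applies a rewrite rule:

  #include <cuda_runtime.h>

  // A redex: indices of two nodes connected by their principal ports.
  struct Pair { unsigned a; unsigned b; };

  // Each thread tries to claim one redex via atomic compare-and-swap,
  // then rewrites it. The toy rule below just erases both nodes; a real
  // interaction-net runtime would dispatch on node tags and relink ports,
  // again guarding shared wires with atomics.
  __global__ void reduce_pass(const Pair* redexes, int* claimed,
                              unsigned* nodes, int count) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i >= count) return;
    if (atomicCAS(&claimed[i], 0, 1) != 0) return;  // another thread got it
    Pair r = redexes[i];
    atomicExch(&nodes[r.a], 0u);  // erase node a
    atomicExch(&nodes[r.b], 0u);  // erase node b
  }

  int main() {
    const int N = 1024;
    Pair* redexes; int* claimed; unsigned* nodes;
    cudaMalloc(&redexes, N * sizeof(Pair));
    cudaMalloc(&claimed, N * sizeof(int));
    cudaMalloc(&nodes,   2 * N * sizeof(unsigned));
    cudaMemset(redexes, 0, N * sizeof(Pair));  // dummy net: all indices 0
    cudaMemset(claimed, 0, N * sizeof(int));
    reduce_pass<<<(N + 255) / 256, 256>>>(redexes, claimed, nodes, N);
    cudaDeviceSynchronize();
    cudaFree(redexes); cudaFree(claimed); cudaFree(nodes);
    return 0;
  }

The actual rules are more involved, but the core trick is the same: each redex is claimed atomically, so thousands of cores can rewrite disjoint parts of the graph at once without stepping on each other.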
Regarding PyTorch, I don't think HVM would be more efficient for ML, because PyTorch is already optimized to use GPUs to their limit. HVM should be seen as a way to run high-level programs in a massively parallel fashion without having to write low-level CUDA code yourself, but it won't outperform hand-optimized CUDA on GPUs. That said, I do believe interaction-net-based processors would greatly outperform GPUs by breaking the von Neumann bottleneck and unifying memory and computation in billions of nano interaction cores, and I believe such an architecture could one day empower AI and make our LLMs and RNNs much faster.