treksis 5 hours ago
How fast is this compared to the Python-based implementations?

antirez 3 hours ago
Very slow currently; I added the benchmarks to the README. To go faster, it needs inference kernels beyond the current float32-only ones.

rcarmo 4 hours ago
The Python libraries are themselves written in C/C++, so performance-wise this, at best, cuts through some glue. Don't think of this as a performance-driven implementation.

throwaway314155 3 hours ago
PyTorch MPS is about 10x faster per the README.md.

antirez 2 hours ago
I cut the speed difference in half by keeping the activations on the GPU. Time to sleep, but I'll continue tomorrow.