Unfortunately uv is usually insufficient for certain ML deployments in Python. It's a real pain to install PyTorch/CUDA with all the necessary drivers and C++ dependencies, so people tend to fall back to conda.
Any modern tips / life hacks for this situation?
Are there particular libraries that make your setup difficult? I just manually set the index and source following the docs (didn’t know about the auto backend feature) and pin a specific version if I really have to with `uv add "torch==2.4"`. This works pretty well for me on projects that use dgl, which relies heavily on C++ extensions and can be pretty finicky about particular versions.
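Roughly what that setup looks like in pyproject.toml, going off the uv PyTorch guide (the index name and the cu124 build below are just an example; pick whatever CUDA build matches your cluster's driver):

```toml
[project]
name = "example-project"        # hypothetical project name
version = "0.1.0"
requires-python = ">=3.10"
dependencies = ["torch==2.4"]   # the pin mentioned above

# Resolve torch against the PyTorch CUDA wheel index; everything else comes from PyPI.
[tool.uv.sources]
torch = { index = "pytorch-cu124" }

[[tool.uv.index]]
name = "pytorch-cu124"
url = "https://download.pytorch.org/whl/cu124"
explicit = true                 # never use this index for other packages
```

With that in place, `uv sync` pulls torch from the CUDA wheel index and everything else from PyPI.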
This is in a conventional HPC environment, and I’ve found it way better than conda since the dependency solves are so much faster and I no longer see PyTorch silently getting downgraded to the CPU build if I install a new library. Maybe I’ve been using conda poorly, though?
You should give https://pixi.sh/latest/ a try (I am not involved in the project).
It is a little more focused on scientific computing than uv, which is more general, so it might be a better option in your case.
https://docs.astral.sh/uv/guides/integration/pytorch/#automa...
doesn't work?
The problem is that you still need to install all the low-level stuff manually; conda does it automatically.
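For a concrete sense of what "automatically" means here, this is the kind of conda one-liner people lean on, where the pytorch-cuda metapackage pulls in the CUDA runtime libraries for you (channels and versions are only illustrative):

```sh
# pytorch-cuda brings in the matching CUDA runtime from the nvidia channel,
# so there is no separate toolkit install step.
conda install pytorch torchvision torchaudio pytorch-cuda=12.1 -c pytorch -c nvidia
```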
Would it be possible to use Docker to manage native dependencies?
It would be a fun callback if the demo was a factorial server:
https://joearms.github.io/#2013-11-21%20My%20favorite%20Erla...