Comment by pshirshov
2 days ago
It won't work well if you deal with non ubuntu+cuda combination. Better just fail with a reasonable message.
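For illustration, a minimal sketch of what such an early fail-fast check could look like in Python (the platform probe and the error message are assumptions, not Unsloth's actual code):

```python
# Hypothetical environment check: fail loudly with a clear message
# instead of limping along on unsupported platform combinations.
import platform
import shutil
import sys

def check_supported_environment() -> None:
    os_name = platform.system()
    # Presence of nvcc on PATH is a rough proxy for a CUDA toolkit install.
    has_cuda = shutil.which("nvcc") is not None

    if os_name != "Linux" or not has_cuda:
        sys.exit(
            "Unsupported environment: this tool currently targets "
            f"Linux (Ubuntu) with CUDA. Detected OS: {os_name}, "
            f"CUDA toolkit found: {has_cuda}. "
            "See the troubleshooting docs for alternatives."
        )

check_supported_environment()
```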
For now I'm redirecting people to our docs https://docs.unsloth.ai/basics/troubleshooting-and-faqs#how-...
But I'm working on more cross-platform docs as well!
My current solution is to package llama.cpp as a custom Nix formula (the one in nixpkgs has a broken conversion script) and run it myself. I wasn't able to run Unsloth on ROCm, neither for inference nor for conversion, so I'm sticking with PEFT for now, but I'll attempt to re-package it again.
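For reference, a minimal PEFT LoRA setup along the lines of that fallback might look like this (the model name and hyperparameters are assumed for illustration, not taken from the thread):

```python
# Hypothetical PEFT LoRA sketch as an Unsloth fallback.
from transformers import AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
config = LoraConfig(
    r=8,                                   # LoRA rank
    lora_alpha=16,                         # scaling factor
    target_modules=["q_proj", "v_proj"],   # attention projections to adapt
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, config)
model.print_trainable_parameters()  # sanity-check the adapter size
```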
Oh interesting! For ROCm there are some installation instructions here: https://rocm.docs.amd.com/projects/ai-developer-hub/en/lates...
I'm working with the AMD folks to make the process easier, but it looks like I first have to move from pyproject.toml to setup.py (which allows building binaries).
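A bare-bones setup.py for building a native extension could look roughly like this (package and source names are placeholders, not Unsloth's actual layout):

```python
# Hypothetical minimal setup.py: unlike a pure pyproject.toml setup,
# this can declare compiled extension modules via setuptools.
from setuptools import setup, Extension, find_packages

setup(
    name="mypackage",
    version="0.1.0",
    packages=find_packages(),
    ext_modules=[
        Extension(
            "mypackage._kernels",        # compiled extension module
            sources=["src/kernels.c"],   # native sources to build
        )
    ],
)
```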