
Comment by pshirshov

3 days ago

My current solution is to pack llama.cpp as a custom Nix derivation (the one in nixpkgs has a broken conversion script) and run it myself. I wasn't able to run unsloth on ROCm, neither for inference nor for conversion, so I'm sticking with peft for now, but I'll attempt to re-package it again.
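
For context, the peft setup I'm falling back to is roughly this (a minimal sketch; the model name and LoRA hyperparameters are placeholders, not what I actually use):

    # Minimal LoRA fine-tuning setup with peft (illustrative sketch;
    # model name and hyperparameters are placeholders).
    from transformers import AutoModelForCausalLM
    from peft import LoraConfig, get_peft_model

    model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-3.2-1B")  # placeholder model

    lora_config = LoraConfig(
        r=16,                                  # rank of the LoRA update matrices
        lora_alpha=32,                         # scaling factor
        target_modules=["q_proj", "v_proj"],   # attention projections to adapt
        lora_dropout=0.05,
        task_type="CAUSAL_LM",
    )

    model = get_peft_model(model, lora_config)
    model.print_trainable_parameters()  # shows how few parameters LoRA actually trains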

Oh interesting! For ROCm there are some installation instructions here: https://rocm.docs.amd.com/projects/ai-developer-hub/en/lates...
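
A quick way to sanity-check a ROCm PyTorch install (assuming torch was installed from AMD's ROCm wheel index):

    # Verify that the ROCm build of PyTorch sees the GPU.
    import torch

    print(torch.__version__)           # ROCm builds carry a "+rocm" suffix
    print(torch.version.hip)           # HIP version string on ROCm builds, None on CUDA builds
    print(torch.cuda.is_available())   # ROCm devices are exposed through the CUDA API surface
    if torch.cuda.is_available():
        print(torch.cuda.get_device_name(0))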

I'm working with the AMD folks to make the process easier, but it looks like I first have to move from pyproject.toml to setup.py (which allows building binaries).
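
Roughly, the setup.py route looks like this (just a sketch; the module and source names are placeholders, not our actual build setup):

    # Minimal setup.py that compiles a native extension (illustrative only).
    from setuptools import setup, Extension

    ext = Extension(
        name="example_kernels",        # placeholder extension module name
        sources=["csrc/kernels.c"],    # placeholder C source
        extra_compile_args=["-O3"],
    )

    setup(
        name="example-package",
        version="0.1.0",
        ext_modules=[ext],             # the hook setup.py exposes for building binaries
    )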

  • Yes, it's trivial with the pre-built vllm Docker image, but I need a declarative way to configure my environment. The lack of prebuilt ROCm wheels for vllm is the main hindrance for now, and I was shocked to see the sudo apt-get in your code. Ideally, llama.cpp should publish its gguf Python library and the conversion script to PyPI with every release, so you can just add them as a dependency (see the sketch below). vllm should start publishing a ROCm wheel, and after that unsloth would need to start publishing two versions: a CUDA one and a ROCm one.
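
    With gguf consumable as an ordinary dependency, the writer side would look roughly like this (a sketch based on the package's GGUFWriter API; check the exact method names against whichever release you pin):

        # Sketch: write a tiny GGUF file via the gguf Python package.
        # Tensor name, shape, and metadata are made up for illustration.
        import numpy as np
        from gguf import GGUFWriter

        writer = GGUFWriter("tiny.gguf", "llama")   # output path, architecture name
        writer.add_tensor("tensor1", np.ones((32,), dtype=np.float32))  # placeholder tensor

        writer.write_header_to_file()
        writer.write_kv_data_to_file()
        writer.write_tensors_to_file()
        writer.close()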