
Comment by danielmarkbruce

10 hours ago

Unwilling to deal with PyTorch? You couldn't hobble yourself any more if you tried.

If you want to train/sample large models, then use what the rest of the industry uses.

My use case is different. I want something I can run quickly on one GPU without worrying about whether the card is supported.

I am interested in convenience, not in squeezing out the last bit of performance from a card.

  • You wildly misunderstand PyTorch.

    • What is there to misunderstand? Most of the time it doesn't even install properly on my machine, and you have to use a specific Python version.

      I gave up on all tools that depend on it for inference. llama.cpp compiles cleanly on my system with Vulkan support. I want the same simplicity for testing model training.
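      For what it's worth, the "specific Python version" complaint is concrete: each PyTorch release supports only a window of CPython versions, and pip fails with a confusing "no matching distribution" error outside it. A minimal pre-flight sketch; the window used here (3.9 inclusive to 3.13 exclusive) is an assumption and shifts with each PyTorch release, so check the release notes for yours:

```python
import sys

# Assumed supported window -- NOT authoritative; the real range moves
# with each PyTorch release (consult its release notes).
SUPPORTED_MIN = (3, 9)
SUPPORTED_MAX = (3, 13)  # exclusive

def python_version_ok(major=sys.version_info.major,
                      minor=sys.version_info.minor):
    """Return True if (major, minor) falls inside the assumed window."""
    return SUPPORTED_MIN <= (major, minor) < SUPPORTED_MAX

if __name__ == "__main__":
    verdict = "ok" if python_version_ok() else "outside assumed window"
    print(f"Python {sys.version_info.major}.{sys.version_info.minor}: {verdict}")
```

      Running this before `pip install torch` turns a cryptic resolver error into an immediate, readable diagnosis.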


    • I suspect the OP's issues are mostly with the ROCm build of PyTorch. AMD still can't get this right.
