Comment by danielmarkbruce

5 hours ago

You wildly misunderstand pytorch.

What is there to misunderstand? It doesn't even install properly most of the time on my machine. You have to use a specific python version.

I gave up on all tools that depend on it for inference. llama-cpp compiles cleanly on my system for Vulkan. I want the same simplicity to test model training.

  • pytorch is as easy as you are going to find for your exact use case. If you can't handle the requirement of a specific version of python, you are going to struggle in software land. ChatGPT can show you the way.

    • I have been doing this for 25 years and no longer have the patience to deal with stuff like this. I am never going to install Arch from scratch by building the configuration by hand ever again. The same goes for pytorch and rocm.

      Getting them to work and recognize my GPU without passing arcane flags was a problem. I could at least avoid the pain with llama-cpp because of its vulkan support. pytorch apparently doesn't have a vulkan backend. So I decided to roll my own wgpu-py backend.
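
      To give a sense of what that looks like: a minimal sketch of a wgpu-py compute kernel, using the library's `compute_with_buffers` helper. The doubling shader here is purely illustrative, not part of any actual backend, and the code falls back to a CPU reference when no WebGPU adapter is available:

      ```python
      import struct

      # WGSL compute shader that doubles each input element (illustrative only).
      shader_source = """
      @group(0) @binding(0)
      var<storage,read> data1: array<f32>;
      @group(0) @binding(1)
      var<storage,read_write> data2: array<f32>;

      @compute
      @workgroup_size(1)
      fn main(@builtin(global_invocation_id) index: vec3<u32>) {
          let i = index.x;
          data2[i] = data1[i] * 2.0;
      }
      """

      n = 16
      values = [float(i) for i in range(n)]
      packed = struct.pack(f"{n}f", *values)

      try:
          # Requires wgpu-py and a working adapter (Vulkan, Metal, or DX12).
          from wgpu.utils.compute import compute_with_buffers

          # Binding 0 is the input buffer; binding 1 is n floats of output.
          out = compute_with_buffers({0: packed}, {1: (n, "f")}, shader_source, n=n)
          result = list(out[1].tolist())
      except Exception:
          # No wgpu / no GPU adapter available: CPU reference of the same kernel.
          result = [v * 2.0 for v in values]
      ```

      The appeal is that wgpu-py picks whatever native backend the machine exposes, so the same script runs on Vulkan without per-vendor flags.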
