Comment by buyucu
6 months ago
llama.cpp already supports Vulkan; that is where all the hard work happens. Ollama hardly needs to do anything on top of it to support Vulkan: just check whether the Vulkan libraries are available and query the available VRAM. That is all. It is very simple.
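For illustration, here is a minimal C sketch of roughly what that check amounts to against the raw Vulkan API (this is not Ollama's or llama.cpp's actual code, just an assumption about what "check availability and get VRAM" boils down to): creating an instance confirms a working loader/driver, and summing each device's device-local memory heaps gives the usable VRAM.

```c
#include <stdio.h>
#include <stdint.h>
#include <vulkan/vulkan.h>

int main(void) {
    /* "Check if the libraries are available": vkCreateInstance fails
       if no usable Vulkan loader/driver exists on this machine. */
    VkApplicationInfo app = {
        .sType = VK_STRUCTURE_TYPE_APPLICATION_INFO,
        .pApplicationName = "vram-probe",   /* hypothetical name */
        .apiVersion = VK_API_VERSION_1_1,
    };
    VkInstanceCreateInfo info = {
        .sType = VK_STRUCTURE_TYPE_INSTANCE_CREATE_INFO,
        .pApplicationInfo = &app,
    };
    VkInstance instance;
    if (vkCreateInstance(&info, NULL, &instance) != VK_SUCCESS) {
        fprintf(stderr, "Vulkan not available\n");
        return 1;
    }

    /* "Get the available VRAM": enumerate physical devices and sum
       their device-local memory heaps. */
    uint32_t count = 0;
    vkEnumeratePhysicalDevices(instance, &count, NULL);
    VkPhysicalDevice devices[16];
    if (count > 16) count = 16;
    vkEnumeratePhysicalDevices(instance, &count, devices);

    for (uint32_t i = 0; i < count; i++) {
        VkPhysicalDeviceProperties props;
        vkGetPhysicalDeviceProperties(devices[i], &props);

        VkPhysicalDeviceMemoryProperties mem;
        vkGetPhysicalDeviceMemoryProperties(devices[i], &mem);

        VkDeviceSize vram = 0;
        for (uint32_t h = 0; h < mem.memoryHeapCount; h++)
            if (mem.memoryHeaps[h].flags & VK_MEMORY_HEAP_DEVICE_LOCAL_BIT)
                vram += mem.memoryHeaps[h].size;

        printf("%s: %llu MiB device-local\n", props.deviceName,
               (unsigned long long)(vram / (1024 * 1024)));
    }

    vkDestroyInstance(instance, NULL);
    return 0;
}
```

On integrated GPUs the "device-local" heap may be shared system RAM, so a real backend would likely need extra heuristics beyond this sum.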