Comment by Lapel2742
2 days ago
>Only issue I have found with llama.cpp is trying to get it working with my amd GPU.
I had no problems with ROCm 6.x, but I couldn't get it to run with ROCm 7.x. I switched to Vulkan, and the performance seems OK for my use cases.