lostmsu 5 days ago: You don't have to install the whole CUDA toolkit. They have a redistributable.

tom_0 3 days ago: Oh, I can't believe I missed that! That makes whisper.cpp and llama.cpp valid options if the user has Nvidia, thanks.

lostmsu 3 days ago: Whisper.cpp and llama.cpp also work with Vulkan.

tom_0 2 days ago: Yeah, I researched this and I completely missed that part. In my defense, I last looked into this in 2023, which is ages ago :) Local models seem to be getting much more mature.
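For reference, a minimal sketch of what enabling the Vulkan backend looks like when building llama.cpp (whisper.cpp uses the same ggml-level CMake option). Flag names have changed across versions (older releases used `LLAMA_VULKAN`), so check the project's build docs for the version you have; this assumes the Vulkan SDK is already installed:

```shell
# Build llama.cpp with the Vulkan backend (no CUDA toolkit needed)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release

# whisper.cpp takes the same option:
#   cmake -B build -DGGML_VULKAN=ON && cmake --build build --config Release
```

For the CUDA path, the analogous option is `-DGGML_CUDA=ON`, and only the redistributable runtime libraries need to ship alongside the binary, not the full toolkit.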