
Comment by bdhcuidbebe

6 months ago

For Intel, OpenVINO should be the preferred route. I don't follow AMD, but Vulkan is just the common denominator here.

If you support Vulkan, you support almost every GPU out there in the consumer market across all hardware vendors. It's an amazing fallback option.

I agree they should also support OpenVINO, but compared to Vulkan, OpenVINO is a tiny market.

  • I made an argument for performance, not for compatibility.

    If you run your local LLM in the least performant way possible on your overly expensive GPU, then you are not getting value out of your purchase.

    Vulkan is just a fallback option, that's all.

    I even see people running on their CPU because some apps don't support their hardware, and llama.cpp even makes that possible. It is still a really bad idea (the rough numbers sketched below show why).

    It just goes to show there's still much to do.
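
    For intuition, a minimal back-of-the-envelope sketch: token generation for a local LLM is usually memory-bandwidth bound, so the bandwidth of whatever is holding the weights puts a hard ceiling on tokens per second. The model size and bandwidth figures below are illustrative assumptions, not measurements.

    ```python
    # Rough upper bound on token generation speed, assuming the workload is
    # memory-bandwidth bound and every generated token streams all weights once.
    # All numbers are illustrative assumptions, not benchmarks.

    MODEL_BYTES = 4e9  # ~7B-parameter model at 4-bit quantization, roughly 4 GB of weights

    configs = {
        "dual-channel DDR5 (CPU)": 80e9,   # ~80 GB/s system memory bandwidth
        "mid-range GPU (GDDR6)": 500e9,    # ~500 GB/s VRAM bandwidth
    }

    for name, bandwidth_bytes_per_sec in configs.items():
        tokens_per_sec = bandwidth_bytes_per_sec / MODEL_BYTES
        print(f"{name}: ~{tokens_per_sec:.0f} tokens/s upper bound")
    ```

    Under those assumptions the CPU tops out around 20 tokens/s while the GPU allows well over 100, which is why falling back to CPU (or to a backend that leaves the GPU underutilized) wastes the hardware.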

    • I'm willing to bet that Vulkan will outperform OpenVINO.

      Vulkan is the API right now in the graphics world. It's very well supported and actively being improved on. Everyone is pouring resources into making Vulkan better.

      OpenVINO feels barely developed. Intel never made it a proper backend for PyTorch the way AMD did with ROCm. It's hard to see where it is going, or if it is going anywhere at all. Between SYCL and oneAPI, it's unclear how much interest Intel has in developing it.

      2 replies →