
Comment by Fnoord

2 days ago

This requires a HAT.

See my other comment regarding efficiency of my Intel Xe iGPU.

Jetson is a different league though. Those can even run LLMs (though the 16 GB version was overpriced when I bought during covid, so I went for the 8 GB). Ollama Just Works (tm); compared to getting Ollama working with ROCm on my 6700 XT, however — that was frustrating.
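For reference, "Just Works" on the Jetson really is about two commands (model tag is just an example — pick something that fits in 8 GB):

```shell
# Install Ollama via the official script, then pull and run a small model.
curl -fsSL https://ollama.com/install.sh | sh
ollama run llama3.2:3b "Why is the sky blue?"
```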

So, object detection with TensorFlow works well with these Coral TPUs. However, you can forget about running Whisper.cpp on them.

One nice thing the Coral USB has going for it, though, is that it is USB. You can get it to work on practically any machine. Great for demos.

For old versions of Python, fire up a VM or an OCI container, or use a decent package manager like uv or pipx.
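With uv you don't even need the VM — it can pin an older interpreter per project (3.9 here is an assumption; check what the Coral wheels actually target):

```shell
# Fetch a standalone older CPython and create a venv pinned to it.
uv python install 3.9
uv venv --python 3.9 .venv
. .venv/bin/activate
```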

Using Ollama/llama.cpp with Vulkan is much easier than ROCm, and it works across more GPUs. I wish they'd merge the PR that adds it across the board :(
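Building llama.cpp itself with the Vulkan backend is a one-flag change (flag name per the llama.cpp build docs; model path is a placeholder):

```shell
# Build llama.cpp with Vulkan instead of ROCm/HIP -- works on any GPU
# with a working Vulkan driver (AMD, Intel, NVIDIA).
cmake -B build -DGGML_VULKAN=ON
cmake --build build --config Release -j
# Offload all layers to the GPU and run:
./build/bin/llama-cli -m model.gguf -ngl 99 -p "Hello"
```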