Comment by taneq

14 days ago

Unless something's changed, you will need the whole model on the HPU anyway, no? So way beyond a 4090 regardless.

You can still offload most of the model to RAM and use the GPU for compute, but it's obviously much slower than it would be if everything fit in GPU memory.

see ktransformers: https://www.reddit.com/r/LocalLLaMA/comments/1jpi0n9/ktransf...
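
Not ktransformers specifically, but here's a minimal sketch of the same offload-to-RAM idea using Hugging Face transformers with accelerate (the model id is a placeholder, not a real checkpoint); device_map="auto" lets accelerate split layers across GPU VRAM, CPU RAM, and disk:

    from transformers import AutoModelForCausalLM, AutoTokenizer

    model_id = "some-org/some-large-model"  # placeholder

    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(
        model_id,
        device_map="auto",         # split layers across GPU VRAM and CPU RAM
        offload_folder="offload",  # spill anything that doesn't fit in RAM to disk
        torch_dtype="auto",
    )

    inputs = tokenizer("Hello", return_tensors="pt").to(model.device)
    out = model.generate(**inputs, max_new_tokens=32)
    print(tokenizer.decode(out[0], skip_special_tokens=True))

The offloaded weights get streamed to the GPU as they're needed, which is why this works at all but is so much slower than keeping everything in VRAM.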

A Habana just for inference? Are you sure?

Also, I see the 4-bit quants put it at an H100, which is fine ... I've got those at work. Maybe there will be distilled versions for running at home.
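
Rough back-of-envelope for the 4-bit claim (the parameter count here is purely illustrative, since the thread doesn't name the exact model):

    def quantized_weight_gb(params_billion: float, bits: float = 4.0) -> float:
        # Approximate weight footprint in GB: params * bits/8 bytes,
        # ignoring KV cache, activations, and quantization metadata.
        return params_billion * 1e9 * (bits / 8) / 1e9

    # Illustrative only: a hypothetical 120B-parameter model at 4 bits
    # needs ~60 GB for weights alone, which fits an 80 GB H100
    # but not a 24 GB 4090.
    print(quantized_weight_gb(120))  # -> 60.0

Once you add KV cache and activation memory on top, the headroom even on an 80 GB card shrinks further, which is why a home GPU only becomes realistic with smaller distilled variants.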