Comment by randomNumber7
7 days ago
Once you have enough GPUs to hold the whole model in GPU RAM, you can do inference pretty fast.
And once you have enough users, you can keep those GPUs running at high load constantly, while a home setup would idle most of the time and therefore be far too expensive relative to the value it delivers. A rough sketch of that arithmetic is below.
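A minimal back-of-the-envelope sketch of the utilization point. All figures here (hardware cost per hour, throughput, utilization rates) are made-up assumptions for illustration, not numbers from the comment:

```python
# Amortized cost per token scales inversely with utilization:
# the hardware costs the same per hour whether it is busy or idle.

def cost_per_million_tokens(hw_cost_per_hour: float,
                            tokens_per_second: float,
                            utilization: float) -> float:
    """Amortized cost per 1M tokens for a rig at a given duty cycle."""
    tokens_per_hour = tokens_per_second * 3600 * utilization
    return hw_cost_per_hour / tokens_per_hour * 1_000_000

# Assumed figures: $2/hr amortized hardware + power, 1000 tok/s when busy.
datacenter = cost_per_million_tokens(2.0, 1000, utilization=0.70)  # many users, batched
home_rig   = cost_per_million_tokens(2.0, 1000, utilization=0.02)  # idles most of the day

print(f"datacenter: ${datacenter:.2f} per 1M tokens")  # ~$0.79
print(f"home rig:   ${home_rig:.2f} per 1M tokens")    # ~$27.78
```

Under these assumed numbers, the same hardware is roughly 35x cheaper per token when kept busy, which is the core of the argument.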