
Comment by infecto

1 day ago

Sure, but we were talking about gaming rigs to run models locally. You are describing some extreme edge cases: folks who keep 24/7 workloads running on gaming rigs at home.

> Sure, but we were talking about gaming rigs to run models locally. You are describing some extreme edge cases: folks who keep 24/7 workloads running on gaming rigs at home.

In that scenario the case for the rented-hardware model is even weaker: if you're going to have a gaming rig anyway, you're only paying a little extra on top for a GPU with more VRAM, not the full cost of the rig.

The comparison is then the extra cost of a 24GB GPU over a standard gaming-rig GPU (12GB? 8GB?) versus the cost of renting a GPU whenever you need it.
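To make that concrete, here is a rough break-even sketch. The GPU premium and rental rate below are illustrative assumptions, not figures from this thread; plug in your own numbers.

```python
# Break-even for "buy a bigger GPU upfront" vs "rent a GPU on demand".
# Both figures are assumed placeholders, not numbers from the thread.
GPU_PREMIUM_USD = 500.0      # assumed extra cost of a 24GB card over a 12GB card
RENTAL_USD_PER_HOUR = 0.50   # assumed on-demand rate for a comparable 24GB GPU

break_even_hours = GPU_PREMIUM_USD / RENTAL_USD_PER_HOUR
print(f"Buying beats renting after ~{break_even_hours:.0f} rented GPU-hours")
# With these assumptions: roughly 1,000 hours of rented GPU time.
```

The shape of the answer matters more than the exact numbers: the premium is a one-off cost, so the question is simply how many rented hours it takes to cover it.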

  • Honestly not sure what you are talking about.

    I could either spend $20 a month on my Cursor license.

    Or

    Spend $2k+ upfront to build a machine to run models locally. Pay for the electricity and the time to set up both the machine and the software.

    • > Spend $2k+ upfront to build a machine to run models locally.

      You said this was in the context of a gaming rig. You're not spending an extra $2k on your gaming rig to run models locally.

      If you're building a dedicated LLM machine OR you're using less compute than you're paying the provider for, then, yup, $20/month is cheaper.

      When you start using the model more, or if you're already building a gaming rig, then it's going to be cheaper to self-host; a rough break-even sketch follows below.
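As an illustration of that break-even, here is a sketch of the $20/month subscription versus self-hosting on a rig you are building anyway. Every figure other than the $20/month is an assumed placeholder (GPU premium, power draw, electricity rate, usage hours), since the thread only gives the $20/month and $2k+ numbers.

```python
# Subscription vs. incremental self-hosting cost on an existing gaming rig.
# All values except the $20/month are assumptions for illustration only.
SUBSCRIPTION_USD_PER_MONTH = 20.0  # the Cursor-style plan discussed above
GPU_UPGRADE_PREMIUM_USD = 500.0    # assumed extra spend on a larger-VRAM card
POWER_DRAW_KW = 0.35               # assumed average draw while generating
ELECTRICITY_USD_PER_KWH = 0.15     # assumed local electricity rate
USAGE_HOURS_PER_MONTH = 60         # assumed hours of active inference per month

monthly_power = POWER_DRAW_KW * USAGE_HOURS_PER_MONTH * ELECTRICITY_USD_PER_KWH
monthly_saving = SUBSCRIPTION_USD_PER_MONTH - monthly_power
months_to_break_even = GPU_UPGRADE_PREMIUM_USD / monthly_saving

print(f"Electricity: ~${monthly_power:.2f}/month")
print(f"GPU premium recovered after ~{months_to_break_even:.0f} months")
# With these assumptions: ~$3.15/month in power, break-even around month 30.
```

If the machine also serves its gaming purpose, only the VRAM premium and the extra power count against the subscription, which is why the break-even can land within a card's useful lifetime; change the assumptions and the conclusion moves with them.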
