
Comment by spoaceman7777

7 hours ago

Free, downloadable AI models have consistently caught up to ChatGPT within 3 months, for almost a year now.

I highly encourage you to go and update your priors.

And how much does the hardware cost to run said models?

  • It can be quite expensive to get the models and machines to do this.

    That's what the money pays for when the comment above mentions 'that you might have to eventually pay an AI company a large amount of money to ask ChatGPT such a question'.

    Putting aside that it won't be a large amount of money for any particular query, that's how the AI companies see themselves: not as providers of information, but as providers of mechanisms that provide information. They are not selling the information of others; they aren't selling information at all. They are selling the service of running the mechanism.

  • You can run them slowly on any machine that has enough memory.

    • And, to bolster your comment, you can still use this machine as your daily driver.

      I'm always going to have a machine anyway—might as well max out the RAM when I purchase another.

      (And so too I jumped on the Mac mini bandwagon a month or two back—64 GB. I'm enjoying pulling down the new models and putting them through their paces.)
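
      The "enough memory" question above can be checked with back-of-the-envelope arithmetic: weights dominate, at (parameters × bits per weight ÷ 8) bytes, plus some headroom for the KV cache and runtime. A rough sketch (the 20% overhead factor is an assumption, not a measurement):

      ```python
      def model_memory_gb(params_billions, bits_per_weight, overhead=1.2):
          """Rough RAM estimate for running a quantized model locally.

          Counts weight storage only, then pads by ~20% for KV cache
          and runtime overhead (an assumed fudge factor).
          """
          weight_bytes = params_billions * 1e9 * bits_per_weight / 8
          return weight_bytes / 2**30 * overhead

      # A 70B model quantized to 4 bits -> about 39 GB, so it fits
      # (snugly) on a 64 GB Mac mini; a 7B model needs about 4 GB.
      print(model_memory_gb(70, 4))
      print(model_memory_gb(7, 4))
      ```

      This is why the 64 GB configuration matters: at 4-bit quantization it comfortably holds models in the 30-70B range that a 16 GB machine cannot.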

  • How good do you want it to be? For something close to today's ChatGPT (April 2026), you're still looking at a system with 7x H200s plus chassis, which will run you about $300K, or a GB200 NVL72, which is $2-3 million. OTOH, a quantized Qwen3.6 model can be run on $10,000 (high-end Mac) or $1,000 (Mac mini) worth of hardware. Even a Pixel 10 Pro phone ($1,000) can run useful models locally.

    • Go to OpenRouter, send your own investigative prompt—one that meets your needs—to all the top open models. See how they do. Then note whether you can run any of those locally. Repeat at least once a month.
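
      That comparison loop can be scripted against OpenRouter's OpenAI-compatible chat completions endpoint. A minimal sketch—the model IDs are illustrative and go stale fast, so check the site for the current top open models:

      ```python
      import json
      import os
      import urllib.request

      # Illustrative model IDs; swap in whatever tops the leaderboard this month.
      OPEN_MODELS = [
          "meta-llama/llama-3.1-70b-instruct",
          "qwen/qwen-2.5-72b-instruct",
      ]

      API_URL = "https://openrouter.ai/api/v1/chat/completions"

      def build_request(model, prompt):
          """Build one OpenAI-style chat completion payload."""
          return {
              "model": model,
              "messages": [{"role": "user", "content": prompt}],
          }

      def ask_all(prompt, api_key):
          """Send the same prompt to every model and collect the replies."""
          replies = {}
          for model in OPEN_MODELS:
              req = urllib.request.Request(
                  API_URL,
                  data=json.dumps(build_request(model, prompt)).encode(),
                  headers={
                      "Authorization": f"Bearer {api_key}",
                      "Content-Type": "application/json",
                  },
              )
              with urllib.request.urlopen(req) as resp:
                  body = json.load(resp)
              replies[model] = body["choices"][0]["message"]["content"]
          return replies

      if __name__ == "__main__":
          answers = ask_all("Your investigative prompt here",
                            os.environ["OPENROUTER_API_KEY"])
          for model, text in answers.items():
              print(f"=== {model} ===\n{text}\n")
      ```

      Rerunning the same prompt monthly gives you a cheap, personal benchmark—when a model that satisfies you also fits your local hardware, you can drop the subscription.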
