
Comment by dent9

7 days ago

You can get used RTX 3090s for $750-800 each. Pro tip: look for 2.5-slot models like the EVGA XC3 or the older blower cards. Two of those run about $1,600; add 128GB of DDR5 for ~$300, a Ryzen CPU like the 9900X, and a motherboard, case, and PSU to fill out the rest of the budget. If you want to skimp you can drop one of the GPUs (until you're sure you actually need 48GB of VRAM) and some of the RAM, but you really don't save that much. Just make sure you get a case that can fit multiple full-size GPUs and a motherboard that supports it as well; the slot configurations on the AM5 generation are pretty bad for multi-GPU, so you'll probably end up with something like the Asus ProArt.
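
Roughly, that parts list pencils out like this (back-of-envelope only; the GPU and RAM prices are the figures above, the CPU/board/case/PSU numbers are my own rough assumptions, not quotes):

    # Rough dual-RTX-3090 build, ballpark USD prices.
    # GPU and RAM figures are from the comment above; the rest are assumptions.
    parts = {
        "2x used RTX 3090 (2.5-slot, ~$800 each)": 1600,
        "128GB DDR5": 300,
        "Ryzen 9 9900X": 400,
        "AM5 board with workable slot spacing (e.g. ProArt)": 450,
        "Case + 1000W+ PSU": 350,
    }
    print(f"Total: ${sum(parts.values())}")  # ~$3,100 with these numbers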

Also, none of this is really worth the money, because it's simply not possible to run the same kinds of models you pay for online on a standard home system. Things like GPT-4o use more VRAM than you'll ever be able to scrounge up unless your budget is closer to $10,000-25,000+, think multiple RTX A6000 cards or similar. So ultimately you're better off just paying for the hosted services.
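
As a rough sanity check on the VRAM gap (back-of-envelope only: dense FP16 weights, ignoring KV cache and activations; the parameter counts are illustrative since the big providers don't publish theirs):

    # Weight memory in GB ~= params (billions) * bytes per parameter.
    def weights_gb(params_b: float, bytes_per_param: float = 2.0) -> float:
        return params_b * bytes_per_param

    for params_b in (13, 70, 400):
        print(f"{params_b}B params @ FP16: ~{weights_gb(params_b):.0f} GB")
    # 13B fits easily in 48GB; 70B only with aggressive quantization;
    # anything frontier-scale is nowhere close to fitting on two 3090s.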

I think this proves one of the suckpoints of AI: there are clearly certain things the smaller models should be fine at... but there don't seem to be frameworks or anything that constantly analyze, simulate, and evaluate what you could be doing with smaller and cheaper models.
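
To make that concrete, here's a minimal sketch of what such a framework could do: try the cheap local model first and only escalate when its answer fails a cheap check. Everything here is hypothetical; ask_local/ask_hosted/looks_good are illustrative stand-ins, not a real library:

    import re

    def ask_local(prompt: str) -> str:
        # Stand-in for a quantized 7B-13B model running on your own GPU.
        return "42" if "add" in prompt else "I'm not sure."

    def ask_hosted(prompt: str) -> str:
        # Stand-in for a paid hosted frontier model.
        return "hosted answer"

    def looks_good(answer: str) -> bool:
        # Cheap acceptance check; a real framework might use schema
        # validation, self-consistency, or a small judge model, and log
        # how often the local model turned out to be enough.
        return not re.search(r"not sure|can't|don't know", answer, re.I)

    def answer(prompt: str) -> str:
        draft = ask_local(prompt)
        if looks_good(draft):
            return draft           # the small local model was enough
        return ask_hosted(prompt)  # escalate and pay for the big model

    print(answer("add 21 and 21"))            # handled locally
    print(answer("explain quantum gravity"))  # escalates to hosted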

Of course the economics are completely at odds with any real engineering: nobody wants you to use smaller local models, and nobody wants you to consider cost or efficiency savings.

  • > but there don't seem to be frameworks or anything that constantly analyze, simulate, and evaluate what you could be doing with smaller and cheaper models

    This is more of a social problem. Read through r/LocalLlama every so often and you'll see how people are optimizing their usage.