Comment by chrisischris
12 hours ago
I was hitting Claude Code's rate limit pretty often while paying for their max subscription. Started thinking – I've got a decent GPU sitting at home doing nothing most of the day.
So I'm building a distributed AI inference platform where you can run models on your own hardware and access it from anywhere privately. Keep your data on infrastructure you control, but also leverage a credit system to tap into more powerful compute when you need it. Your idle GPU time can earn credits for accessing bigger models. The goal is making it dead simple to use your home hardware from wherever you're working.
It's for anyone who wants infrastructure optionality: developers who don't want vendor lock-in, businesses with compliance requirements, or just people who don't want their data sent to third parties.
Get notified when we launch: https://sporeintel.com
I don't want to get into blockchain shenanigans, but I did love the Folding@home model for democratized compute. Could spare cycles on GPUs at home be used for a P2P network of inferencing?
Yeah! Not building a blockchain, but we are building a system similar to Folding@home (also a fan of that): if you choose to have your node process other people's requests, you earn credits that you can then spend on other people's nodes on the network. That way you can still use models that may be too big for your own hardware when you need them. The overall goal is to prevent people from getting priced out of AI and to provide an alternative way of accessing these larger models.
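The credit exchange described above could be sketched roughly like this. Everything here is hypothetical (the class, the per-GPU-second rate, the node names are all invented for illustration, not Spore's actual design): serving someone else's job earns credits, and running a job on someone else's node spends them.

```python
# Hypothetical credit ledger: credits flow from the consumer of a job
# to the node that served it. Names and the flat rate are invented.

class CreditLedger:
    def __init__(self):
        self.balances = {}  # node id -> credit balance

    def balance(self, node_id):
        return self.balances.get(node_id, 0)

    def record_job(self, provider, consumer, gpu_seconds, rate=1):
        """Transfer credits from the consumer to the node that served the job."""
        cost = gpu_seconds * rate
        if self.balance(consumer) < cost:
            raise ValueError("insufficient credits")
        self.balances[consumer] = self.balance(consumer) - cost
        self.balances[provider] = self.balance(provider) + cost
        return cost


ledger = CreditLedger()
ledger.balances["bob"] = 10                                   # bob starts with 10 credits
ledger.record_job(provider="alice", consumer="bob", gpu_seconds=4)
# alice served bob's request, so she now holds 4 credits she can
# later spend on a bigger node than her own hardware can run.
```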
Couldn't you do that with Ollama + Tailscale? Minus some of the fancy bells and whistles?
Yeah, that's the best current solution out there, and it would work OK for one person with one computer. But the average person isn't going to set that up, and it isn't easy to use from any device just by logging into a website or app. And if you had multiple machines, you'd have to manually load balance by switching between them. Spore will let you easily set up as many nodes running models as you want and manage what models they're serving from anywhere, which makes it simple to serve an organization or your family, for example.
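The "manually load balance by switching between them" problem above is the part a scheduler would remove. A minimal sketch of what automatic node selection could look like, assuming an invented registry of which nodes serve which models (none of these names or APIs are Spore's actual implementation):

```python
from itertools import cycle

# Hypothetical node pool: each registered node advertises the models it
# serves, and requests are spread round-robin over the capable nodes.

class NodePool:
    def __init__(self):
        self.nodes = {}      # node name -> set of models it serves
        self._cursors = {}   # model -> cycling iterator over capable nodes

    def register(self, node, models):
        self.nodes[node] = set(models)
        self._cursors.clear()  # topology changed; rebuild cursors lazily

    def pick(self, model):
        """Return the next node that serves `model`, round-robin."""
        if model not in self._cursors:
            capable = sorted(n for n, m in self.nodes.items() if model in m)
            if not capable:
                raise LookupError(f"no node serves {model}")
            self._cursors[model] = cycle(capable)
        return next(self._cursors[model])


pool = NodePool()
pool.register("desktop", ["llama3:8b"])
pool.register("server", ["llama3:8b", "llama3:70b"])
# Requests for the small model alternate between the two machines;
# requests for the big model always land on the one node that fits it.
```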