
Comment by kristopolous

2 hours ago

I tried a few of these ... they are pretty slow. If you're looking for free inference you'd have to be pretty desperate.

example:

$ OLLAMA_HOST=http://47.101.61.248:9000/ ollama run gemma3:27b "outline ww2"

Many appear to be proxies. I'm familiar with some "serverless" architectures that do things like this: https://www.shodan.io/host/34.255.41.58 ... you can see this host has a bunch of ollama ports running really, really old versions.
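
If you want to poke at one yourself, the stock Ollama HTTP API will tell you what you're dealing with. Quick sketch against the host from my example above (/api/version and /api/tags are the standard endpoints; needs jq):

$ curl -s http://47.101.61.248:9000/api/version

$ curl -s http://47.101.61.248:9000/api/tags | jq -r '.models[].name'

The first returns a JSON version string, the second lists whatever models the box has pulled.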

You can pull down "new" manifests, but very few of these ollamas are new enough to run decent modern models like glm-4.7-flash. The free tier for kimi-k2.5:cloud is going to be far more useful than pasting these into your OLLAMA_HOST variable.
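
If you really did want to sift through a pile of these, filtering on version first saves a lot of time. Rough sketch, assuming a hosts.txt of host:port lines (hypothetical file, same standard endpoint as above):

$ while read -r h; do
    echo "$h $(curl -s -m 5 "http://$h/api/version" | jq -r .version)"
  done < hosts.txt | sort -k2 -V

Dead or non-ollama hosts just print a blank version and sort to the top.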

I think the real headline is: "thousands of slow machines running mediocre small models from last year. Totally open..."

Anyways, if codellama:13b is your jam, go wild I guess.