Comment by thih9

21 hours ago

How much does it cost to run these?

I see mentions of Claude and I assume all of these tools connect to a third party LLM api. I wish these could be run locally too.

You can run openclaw locally against Ollama if you want. But the models that are distilled or quantized enough to run on consumer hardware can be considerably lower quality than the full models.

  • They're also more vulnerable to prompt injection than the frontier models, which are themselves still vulnerable, just less so.
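I can't speak to openclaw's own config, but Ollama exposes a simple local HTTP API that agent tools typically point at. A minimal sketch of talking to it (the endpoint is Ollama's documented default; the model tag `llama3` is just an example of something you'd have pulled locally):

```python
import json
import urllib.request

# Ollama's default local API endpoint; adjust if you changed its bind address.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a POST request for Ollama's /api/generate endpoint.

    No network I/O happens here; pass the result to urllib.request.urlopen()
    against a running `ollama serve` to actually get a completion.
    """
    payload = json.dumps({
        "model": model,    # a quantized model tag you have pulled locally
        "prompt": prompt,
        "stream": False,   # return one JSON object instead of a token stream
    }).encode("utf-8")
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload,
        headers={"Content-Type": "application/json"},
    )

req = build_request("llama3", "Summarize this repo in one sentence.")
# With Ollama running: json.load(urllib.request.urlopen(req))["response"]
```

Everything stays on your machine, which is the appeal; the quality caveats above still apply to whatever model you point it at.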

You need very high-end hardware to run the largest SOTA open models at reasonable latency for real-time use. The minimum requirements are quite low, but then responses become so slow that your agent effectively can't browse the web or use many external services.

A ~$3k Ryzen AI Max PC with 128GB of unified RAM is said to run this reasonably well. But don't quote me on that.
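For a rough sense of what fits in that 128GB: weight memory is roughly parameter count times bits per weight divided by eight. This is a back-of-envelope that ignores KV cache, activations, and runtime overhead, so treat it as a floor, not a spec:

```python
def model_memory_gb(params_billions: float, bits_per_weight: float) -> float:
    """Approximate weight-only memory footprint in GB (decimal)."""
    # params * (bits_per_weight / 8) bytes; the 1e9 factors cancel out for GB.
    return params_billions * bits_per_weight / 8

# A 70B-parameter model quantized to 4 bits needs ~35 GB for weights alone,
# which leaves headroom in 128 GB of unified RAM for KV cache and the OS.
print(model_memory_gb(70, 4))  # 35.0
```

The same model at full 16-bit precision would need ~140 GB for weights alone, which is why quantization is what makes consumer unified-memory boxes viable at all.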