
Comment by Rebelgecko

10 hours ago

It seems to support connecting to your own LLM on the same LAN

The point is the agent is still the LLM. No LLM, no agent.

  • LLMs are not agents. LLMs are language models that simply respond to a text prompt with a textual response. Agents are middleware that take input from the user and then use LLMs to drive tools.
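    The "middleware" distinction above can be sketched in a few lines: the agent is just a loop that feeds the conversation to an LLM and executes whatever tool calls the model asks for. The LLM here is a stub (`fake_llm`), and the tool name and `CALL`/`TOOL_RESULT` message format are made up for illustration, not any real agent's protocol.

    ```python
    def calculator(expression: str) -> str:
        """A toy tool the agent can invoke on the model's behalf."""
        return str(eval(expression))  # fine for a trusted demo expression

    TOOLS = {"calculator": calculator}

    def fake_llm(messages):
        """Stub LLM: requests a tool once, then answers with its result."""
        last = messages[-1]["content"]
        if last.startswith("TOOL_RESULT:"):
            return "The answer is " + last.split(":", 1)[1]
        return "CALL calculator 2+3"

    def run_agent(user_input: str) -> str:
        # The agent loop: no intelligence of its own, it only routes
        # text between the user, the LLM, and the tools.
        messages = [{"role": "user", "content": user_input}]
        while True:
            reply = fake_llm(messages)
            if reply.startswith("CALL "):      # model wants a tool run
                _, name, arg = reply.split(" ", 2)
                result = TOOLS[name](arg)
                messages.append({"role": "user",
                                 "content": f"TOOL_RESULT:{result}"})
            else:                              # final answer for the user
                return reply

    print(run_agent("What is 2+3?"))  # → The answer is 5
    ```

    Swap the stub for a real model endpoint and the loop is unchanged, which is the point: no LLM, no agent.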

I tried connecting OpenClaw to ollama with a V100 running qwen3.5:35b but it was really, really, really slow (despite ollama itself feeling fairly fast).

These "claw" agents multiply the tokens consumed per request by an obscenely large factor compared with a single direct prompt.
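    One reason for the multiplication: an agent loop typically re-sends the whole conversation on every tool-call turn, so prompt tokens grow roughly quadratically with the number of turns. The numbers below are made-up assumptions for illustration, not measurements of any particular agent.

    ```python
    # Illustrative arithmetic: turn k re-sends all k prior messages plus
    # the new one, so total prompt tokens are a triangular-number sum.

    def total_prompt_tokens(turns: int, tokens_per_turn: int) -> int:
        return sum(k * tokens_per_turn for k in range(1, turns + 1))

    one_shot = 500                           # assumed tokens for one direct request
    agentic = total_prompt_tokens(20, 500)   # assumed 20-turn agent loop
    print(agentic, agentic // one_shot)      # → 105000 210
    ```

    Under these assumed numbers, a 20-turn loop costs ~210x the prompt tokens of asking once, before counting any output tokens.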

  • I recently decided to get into this ocean-boiling game too; the 32GB V100 seems like pretty good VRAM/$. If I may ask, do you make any special accommodations for cooling? I've never dealt with a passively cooled card before, and I'm curious whether my workstation fans (HP Z840) will be sufficient. I'm going to try 2 cards at first, but I think I might be able to squeeze a third in there.

    • No. I have a Titan V CEO Edition, which is basically a 32GB V100 but with full active fan cooling, which I'm finding works just fine.
