Comment by mk89

1 day ago

I agree with this comment here.

For me the main BIG deal is that cloud models have online search etc. embedded, while this one doesn't.

However, if you don't need that (e.g., translating, summarizing text, writing code), it's probably good enough.

So long as the local model supports tool use, I haven't had issues getting it to use web search and the like in open-webui. Frontier models will just be smarter about knowing when to use tools.

> For me the main BIG deal is that cloud models have online search etc. embedded, while this one doesn't.

Models do not have online search embedded; they have tool-use capabilities (possibly with specialized training for a web search tool). That's true of many open and weights-available models, and they are run with harnesses that support tools and provide a web search tool (LM Studio is such a harness, and it can easily be supplied with a web search tool).
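For what it's worth, here's a minimal sketch of what that division of labor looks like against LM Studio's OpenAI-compatible server (it defaults to http://localhost:1234/v1). The search_web tool and the model name are placeholders I made up, not LM Studio APIs; the point is just that the harness, not the model, supplies the search capability:

```python
# Hedged sketch: supplying a web search tool to a local model through
# LM Studio's OpenAI-compatible server (defaults to localhost:1234).
# "search_web" and the model name are placeholders, not LM Studio APIs.
from openai import OpenAI

client = OpenAI(base_url="http://localhost:1234/v1", api_key="lm-studio")

tools = [{
    "type": "function",
    "function": {
        "name": "search_web",  # hypothetical tool you implement yourself
        "description": "Search the web and return result snippets.",
        "parameters": {
            "type": "object",
            "properties": {"query": {"type": "string"}},
            "required": ["query"],
        },
    },
}]

resp = client.chat.completions.create(
    model="your-loaded-model",  # whatever model LM Studio has loaded
    messages=[{"role": "user", "content": "What's new in Python 3.13?"}],
    tools=tools,
)

msg = resp.choices[0].message
if msg.tool_calls:
    # The model asked to search; your harness runs the search and
    # feeds the results back in a follow-up "tool" message.
    print("tool call:", msg.tool_calls[0].function.arguments)
```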

You can do web searches in LM Studio: just connect an MCP server that does it. SerpApi has one, for example.
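As a rough idea, the mcp.json entry would look something like this (the serpapi-mcp package name and env var are my guesses; check SerpApi's docs for the actual values):

```json
{
  "mcpServers": {
    "serpapi": {
      "command": "npx",
      "args": ["-y", "serpapi-mcp"],
      "env": { "SERPAPI_API_KEY": "<your-key>" }
    }
  }
}
```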

  • Also, I ran several experiments where I was only interested in 5 to 10 websites with application-specific information, so it works nicely in a fast dev loop to spider them, keep a local index, and then get very low search latency (see the sketch below). Obviously this is not a general solution, but it is nice for some use cases.
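In case it's useful, here's a toy version of that setup using only the Python standard library: spider a few known pages into a local SQLite FTS5 index, then query it without any network round trip. The URLs are placeholders for whatever sites you care about:

```python
# Toy version: spider a handful of known sites into a local SQLite
# FTS5 index, then query it with millisecond latency. Standard library
# only; the URLs below are placeholders for your 5-10 sites of interest.
import re
import sqlite3
import urllib.request

SEED_URLS = [
    "https://example.com/docs/a",
    "https://example.com/docs/b",
]

def fetch_text(url: str) -> str:
    with urllib.request.urlopen(url, timeout=10) as resp:
        html = resp.read().decode("utf-8", errors="replace")
    return re.sub(r"<[^>]+>", " ", html)  # crude tag stripping

db = sqlite3.connect("local_index.db")
db.execute("CREATE VIRTUAL TABLE IF NOT EXISTS pages USING fts5(url, body)")
for url in SEED_URLS:
    db.execute("INSERT INTO pages VALUES (?, ?)", (url, fetch_text(url)))
db.commit()

# Ranked full-text query, entirely local; no network round trip.
query = "tool use"
for url, snippet in db.execute(
    "SELECT url, snippet(pages, 1, '[', ']', '...', 12) "
    "FROM pages WHERE pages MATCH ? ORDER BY rank", (query,)
):
    print(url, snippet)
```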