
Comment by ekianjo

8 days ago

LM Studio is closed source

Are there any FOSS solutions that let you browse models and give you a guesstimate of whether you have enough VRAM to fully load the model? That's the only selling point of LM Studio for me.
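For rough napkin math, something like this covers most of what I use LM Studio's estimate for. It's only a sketch: the parameter count and bits-per-weight come off the model card, and the 1.2 overhead factor is just an assumed fudge for KV cache and runtime buffers, not anything exact.

```python
# Rough VRAM estimate for a quantized model; constants are assumptions, not exact.
def estimate_vram_gb(params_billion: float, bits_per_weight: float = 4.5,
                     overhead: float = 1.2) -> float:
    """params_billion: model size from the card (e.g. 7, 13, 70).
    bits_per_weight: ~4.5 for Q4_K_M, ~5.5 for Q5_K_M, 16 for fp16.
    overhead: fudge factor for KV cache and runtime buffers."""
    weight_bytes = params_billion * 1e9 * bits_per_weight / 8
    return weight_bytes * overhead / 1024**3

if __name__ == "__main__":
    for size in (7, 13, 70):
        print(f"{size}B @ ~Q4_K_M ≈ {estimate_vram_gb(size):.1f} GiB")
```

Compare the result against your free VRAM and you get roughly the same "will it fit" answer.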

Ollama's default context length is frustratingly short in the era of 100k+ context windows.
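At least the short default can be overridden per request through Ollama's local API. A minimal sketch, assuming the server is on the default port and the model (name here is just an example) has already been pulled:

```python
# Sketch: request a larger context window from a local Ollama server.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3.1",             # example model name, assumed already pulled
        "prompt": "Summarise this ...",
        "stream": False,
        "options": {"num_ctx": 32768},   # raise context from the small default
    },
    timeout=300,
)
print(resp.json()["response"])
```

It works, but having to remember to pass `num_ctx` everywhere is exactly the friction I'm complaining about.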

My solution so far has been to boot up LM Studio to check whether a model will run well on my machine, download the model manually from Hugging Face, run llama.cpp, and hook it up to open-webui. That's less than ideal, and it still means LM Studio's proprietary code has access to my machine specs.
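The manual half of that pipeline is at least scriptable. A rough sketch of what I do, with placeholder repo/file names, assuming a reasonably recent llama.cpp build that ships the `llama-server` binary:

```python
# Sketch: fetch a GGUF from Hugging Face, then launch llama.cpp's server
# so open-webui can point at it. Repo and filename are placeholders.
import subprocess
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(
    repo_id="some-org/some-model-GGUF",   # placeholder repo
    filename="some-model.Q4_K_M.gguf",    # placeholder quant file
)

subprocess.run([
    "llama-server",        # llama.cpp's OpenAI-compatible server
    "-m", model_path,
    "-c", "32768",         # context length
    "--port", "8080",      # open-webui gets pointed at this endpoint
])
```

What's missing is the LM Studio part: something FOSS that browses the model catalogue and does the VRAM check for me before I bother downloading anything.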