Comment by syntaxing

1 day ago

I'm really excited about lmster and want to try it out. It's essentially what I want from Ollama. Ollama has deviated so much from their original core principles: it has been broken and slow to update model support, and there's this "vendor sync" (essentially updating ggml) that I've been waiting on for weeks.

LM Studio is great, but it's still not open source. I honestly wish something better than Ollama could be created, similar to LM Studio (at least its new CLI part, from what I can tell), as an open-source alternative.

I think I'm fairly technical, but I still prefer how simple Ollama is. I know all the complaints about Ollama; I'm really just wishing for a better alternative, for the most part.

Maybe just a direct layer on top of vllm or llama.cpp itself?

  • > Maybe just a direct layer on top of vllm

    My dream would be something like vLLM, but without all the Python mess, packaged as a single binary that has both an HTTP server and a desktop GUI, and can browse/download models. Llama.cpp is like 70% there, but there's a large performance difference between llama.cpp and vLLM for the models I use.
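
    For what it's worth, the HTTP half of that dream mostly exists today: llama.cpp's llama-server speaks an OpenAI-compatible API, so any standard client can talk to it. A rough sketch (the port and model name below are assumptions on my part; use whatever your server was started with):

         from openai import OpenAI

         # Sketch: talking to a local llama-server over its OpenAI-compatible endpoint.
         # The base_url/port and model name are assumptions, not fixed values.
         client = OpenAI(base_url="http://localhost:8080/v1", api_key="not-needed-locally")
         resp = client.chat.completions.create(
             model="local-model",
             messages=[{"role": "user", "content": "Say hello."}],
         )
         print(resp.choices[0].message.content)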

    • > My dream would be something like vLLM, but without all the Python mess, packaged as a single binary that has both an HTTP server and a desktop GUI, and can browse/download models. Llama.cpp is like 70% there, but there's a large performance difference between llama.cpp and vLLM for the models I use.

      To be honest, I had seen your comment multiple times, and after six hours something new suddenly clicked for me.

      I had seen this project on reddit once, https://github.com/GeeeekExplorer/nano-vllm

      It's almost as fast (from what I can tell in its readme, faster?) than vllm itself but unfortunately its written in python too.

      But the good news is that its codebase is much smaller overall. Let me paste some things from its readme:

           Fast offline inference - Comparable inference speeds to vLLM
           Readable codebase - Clean implementation in ~ 1,200 lines of Python code
           Optimization Suite - Prefix caching, Tensor Parallelism, Torch compilation, CUDA graph, etc.
      
      

           Inference Engine   Output Tokens   Time (s)   Throughput (tokens/s)
           vLLM               133,966         98.37      1361.84
           Nano-vLLM          133,966         93.41      1434.13
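
      From what I can tell, the offline API it mirrors is vLLM's own LLM/SamplingParams interface, which is tiny, so the surface a port has to cover is small. A minimal sketch of that interface (the model path and sampling values are placeholders, not something from the readme):

           from vllm import LLM, SamplingParams

           # Minimal offline-inference sketch against vLLM's API, which nano-vllm mirrors.
           # The model path is a placeholder.
           llm = LLM(model="/path/to/weights")
           params = SamplingParams(temperature=0.6, max_tokens=256)
           outputs = llm.generate(["Hello, Nano-vLLM."], params)
           print(outputs[0].outputs[0].text)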

      So I'm pretty sure you could one-agent-one-human it from Python to Rust/Go! It could be an open project.

      Also, speaking of oaoh (as I have started calling it), a bit off-topic, but my Go port ran into multiple issues when I tried to get it working today. I do feel like Rust was a good language choice, because quite frankly the AI agent, instead of wanting to do things with its own hands, really ends up wanting/wishing to use the Fyne library. The best success I had going against Fyne was in Kimi's computer use, where I got a very, very simple (only plain text, nothing else) PNG-file-esque thing working.

      If you are interested, emsh: given that your oaoh project is really high quality, I am quite frankly curious whether it still requires human intervention or whether an AI could port it by itself, because I have mixed feelings about it.

      Honestly, it's an open challenge to everybody. I am just really interested in learning something about how LLMs work, and taking some lessons away from this whole thing, I guess.

      Still trying to create the golang port as we speak haha xD.

What was the original core principle of ollama?

I had used oobabooga back in the day and found ollama unnecessary.

  • > What was the original core principle of ollama?

    One decision that was/is very integral to their architecture is trying to copy how Docker handled registries and storage of blobs. Docker images have layers, so the registry could store one layer that is reused across multiple images, as one example.

    Ollama did this too, but I'm unsure why. I know the author used to work at Docker, but almost no weight data can actually be shared that way, so instead of just storing "$model-name.safetensors/.gguf" on disk, Ollama splits it up into blobs, has its own index, and so on, for seemingly no gain except making it impossible to share weights between multiple applications.
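
    To make the blob thing concrete, here's a rough sketch of what it takes just to find the actual GGUF file inside Ollama's store; the paths, tag and mediaType string are from memory and may differ between versions, so treat them as assumptions:

         import json, pathlib

         # Rough sketch: resolve the real weights blob from Ollama's content-addressed store.
         # Store layout and mediaType are assumptions from memory; "llama3"/"latest" are placeholders.
         store = pathlib.Path.home() / ".ollama" / "models"
         manifest_path = store / "manifests" / "registry.ollama.ai" / "library" / "llama3" / "latest"
         manifest = json.loads(manifest_path.read_text())
         for layer in manifest["layers"]:
             if layer["mediaType"].endswith("image.model"):  # the layer holding the weights
                 digest = layer["digest"].replace(":", "-")  # "sha256:abc..." -> "sha256-abc..."
                 print(store / "blobs" / digest)             # an opaque blob that is really a GGUF file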

    I guess, business-wise, it made it easier for them to now push people toward their "cloud models" so they earn money, because that's just another registry the local client connects to. But it also means Ollama isn't just about running local models anymore, since that doesn't make them money, so all their focus now is on their cloud instead.

    At least as an LM Studio, llama.cpp and vLLM user, I can have one directory of weights shared between all of them (granted the weight format works in all of them), and if I want to use Ollama, it of course can't use that same directory and will by default store things its own way.

    • I was looking into which local inference software to use and also found this model-storage behavior onerous.

      What I want is to have a directory with models and bind-mount it read-only into inference containers. But Ollama would force me either to prime the pump by importing with Modelfiles (where do I even get these?) every time I start the container, or to store their specific version of the files.

      Trying out vLLM and llama.cpp was my next step in this; I'm glad to hear you are able to share a directory between them.


  • Ollama vs. llama.cpp is like Docker vs. FreeBSD jails, Dropbox vs. rsync, Jujutsu vs. git, etc.

  • >What was the original core principle of ollama?

    Nothing; it was always going to be a rug pull. They leeched off llama.cpp.

    • Everyone seems to be missing an important piece here. Ollama is/was a one-click solution for a non-technical person to launch a local model. It doesn't need a lot of configuration, detects an Nvidia GPU, and starts model inference with a single command. The core principle being that your grandmother should be able to launch a local AI model without needing to install 100 dependencies.
