Comment by sparacha
2 days ago
There is liteLLM, OpenRouter, Arch (although that’s an edge/service proxy for agents) and now this. We all need a new problem to solve
LiteLLM is kind of a mess, TBH. I guess it's OK if you just want a docker container to proxy to for personal projects (sketch of that pattern below), but actually using it in production isn't great.
I definitely appreciate all the work that has gone into LiteLLM, but it doesn't take much browsing through the 7000+ line `utils.py` to see where using it could become problematic (https://github.com/BerriAI/litellm/blob/main/litellm/utils.p...)
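(For anyone unfamiliar with the "docker container to proxy to" pattern, a minimal sketch: LiteLLM's proxy exposes an OpenAI-compatible API, so you point the stock OpenAI SDK at it. Port 4000 is the documented default and the model name is just an example; adjust both to your setup.)

    # Point the stock OpenAI SDK at a locally running LiteLLM proxy.
    # Port 4000 is LiteLLM's documented default; adjust to your setup.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:4000",  # the LiteLLM proxy container
        api_key="sk-anything",             # real provider keys live in the proxy config
    )

    resp = client.chat.completions.create(
        model="gpt-4o",  # the proxy maps this name to a configured provider/deployment
        messages=[{"role": "user", "content": "hello"}],
    )
    print(resp.choices[0].message.content)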
Can you double-click on that a bit? Many files in professional repos are 1000s of lines. LoC in itself is not a code smell.
> but actually using it in production isn't great.
I only use it in development. Could you elaborate on why you don't recommend using it in production?
The people behind Envoy proxy built https://github.com/katanemo/archgw - it carries the learnings of Envoy but is natively designed to process and route prompts to agents and LLMs. Would be curious about your thoughts.
And all of them exist despite ~80% of model providers offering an OpenAI-compatible endpoint.
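For contrast with the proxy pattern above, this is all the "compatible endpoint" case needs: the same OpenAI SDK pointed directly at another backend by swapping base_url. (A sketch; the URL is Ollama's documented local endpoint, and other providers publish their own.)

    # Sketch: the stock OpenAI SDK against a non-OpenAI, OpenAI-compatible backend.
    from openai import OpenAI

    client = OpenAI(
        base_url="http://localhost:11434/v1",  # Ollama's OpenAI-compatible endpoint
        api_key="ollama",                      # required by the SDK, ignored by Ollama
    )

    resp = client.chat.completions.create(
        model="llama3",  # whichever model the local server is serving
        messages=[{"role": "user", "content": "Say hi"}],
    )
    print(resp.choices[0].message.content)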
I think Mozilla of all people would understand why standardizing on one private organization's way of doing things might not be best for the overall ecosystem. Building a tool that meets LLM providers where they are instead of relying on them to homogenize on OpenAI's choices seems like a great reason for this project.
Portkey as well, which is both JS and open source: https://www.latent.space/p/gateway
Why provide that link if there isn't a single mention of Portkey there?
It's my interview with the Portkey folks, which has more thoughts on the category.
We're trying to apply model routing to academic work and PDF chat at ubik.studio -- def lmk what you think.