
Comment by punkpeye

5 months ago

Indeed we do: https://glama.ai/models/deepseek-r1

It is provided by DeepSeek and Avian.

I am also midway through enabling a third provider (Nebius).

You can see all models/providers over at https://glama.ai/models

As another commenter in this thread said, we are just a 'frontend wrapper' around other people's services. Therefore, it is not particularly difficult to add models that are already supported by other providers.

The benefit of using our wrapper is that you use a single API key and get one bill for all your AI usage; you don't need to hack together your own logic for routing requests between providers, handling failovers, tracking each provider's costs, or worrying about what happens if a provider goes down.
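For illustration, here is a minimal sketch of what the single-key workflow looks like, assuming an OpenAI-compatible gateway endpoint; the base URL and model identifier below are illustrative assumptions, not confirmed API details:

```python
# Minimal sketch: calling an OpenAI-compatible gateway with one API key.
# The base URL and model name are assumptions for illustration only.
from openai import OpenAI

client = OpenAI(
    base_url="https://glama.ai/api/gateway/openai/v1",  # assumed gateway URL
    api_key="YOUR_GATEWAY_API_KEY",                      # one key for every provider
)

# The gateway decides which upstream provider (DeepSeek, Avian, ...) serves
# the request, handles failover, and attributes the cost to a single bill.
response = client.chat.completions.create(
    model="deepseek-r1",  # illustrative model identifier
    messages=[{"role": "user", "content": "Hello!"}],
)
print(response.choices[0].message.content)
```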

The market at the moment is hugely fragmented, with many unstable providers, constantly shifting prices, etc. The benefit of a router is that you don't have to worry about any of that.

Yeah, I am aware. I use OpenRouter at the moment, but I find it lacks a good UX.

  • OpenRouter is great.

    They have a very solid infrastructure.

    Scaling infrastructure to handle billions of tokens is no joke.

    I believe they are approaching 1 trillion tokens per week.

    Glama is way smaller. We only recently crossed 10bn tokens per day.

    However, I have invested a lot more into the UX/UI of the chat itself, i.e. while OpenRouter is focused entirely on the API gateway (which is working for them), I am going for a hybrid approach.

    The market is big enough for both projects to co-exist.