
Comment by daemonologist

9 days ago

You mention that you measured cost and latency in addition to accuracy - would you be willing to share those results as well? (I understand that for these open models they would vary between providers, but it would be useful to have an approximate baseline.)

Yes, I'll add that to the writeup! You're right, I initially excluded it because it was so dependent on the providers that there's a lot of variance, especially with the Qwen models.

High-level results were:

- Qwen 32b => $0.33/1000 pages => 53s/page

- Qwen 72b => $0.71/1000 pages => 51s/page

- Llama 90b => $8.50/1000 pages => 44s/page

- Llama 11b => $0.21/1000 pages => 08s/page

- Gemma 27b => $0.25/1000 pages => 22s/page

- Mistral => $1.00/1000 pages => 03s/page
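
For a rough sense of where those per-page figures come from: they're basically (tokens per page) x (the provider's per-token rate). The sketch below just shows that arithmetic; the token counts and per-token prices in it are made-up placeholders, not the rates we actually paid.

```python
# Back-of-the-envelope conversion from per-token pricing to $/1000 pages.
# All rates and token counts here are illustrative placeholders.

AVG_INPUT_TOKENS_PER_PAGE = 1500    # tokens sent per page (image + prompt), assumed
AVG_OUTPUT_TOKENS_PER_PAGE = 700    # tokens of extracted text returned, assumed

# model: ($ per 1M input tokens, $ per 1M output tokens) -- hypothetical rates
pricing = {
    "popular-32b-many-providers": (0.10, 0.30),
    "specialized-90b-single-provider": (1.20, 3.00),
}

for model, (in_rate, out_rate) in pricing.items():
    cost_per_page = (
        AVG_INPUT_TOKENS_PER_PAGE * in_rate
        + AVG_OUTPUT_TOKENS_PER_PAGE * out_rate
    ) / 1_000_000
    print(f"{model}: ${cost_per_page * 1000:.2f} / 1000 pages")
```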

  • One of these things is not like the others. $8.50/1000?? Any chance that's a typo? Otherwise, for someone who has no experience with LLM pricing models, why is Llama 90b so expensive?

    • It's not uncommon when using brokers to see outliers like this. What happens basically is that some models are very popular and have many different providers, and are priced "close to the metal" since the routing will normally pick the cheapest option with the specified requirements (like context size). But then other models - typically more specialized ones - are only hosted by a single provider, and said provider can then price it much higher than raw compute cost.

      E.g. if you look at https://openrouter.ai/models?order=pricing-high-to-low, you'll see that there are some 7B and 8B models that are more expensive than Claude Sonnet 3.7.
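
      If you'd rather see that from the API than the web UI, something like the sketch below should work against OpenRouter's public model list. The /api/v1/models endpoint and the pricing.prompt field are my recollection of their docs, so treat the response shape as an assumption and verify it before relying on this.

```python
# List OpenRouter models sorted by prompt-token price, most expensive first.
# Assumes the /api/v1/models response has data[].pricing.prompt as a $/token
# string -- verify against the current OpenRouter docs before relying on this.
import requests

resp = requests.get("https://openrouter.ai/api/v1/models", timeout=30)
resp.raise_for_status()
models = resp.json()["data"]

def prompt_price(model: dict) -> float:
    # Some entries may use "-1" or omit pricing for dynamic/free routing; treat as 0.
    try:
        return max(float(model.get("pricing", {}).get("prompt", 0)), 0.0)
    except (TypeError, ValueError):
        return 0.0

for m in sorted(models, key=prompt_price, reverse=True)[:15]:
    print(f"{m['id']}: ${prompt_price(m) * 1_000_000:.2f} per 1M prompt tokens")
```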

    • That was the cost when we ran Llama 90b using Together AI. But it's quite hard to standardize, since it depends a lot on who is hosting the model (e.g. Together, OpenRouter, Groq, etc.).

      I think in order to run a proper cost comparison, we would need to run each model on an AWS GPU instance and compare the runtime required.
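
      Something like the arithmetic below is what I have in mind: amortize the instance's hourly price over the measured throughput. The hourly rate and seconds-per-page are placeholders, not measurements.

```python
# Amortize a self-hosted GPU instance's hourly cost over the pages it processes.
# The hourly rate and per-page latency below are placeholders, not measurements.

INSTANCE_HOURLY_USD = 5.00   # assumed on-demand price for a GPU instance
SECONDS_PER_PAGE = 8.0       # assumed end-to-end processing time per page

pages_per_hour = 3600 / SECONDS_PER_PAGE
cost_per_1000_pages = INSTANCE_HOURLY_USD / pages_per_hour * 1000
print(f"~${cost_per_1000_pages:.2f} per 1000 pages at {pages_per_hour:.0f} pages/hour")
```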