Comment by mbesto
10 days ago
I mean it literally says on the page:
"Shown are the sum of prompt and completion tokens per model, normalized using the GPT-4 tokenizer."
Also, it counts Llama usage served through cloud providers (for example, AWS Lambda).
I get that OpenRouter is imperfect, but it's a good proxy for objectively evaluating a claim that an LLM is "the weakest ever".