Comment by themanmaran
18 days ago
That was the cost when we ran Llama 90b using TogetherAI. But it's quite hard to standardize, since it depends a lot on who is hosting the model (e.g. Together, OpenRouter, Groq, etc.)
I think in order to run a proper cost comparison, we would need to run each model on an AWS GPU instance and compare the runtime required.
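The comparison described above boils down to simple arithmetic: cost per run = instance hourly rate × runtime. A minimal sketch, where the hourly rate, model names, and runtimes are all made-up placeholders (not real benchmark numbers):

```python
# Hypothetical cost comparison as described above: run each model on the
# same dedicated GPU instance, measure runtime, and convert to dollars.
# All prices and runtimes below are placeholder assumptions.

HOURLY_RATE_USD = 5.12  # assumed on-demand GPU instance price, $/hour

def cost_per_run(runtime_seconds: float, hourly_rate: float = HOURLY_RATE_USD) -> float:
    """Dollar cost of one inference run on a dedicated instance."""
    return hourly_rate * (runtime_seconds / 3600.0)

# Placeholder measured runtimes (seconds) for two hypothetical models.
runtimes = {"model_a": 42.0, "model_b": 97.5}

for model, secs in runtimes.items():
    print(f"{model}: ${cost_per_run(secs):.4f} per run")
```

This normalizes away per-provider pricing differences, since every model is billed at the same instance rate and only runtime varies.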