Comment by changbai
1 month ago
Inference cost for leading models and more complex tasks remains high. However, for a fixed model and task, inference cost has dropped drastically.
The a16z analysis at https://a16z.com/llmflation-llm-inference-cost/ shows this trend, and the OpenRouter State of AI report (https://openrouter.ai/state-of-ai) makes the same observation.