Comment by cmrdporcupine
11 hours ago
Pricing: https://api-docs.deepseek.com/quick_start/pricing
"Pro" $3.48 / 1M output tokens vs $4.40 for GLM 5.1 or $4.00 for Kimi K2.6
"Flash" is only $0.28 / 1M and seems quite competent
(EDIT: Note that if you hit the setting that opencode etc hit (deepseek-chat / deepseek-reasoner) for DeepSeek API, it appears to be "flash".)
I estimated that even with heavy usage (around 40M tokens) it would cost you around $30-70 depending on caching. That would give you roughly double the usage compared to GPT-5.5 on the $200 sub.
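For anyone who wants to plug in their own numbers, here's a rough sketch of that estimate. The $3.48/1M output price is from the pricing page above; the input prices, input/output split, and cache-hit rate are all hypothetical placeholders I made up for illustration:

```python
def monthly_cost(total_tokens_m, output_frac, cache_hit_rate,
                 in_miss_price, in_hit_price, out_price):
    """Estimated cost in USD for total_tokens_m million tokens.

    output_frac:     fraction of tokens that are output (hypothetical)
    cache_hit_rate:  fraction of input tokens served from cache (hypothetical)
    prices:          USD per 1M tokens
    """
    out_m = total_tokens_m * output_frac
    in_m = total_tokens_m - out_m
    return (in_m * cache_hit_rate * in_hit_price
            + in_m * (1 - cache_hit_rate) * in_miss_price
            + out_m * out_price)

# "Pro" output price from the pricing page; input prices are guesses.
for hit_rate in (0.0, 0.5, 0.9):
    cost = monthly_cost(40, output_frac=0.10, cache_hit_rate=hit_rate,
                        in_miss_price=1.00, in_hit_price=0.10,
                        out_price=3.48)
    print(f"cache hit {hit_rate:.0%}: ${cost:.2f}")
```

With those made-up input prices the 40M-token total lands in the same ballpark, and you can see how much the cache-hit rate moves the bill.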
This is refreshing right after GPT-5.5's $30