
Comment by vanviegen

15 hours ago

> On GitHub Copilot you pay per prompt. More powerful models can do a lot more work (consuming a lot more tokens) per prompt. Also, they tend to use more thinking tokens.

> More powerful models can do a lot more work (consuming a lot more tokens) per prompt.

That is not my experience. Every model since at least GPT-4 can fill up an entire context window. In fact, more powerful models can solve tasks in fewer steps, so their multiplier-to-API-price ratio should decrease, not increase.

For example, Claude Sonnet 4.6 has a multiplier of 9 and an API price of $15, which is 0.6 multiplier per dollar.

Claude Opus 4.7 has an API price of $25, so extrapolating from Sonnet it should have a multiplier of 25 × 0.6 = 15. Instead, its multiplier is 27.
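The extrapolation above can be spelled out in a few lines; the figures are the ones quoted in this comment, not official pricing data:

```python
# Extrapolating a Copilot premium-request multiplier from API price,
# using the numbers quoted in the comment (assumptions, not official data).
sonnet_multiplier = 9    # Claude Sonnet 4.6 multiplier (as quoted)
sonnet_api_price = 15    # Sonnet API price in $ (as quoted)
opus_api_price = 25      # Claude Opus 4.7 API price in $ (as quoted)
opus_multiplier = 27     # Opus multiplier (as quoted)

# Multiplier per dollar of API price, derived from Sonnet.
ratio = sonnet_multiplier / sonnet_api_price          # 0.6

# What Opus "should" cost if the ratio were constant.
expected_opus = opus_api_price * ratio                # 15.0

print(f"ratio: {ratio}, expected: {expected_opus}, actual: {opus_multiplier}")
```

So the actual multiplier (27) is nearly double the linear extrapolation (15), which is the gap the thinking-token explanation would have to account for.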

> Also, they tend to use more thinking tokens.

That might be it. Is there any data on this somewhere?