
Comment by iamjackg

17 hours ago

It's not unsolved, at least not the first part of your question. In fact, it's a feature offered by all the major LLM providers:

- https://platform.openai.com/docs/guides/prompt-caching

- https://platform.claude.com/docs/en/build-with-claude/prompt...

- https://ai.google.dev/gemini-api/docs/caching
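As a rough sketch of what this looks like in practice: with Anthropic's Python SDK, caching is opted into per content block with a `cache_control` marker (OpenAI's version, by contrast, kicks in automatically for sufficiently long prompt prefixes). The model name and placeholder context below are illustrative, based on my reading of their docs:

```python
import anthropic

# A big, unchanging prefix (system prompt, docs, tool definitions) that every
# request repeats: the part you want the provider to cache.
LONG_STATIC_CONTEXT = "...thousands of tokens of instructions and reference material..."

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-sonnet-4-20250514",  # illustrative model name
    max_tokens=1024,
    system=[
        {
            "type": "text",
            "text": LONG_STATIC_CONTEXT,
            # Marks the prompt up to and including this block as cacheable
            # across requests.
            "cache_control": {"type": "ephemeral"},
        }
    ],
    messages=[{"role": "user", "content": "Summarize the key constraints."}],
)

# The usage object reports cache writes vs. cache hits, so you can check that
# repeated calls aren't re-billing the full prefix.
print(response.usage.cache_creation_input_tokens,
      response.usage.cache_read_input_tokens)
```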

Ah, that's good to know, thanks.

But then why is there compounding token usage in the article's trivial solution? Is it just a matter of using the cache correctly?

Dumb question, but is prompt caching available to Claude Code … ?

  • If you're using the API, yes. If you have a subscription, you don't care, as you aren't billed per prompt (you just have a limit).