Comment by simonw
5 months ago
I got this working with my LLM tool (new plugin version: llm-anthropic 0.14) and figured out a bunch of things about the model in the process. My detailed notes are here: https://simonwillison.net/2025/Feb/25/llm-anthropic-014/
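For anyone wanting to reproduce this setup, here is a minimal sketch using the llm library's Python API (the CLI path is 'llm install llm-anthropic' followed by 'llm -m claude-3.7-sonnet "..."'). The claude-3.7-sonnet alias is assumed from the plugin release notes, so run 'llm models' to confirm it.

    # Minimal sketch: calling Claude 3.7 Sonnet through the llm library's Python API.
    # Assumes `pip install llm llm-anthropic` and a key configured with
    # `llm keys set anthropic`; the "claude-3.7-sonnet" alias is assumed from the
    # plugin release notes -- check `llm models` if it differs.
    import llm

    model = llm.get_model("claude-3.7-sonnet")
    response = model.prompt("Summarize what changed between Claude 3.5 and 3.7 Sonnet.")
    print(response.text())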
One of the most exciting new capabilities is that this model has a 128,000 token output limit, up from just 8,000 for the previous Claude 3.5 Sonnet model and way higher than any other model in the space.
It seems to be able to use that output limit effectively. Here's my longest result so far, though it did take 27 minutes to finish! https://gist.github.com/simonw/854474b050b630144beebf06ec4a2...
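For reference, requesting that long output directly against the Anthropic API looks roughly like the sketch below. The model id and the output-128k-2025-02-19 beta header are taken from Anthropic's published docs at the time of writing and should be treated as assumptions; streaming is used because a generation this long will outlive a normal request timeout.

    # Sketch: requesting a very long completion via the Anthropic Python SDK.
    # The model id and the "output-128k-2025-02-19" beta header are assumptions
    # based on Anthropic's published docs; adjust if they have changed.
    import anthropic

    client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

    with client.messages.stream(
        model="claude-3-7-sonnet-20250219",
        max_tokens=128_000,
        extra_headers={"anthropic-beta": "output-128k-2025-02-19"},
        messages=[{"role": "user", "content": "Write a very long, detailed report on ..."}],
    ) as stream:
        # Print text as it arrives; a full-length response can take many minutes.
        for chunk in stream.text_stream:
            print(chunk, end="", flush=True)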
No shade against Sonnet 3.7, but I don't think it's accurate to say "way higher than any other model in the space": o1 and o3-mini go up to 100,000 output tokens.
https://platform.openai.com/docs/models#o1
Huh, good call, thanks. I've updated my post with a correction.
Simon, do you write anywhere about how you manage to be so... active? Between your programming tools, blogging, and your job (I assume you work?), where do you find the time/energy?
The trick is not to have an employer: I'm "freelance", aka working full-time on my open source projects and burning down my personal runway from a startup acquisition. At some point I need to start making proper money again.
How much did it cost?
$1.80
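As a sanity check on that figure, here is a back-of-the-envelope estimate at Claude 3.7 Sonnet's list prices ($3 per million input tokens, $15 per million output tokens, believed current at the time of writing); the token counts in the example are placeholders, not the actual counts from the run above.

    # Back-of-the-envelope cost estimate at Claude 3.7 Sonnet's list prices.
    # The $3 / $15 per million-token figures are believed current at time of
    # writing; the example token counts below are placeholders, not real data.
    INPUT_USD_PER_MTOK = 3.00
    OUTPUT_USD_PER_MTOK = 15.00

    def estimate_cost(input_tokens: int, output_tokens: int) -> float:
        return (input_tokens * INPUT_USD_PER_MTOK + output_tokens * OUTPUT_USD_PER_MTOK) / 1_000_000

    # A small prompt producing ~115,000 output tokens lands in this ballpark:
    print(f"${estimate_cost(1_000, 115_000):.2f}")  # $1.73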
I have a very long request like this to run. Did you use a specific CLI tool? (Thank you in advance.)