Comment by dpe82
14 hours ago
It's wild that Sonnet 4.6 is roughly as capable as Opus 4.5 - at least according to Anthropic's benchmarks. It will be interesting to see if that's the case in real, practical, everyday use. The speed at which this stuff is improving is really remarkable; it feels like the breakneck pace of compute performance improvements of the 1990s.
The most exciting part isn't necessarily the ceiling rising, though that's happening, but the floor rising while costs plummet. Getting Opus-level reasoning at Sonnet prices/latency is what actually unlocks agentic workflows. We are effectively getting the same intelligence unit for half the compute every 6-9 months.
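That halving claim compounds fast. A back-of-envelope sketch (purely illustrative, assuming a clean halving cadence holds):

```python
# If the compute cost for a fixed capability level halves every
# `halving_months`, what fraction of today's cost remains after `months`?
def remaining_cost_fraction(months: float, halving_months: float) -> float:
    return 0.5 ** (months / halving_months)

# Over two years: a 6-month cadence implies a 16x cost reduction,
# a 9-month cadence implies roughly 6.3x.
fast = remaining_cost_fraction(24, 6)  # 0.0625, i.e. 1/16 of today's cost
slow = remaining_cost_fraction(24, 9)  # ~0.157 of today's cost
```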
2024: Intelligence too cheap to meter
2026: Everyone is spending $500/month on LLM subscriptions
> We are effectively getting the same intelligence unit for half the compute every 6-9 months.
Something something ... Altman's law? Amodei's law?
Needs a name.
How about More's law - because we keep getting "more" compute at a lower cost?
Moore's law lives on!
This is what excited me about Sonnet 4.6. I've been running Opus 4.6, and switched over to Sonnet 4.6 today to see if I could notice a difference. So far I can't detect much, if any, difference, but it doesn't hit my usage quota as hard.
> The speed at which this stuff is improving is really remarkable; it feels like the breakneck pace of compute performance improvements of the 1990s.
Yeah, but RAM prices are also back to 1990s levels.
Relief for you is available: https://computeradsfromthepast.substack.com/p/connectix-ram-...
You wouldn't download a RAM
2 replies →
I knew I've been keeping all my old ram sticks for a reason!
simonw hasn't shown up yet, so here's my "Generate an SVG of a pelican riding a bicycle"
https://claude.ai/public/artifacts/67c13d9a-3d63-4598-88d0-5...
We finally have AI safety solved! Look at that helmet
"Look ma, no wings!"
:D
For comparison, I think the current leader in pelican drawing is Gemini 3 Deep Think:
https://bsky.app/profile/simonwillison.net/post/3meolxx5s722...
My take (also Gemini 3 Deep Think): https://gemini.google.com/share/12e672dd39b7
Somehow it's much better now.
3 replies →
if they want to prove the model's performance the bike clearly needs aero bars
Can’t beat Gemini’s, which was basically perfect.
> Sonnet 4.6 is roughly as capable as Opus 4.5 - at least according to Anthropic's benchmarks
Yeah, it's really not. Sonnet still struggles on tasks where Opus, even 4.5, succeeds (and some examples show Opus 4.6 is actually even worse than 4.5, all while being more expensive and taking longer to finish).
The system card even says that Sonnet 4.6 is better than Opus 4.6 in some cases: Office tasks and financial analysis.
I sent Opus a nighttime satellite photo of NYC and it described "blue skies and cliffs/shore line"... Mistral handled it better. Specific use case, but still. OpenAI just refused with "you can't submit a photo by URL". I was going to try Gemini, but it kept bringing up Vertex AI. This is all via LangChain.
The fact that users preferred it to Sonnet 4.5 in "only" 70% of cases (according to their blog post) makes me highly doubt that this is representative of real-life usage. Benchmarks are just completely meaningless.
For cases where 4.5 already met the bar, I would expect a 50% preference each way. That makes it hard to make sense of that number without a lot more detail.
Good point. So much functionality gets commoditized, we have to move goalposts more or less constantly.
We see the same with Google's Flash models. It's easier to make a small capable model when you have a large model to start from.
Flash models are nowhere near Pro models in daily use. Much higher hallucination rates, and it's easy to get into a death spiral of failed tool uses and never come out.
You should always take claims that smaller models are as capable as larger models with a grain of salt.
Flash model n is generally a slightly better version of Pro model n-1; in other words, you get to use the previously premium model as a cheaper/faster version. That has value.
2 replies →
Why is it wild that an LLM is as capable as a previously released LLM?
Opus is supposed to be the expensive-but-quality one, while Sonnet is the cheaper one.
So if you don't want to pay the significant premium for Opus, it seems like you can just wait a few weeks till Sonnet catches up
Strangely enough, my first test with Sonnet 4.6 via the API for a relatively simple request was more expensive ($0.11) than my average request to Opus 4.6 (~$0.07), because it used way more tokens than what I would consider necessary for the prompt.
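That inversion is just arithmetic: a lower per-token price doesn't guarantee a lower bill if the model emits more tokens. A minimal sketch with hypothetical prices and token counts (not Anthropic's actual rates):

```python
# Shows how a cheaper-per-token model can still produce a pricier request
# overall. Prices are per million tokens and purely illustrative.
def request_cost(in_tokens, out_tokens, in_price_per_mtok, out_price_per_mtok):
    return (in_tokens * in_price_per_mtok
            + out_tokens * out_price_per_mtok) / 1_000_000

# Cheaper model, but much chattier output:
cheap_model = request_cost(2_000, 7_000, 3.0, 15.0)   # ~$0.11
# Pricier model, terse output:
pricey_model = request_cost(2_000, 2_000, 5.0, 25.0)  # ~$0.06
```

With verbose reasoning output, the output-token term dominates, which is one plausible explanation for the numbers above.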
2 replies →
Okay, thanks. Hard to keep all these names apart.
I'm even surprised people pay more money for some models than others.
Because Opus 4.5 was released like a month ago and was state of the art, and now the significantly faster and cheaper version is already comparable.
"Faster" is also a good point. I'm using different models via GitHub Copilot and find the better, more accurate models way too slow.
Opus 4.5 was November, but your point stands.
1 reply →
It means the price has dropped roughly 3x in a few months.
Because Opus 4.5 inference is/was more expensive.