
Comment by dpe82

14 hours ago

It's wild that Sonnet 4.6 is roughly as capable as Opus 4.5 - at least according to Anthropic's benchmarks. It will be interesting to see if that's the case in real, practical, everyday use. The speed at which this stuff is improving is really remarkable; it feels like the breakneck pace of compute performance improvements of the 1990s.

The most exciting part isn't necessarily the ceiling rising, though that's happening, but the floor rising while costs plummet. Getting Opus-level reasoning at Sonnet prices/latency is what actually unlocks agentic workflows. We are effectively getting the same intelligence unit for half the compute every 6-9 months.
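A rough back-of-the-envelope on what that cadence compounds to (illustrative numbers only):

    # Illustrative only: relative cost of a fixed "unit of intelligence"
    # if the compute it needs halves every 6 or 9 months.
    for months_per_halving in (6, 9):
        halvings_per_year = 12 / months_per_halving
        for years in (1, 2, 3):
            relative_cost = 0.5 ** (halvings_per_year * years)
            print(f"halving every {months_per_halving} mo, "
                  f"after {years} yr: {relative_cost:.1%} of today's cost")

At the 6-month cadence that's roughly a 16x cost drop over two years; at 9 months, closer to 6x.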

  • 2024: Intelligence too cheap to meter

    2026: Everyone is spending $500/month on LLM subscriptions

  • > We are effectively getting the same intelligence unit for half the compute every 6-9 months.

    Something something ... Altman's law? Amodei's law?

    Needs a name.

  • This is what excited me about Sonnet 4.6. I've been running Opus 4.6, and switched over to Sonnet 4.6 today to see if I could notice a difference. So far, I can't detect much, if any, difference, but it doesn't hit my usage quota as hard.

simonw hasn't shown up yet, so here's my "Generate an SVG of a pelican riding a bicycle"

https://claude.ai/public/artifacts/67c13d9a-3d63-4598-88d0-5...

> Sonnet 4.6 is roughly as capable as Opus 4.5 - at least according to Anthropic's benchmarks

Yeah, it's really not. Sonnet still struggles where Opus, even 4.5, succeeds (and some examples show Opus 4.6 is actually even worse than 4.5, all while being more expensive and taking longer to finish).

The system card even says that Sonnet 4.6 is better than Opus 4.6 in some cases: Office tasks and financial analysis.

I sent Opus a nighttime satellite photo of NYC and it described "blue skies and cliffs/shoreline"... Mistral did better. It's a specific use case, but yeah. OpenAI just said "you can't submit a photo by URL". I was going to try Gemini but it kept bringing up Vertex AI. This is all with LangChain.
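For reference, here's a minimal sketch of one way to attach an image to a Claude request through LangChain, downloading the URL and sending it as base64 so it doesn't matter whether the provider will fetch a bare image URL (assumes a recent langchain-anthropic; the model id and image URL are placeholders):

    import base64
    import httpx
    from langchain_anthropic import ChatAnthropic
    from langchain_core.messages import HumanMessage

    # Placeholder URL: download the image and base64-encode it ourselves.
    image_url = "https://example.com/nyc-night-satellite.jpg"
    image_b64 = base64.b64encode(httpx.get(image_url).content).decode("utf-8")

    llm = ChatAnthropic(model="claude-sonnet-4-5")  # placeholder model id

    message = HumanMessage(content=[
        {"type": "text", "text": "Describe this nighttime satellite view."},
        {
            "type": "image",
            "source": {"type": "base64", "media_type": "image/jpeg", "data": image_b64},
        },
    ])

    print(llm.invoke([message]).content)

Sending base64 rather than a bare URL sidesteps the "you can't submit a photo by URL" class of refusal, at the cost of fetching the image yourself.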

The fact that users preferred it to Sonnet 4.5 "only" 70% of the time (according to their blog post) makes me highly doubt that this is representative of real-life usage. Benchmarks are just completely meaningless.

  • For cases where 4.5 already met the bar, I would expect 50% preference each way. This makes it kind of hard to make sense of that number without a bunch more details (rough sketch below).

    • Good point. So much functionality gets commoditized that we have to move the goalposts more or less constantly.
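    To make the 50/50 point above concrete (purely illustrative, not Anthropic's methodology): if prompts where both models already meet the bar split 50/50 and everything else goes to the newer model, a 70% overall preference is consistent with roughly 60% of prompts being a wash.

        # Illustrative decomposition of a headline pairwise-preference rate:
        # ties split 50/50, non-ties all go to the newer model.
        def overall_preference(tie_rate: float) -> float:
            return 0.5 * tie_rate + 1.0 * (1 - tie_rate)

        print(overall_preference(0.6))  # 0.7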

We see the same with Google's Flash models. It's easier to make a small capable model when you have a large model to start from.

  • Flash models are nowhere near Pro models in daily use: much higher hallucination rates, and it's easy to get into a death spiral of failed tool calls and never come out.

    You should always take claims that smaller models are as capable as larger models with a grain of salt.

    • Flash model n is generally a slightly better Pro model (n-1); in other words, you get to use the previously premium model as a cheaper/faster version. That has value.

      2 replies →

Why is it wild that an LLM is as capable as a previously released LLM?

  • Opus is supposed to be the expensive-but-quality one, while Sonnet is the cheaper one.

    So if you don't want to pay the significant premium for Opus, it seems like you can just wait a few weeks till Sonnet catches up

    • Strangely enough, my first test with Sonnet 4.6 via the API for a relatively simple request was more expensive ($0.11) than my average request to Opus 4.6 (~$0.07), because it used way more tokens than I would consider necessary for the prompt.

      2 replies →

    • Okay, thanks. Hard to keep all these names apart.

      I'm even surprised people pay more money for some models than others.

  • Because Opus 4.5 was released like a month ago and was state of the art, and now the significantly faster and cheaper version is already comparable.