Comment by 33MHz-i486
17 hours ago
I think the subtext of the last few weeks is that Anthropic was becoming severely capacity constrained (or approaching it). They seem to have had to sign two somewhat adverse contracts with Amazon and Google in short succession. Suddenly model quality is back up again.
That’s what’s needed when you go from $9B in ARR … to $30B in ARR literally just one quarter later.
That kind of insane growth & demand is unprecedented at that scale.
https://www.anthropic.com/news/google-broadcom-partnership-c...
What is all this AI doing? People are spending tens to hundreds of billions, and no service or technology seems better or cheaper. Everything is more expensive and worse.
Where I work:
- Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).
- Things that would have been tactically built with TypeScript are now Rust apps.
- Things that would have been small Python scripts are full web apps and dashboards.
- Vibe coding (with Claude Desktop; nobody is using Replit or any of the others) is the new Excel for non-tech people.
- Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).
- 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).
- Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.
- My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.
We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year, and it's consolidated away from Cursor, ChatGPT, and Claude to be almost all Claude (plus a little Gemini, since that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).
No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back to how things were.
I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.
I presume I'm not the only one.
Exactly. Software quality has become worse, online media has become even more trash than before, and life is otherwise basically the same, lack of jobs notwithstanding. The legitimately useful things regular people can use AI for would be mostly solved by locally run quantized models. This AI "revolution" may be setting several billion on fire without even 1% of that being real value added to the world.
Coding velocity doesn't matter if the net result is software that sucks massive schlong. The real world doesn't care whether programmers can write code faster.
Yes but help is on the way. I have asked my OpenClaw agent to build a new RAM factory.
Haven't you seen all the layoffs? I've been subscribed to r/layoffs for 5+ years, and for the last couple of months it's been crazy noisy.
My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.
In other news, TQQQ is pretty high!
I'm spending a ton of tokens because it insists on manually correcting code that fails the linter, despite the instructions in the AGENTS.md to run the linter with autocorrect.
And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.
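For reference, the kind of AGENTS.md instruction the agent is ignoring might look something like this (a hypothetical sketch; the commenter doesn't show their actual file, and the `npm run lint -- --fix` command is an assumed placeholder for whatever linter the project uses):

```markdown
## Linting

- After editing any file, run `npm run lint -- --fix` so autocorrect
  fixes style issues automatically.
- Do NOT hand-edit code purely to satisfy the linter; only fix
  problems that autocorrect cannot resolve on its own.
```

In practice, agents often skip these standing instructions once the context grows, which is exactly the token-burning behavior described above.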
It's not just code generation, either - more and more people in my own org are using Claude Code for infrastructure automation, devops, etc. Obviously some amount of code in there, but an absolute ton of tokens being consumed just dealing with Kubernetes work at scale.
It's a great tool, and at 1/10th or 1/100th the cost of actual developers. In the context of YC, I guess watch out for getting re-disrupted by a smaller, faster team, but that's really been the trend for the past 40 years, so nothing new there. Well, maybe the velocity, combined with the US losing its footing at the same time.
But yeah, it's not going to make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.
Claude is great. I'm never going back. There is no way back.
I'm at least 5x faster, if not more. With tooling I might be able to get to 10-15x.
You seem to be under the impression that making services better or cheaper _for the consumer_ is the goal of any corporation. The goal is to make their own operations better and cheaper for them. They are laying off employees and adding features of questionable value as a pretext to raise prices. The playbook has not changed, it has only accelerated.
For myself, it's a massive boost when solo developing. Perhaps this is a different use case than most. It can work across multiple programming languages and frameworks that I had zero experience in. I use my existing knowledge of programming to ensure the new code is correct. It also really excels at translating from one language/framework to another: I can spend time getting something working well on a platform I know, then just ask it to convert to another platform. It gets it 90% right on the first prompt, and then it's just a matter of fine-tuning, reviewing, etc. That last 10% is where I supercharge my learning of those languages/frameworks. Learning all the new languages and frameworks would have taken me months before I was productive. Now a single prompt gets us 90% of the way there. That's incredible value.
>What is all this AI doing? People are spending 10’s to 100’s of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.
That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it easier to grow revenue by making things more expensive and worse than by making them better and cheaper.
I keep seeing this take.
And yet.. building shit is no longer the sole domain of the software engineer.
That's the sea change.
I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.
They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.
That's what all this AI is doing. The shit we could never get the time to get around to doing.
I can say in one role in my job, I'm getting a lot of use and I know my colleagues are at least trying a lot of things. One use is a first-pass review of animal care and use protocols. The Claude project was given all of the relevant policies and guidelines as well as a fairly long prompt that explains the things we look for in protocol review. It's checking some things that the software we use makes very tedious to check and raising inconsistencies between sections. Some places have a full time "protocol reader" who does this kind of first check, but we've never had that, so it's helpful.
Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".
Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.
This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.
Run-rate revenue is not ARR. For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.
Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.
> For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.
Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?
the fact!?
> suddenly model quality is back up again.
I agree about the core motivation behind these deals, however I'm skeptical as to how "suddenly" we'll see substantial improvements. Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
They're already over-subscribed and waiting for new data centers (and power plants) to come online. I suspect Anthropic will get a modest amount of new capacity right away with more added over coming quarters. These two deals don't change the total amount of AI compute available on planet Earth over the next 18 months. Anthropic parting with high-value equity has now made them the new highest bidder for an already over-bid resource. I suspect the net impact will be Amazon & Google pushing prices even higher on everyone else as they reallocate compute to their new top whale.
> Despite their size, I'd be surprised if Google or Amazon had uncommitted chunks of Anthropic-scale, top-tier AI compute sitting around waiting to be activated.
I doubt it was idle capacity. But for a chunk of equity in Anthropic I imagine they are willing to deprioritize other, possibly internal, uses. Certainly anything that's not contractually obligated could be on the chopping block.
When people move from other models (e.g. GPT, Gemini, etc.), the compute that was previously powering that inference becomes available. Of course, I'm doubtful that Google would break commitments and hand OpenAI's GPUs to Anthropic, but the underlying effect is present and probably gets sorted out somehow. It's not completely net-new compute for the world.
For the last 3 days it's taken me a minimum of 6 minutes to get a response; I don't think model capacity is better.
It seems like they shifted heavily to prioritizing enterprise users. Starting in the last day or two I started getting much faster responses on an enterprise plan.
Perhaps the adversity of the contracts cancels out against their sudden success and increase in valuation, and it ends up a wash compared to the counterfactual where they had speculated on high growth early on.
They should probably look at moving away from general-purpose hardware for their actual products and reserve GP hardware for R&D. You don't need frontier nodes to run circles around GPGPUs; an ASIC made on 28nm is more than enough to embarrass an H100 (and it's much cheaper).
AI is in such desperate need to adopt software-hardware co-development practices, it's infuriating watching the industry drag its feet about it. We are wasting so much electricity and absolutely wrecking the "free" market just because these companies are incentivized to work at an unsustainable breakneck speed in getting shit to market.
Another LLM user exhibiting the behavior of a gambling addict: "That table is cold this week." "That machine isn't hot, you won't win on it." Every month LLM users come up with reasons why they think the model is underperforming, based on some hidden signal only they perceive. Maybe you're just frying your brain by using LLMs the way you do.
> suddenly model quality is back up again
Is that not down to this? https://www.anthropic.com/engineering/april-23-postmortem
You really think that for companies of this size, signing a contract would immediately show up for you, the end user, as improved model quality?
Well to a certain extent it also blunts competition, Gemini is less of a threat if their main investor is also backing Anthropic. The issue is when the pyramid scheme collapses...
Both Amazon and Google provide the Claude models via their Kiro and Antigravity IDEs, respectively. It could also be an investment in their attempt to own the IDE space.