Comment by tiffanyh

19 hours ago

That’s what’s needed when you go from $9B in ARR … to $30B in ARR literally just one quarter later.

That kind of insane growth & demand is unprecedented at that scale.

https://www.anthropic.com/news/google-broadcom-partnership-c...

What is all this AI doing? People are spending tens to hundreds of billions of dollars, and no service or technology seems better or cheaper. Everything is more expensive and worse.

  • Where I work:

    - Development velocity is very noticeably much higher across the board. Quality is not obviously worse, but it's LLM assisted, not vibe coding (except for experiments and internal tools).

    - Things that would have been tactically built with TypeScript are now Rust apps.

    - Things that would have been small Python scripts are full web apps and dashboards.

    - Vibe coding (with Claude Desktop, nobody is using Replit or any of the others) is the new Excel for non-tech people.

    - Every time someone has any idea it's accompanied by a multi page "Clauded" memo explaining why it's a great idea and what exactly should be done (about 20% of which is useful).

    - 80% of what were web searches now go to Claude instead (for at least a significant minority of people, could easily be over 50%).

    - Nobody talks about ChatGPT any more. It's Claude or (sometimes) Gemini.

    - My main job isn't writing code but I try to keep Claude Code (both my personal and corpo accounts) and OpenCode (also almost always Claude, via Copilot) busy and churning away on something as close to 100% of the time as I can without getting in the way of my other priorities.

    We (~20 people) are probably using 2 orders of magnitude more inference than we were at the start of the year, and it's consolidated from a mix of Cursor, ChatGPT and Claude to almost all Claude (plus a little Gemini, as that's part of our Google Whateverspace plan and some people like it, mostly for non-engineering tasks).

    No idea if any of this will make things better, exactly, but I think we'd be at a severe competitive disadvantage if we dropped it all and went back to how things were.

    • I am a hobbyist playing around. I recently dropped Claude Code (which gave me a sense of awe 2 months ago), but they realized GPUs need CapEx and I want to screw around with pi.dev on a budget. Then I moved on to GH Copilot but couldn't understand their cost structure and ran out of quota half a month in; now I'm on Codex. I don't really see any difference for little stuff. I also have Antigravity through a personal Gmail account with access to Opus et al., and I don't understand if I am paying for it or not. They don't have my credit card, so that's a breather.

      It's all romantic, but a bunch of devs are getting canned left and right, and that's a slice of the population whose disposable income the economy depends on.

      It's too late to be a contrarian pundit, but what's been done besides uncovering some 0-days? The correction will be brutal, worse than the Industrial Revolution. Just look at the recent news about cuts at Meta, Salesforce, Snap, Block; the list is long.

      Have you shipped anything commercially viable because of AI or are you/we just keeping up?

      22 replies →

    • > - Development velocity is very noticeably much higher across the board

      It's an absolute tornado of PRs these days. Everyone making the most of these tools is effectively an engineering team lead.

      1 reply →

    • Is your team measuring how much of your code is being written with Claude and comparing across the team, e.g. what works best in your codebase? How are you learning from each other?

      I'm making a team version of my buildermark.dev open source project and trying to learn how teams would like to use it.

      2 replies →

    • It sounds very similar to my shop. I have QA people and Product Managers using Claude to develop better integration and reporting tools in Python. Business users are vibe coding all kinds of tools shared as Claude Artifacts, the more ambitious ones are building single page app prototypes. We ported one prototype to Next.js and hosted on Vercel in a couple of days and then handed it back to them with a Devcontainer and Claude Code so they can iterate on it themselves; and we also developed all the security infrastructure, scaffolding, agent instructions & policy required to do this for low stakes apps in a responsible way.

      It hardly seems worth it to try to iterate on design when they can just build a completely functional prototype themselves in a few hours. We're building APIs for internal users in preference to UIs, because they can build the UIs themselves and get exactly what they need for their specific use cases and then share it with whoever wants it.

      We replaced an expensive, proprietary vendor product in a couple of weeks.

      I have no delusions about the scale or complexity limits of these projects. They can help with large, complex systems but mostly at the margins: help with impact analysis, production support, test cases, code review. We generate a lot of code too but we're not vibe coding a new system of record and review standards have actually increased because refactoring is so much cheaper.

      The fact is that ordinary businesses have a LOT of unmet demand for low stakes custom software. The ones that lean into this will not develop superpowers but I do think they will out-compete slow adopters and those companies will be forced to catch up in the next few years.

      I develop presentations now by dumping a bunch of context in a folder with a template and telling Claude Cowork what I want (it does much better than the web version because of its Python and shell tools, and it can iterate, render, review, and repeat until it's excellent). The copy is quite good, I rewrite less than a third of it, and the style and graphics are so much better than I could do myself in many hours.

      No one likes reading a bunch of vibe coded slop, and cultural norms about this are still evolving; but on balance it's well worth it.

    • I am an early Gemini daily-driver type engineer; it feels like Node, Firefox, React, and Tailwind all over again. Claude Sonnet is 10x more expensive. Quick thought experiment: do you think 10 Gemini prompts are needed to match the quality of one Claude Code prompt? The harness around Gemini is an issue, but I built my own (in Rust).

    • Personally at my place, there hasn't been a noticeable velocity change since the adoption of Claude Code. I'd say it's even slightly worse, as now you have junior frontend engineers making nonsense PRs in the backend.

      Main blockers are still product, legal, management ... which Claude Code didn't help with.

    • This sounds like my office, but we're a bit more tilted toward Codex. I personally use Claude Cowork for drudge-admin work, GPT 5.5-Pro for several big research tasks daily, and the LLMs munge on each other's slop all day as I try my best to wrap my head around what has been produced and get it into our document repository -- all the while being conscious that the enormous volume of stuff I'm producing is a bit overwhelming for everyone.

      We are definitely reaching the point where you need an LLM to deal with the onslaught of LLM-generated content, even if the humans are being judicious about editing everything. We're all just cranking on an inhumanly massive amount of output and it's frankly scary.

      1 reply →

  • I'm burning an insane number of tokens 8-12 hours a day for the dramatic improvement of some internal tooling at a big tech company. Using it heavily for an unannounced future project as well.

    I presume I'm not the only one.

    • We suddenly have a proliferation of new internal tools and resources, nearly all of which are barely functional and largely useless, with no discernible impact on the overall business trajectory, but they sure do seem to help come promo time.

      Barely an hour goes by without a new 4-page document that everyone is apparently meant to read, digest and respond to, despite its 'author' having done none of those steps. It's starting to feel actively adversarial.

      64 replies →

    • AI is truly perfect for internal tooling. Security is less of a concern or none at all, bugs are more acceptable, and performance/scalability is rarely an issue. It's the quickest way to get things done, and to speed up production development, MVP development etc.

      15 replies →

    • I am, oddly, able to get really quite a lot of mileage out of the $20/mo OpenAI plan, and I have never hit a usage limit. I have gotten warnings that I was close a couple of times.

      I wonder what I’m doing differently.

      I did spend quite a bit of time, mostly manually, improving development processes so that the agent could effectively check its work. This made the difference between the agent mostly not working and mostly working. Maybe if I had instead spent gobs of money it would have worked without the tooling improvements?

      2 replies →

    • I guess that's one way to tout a technology as revolutionary without actually needing to provide any proof of it. Just say you're using it for "internal tooling" and "unannounced projects", that way nobody can look at them and notice they're indistinguishable from the slop that clogs up Show HN nowadays.

      It's better than the "here's my code, it's a giant pile of spaghetti, but only luddites care about code quality and maintainability anyway" method, at least.

      1 reply →

  • Exactly. Software quality has become worse, online media has become even more trash than before, and life is otherwise basically the same, lack of jobs notwithstanding. The legitimately useful things regular people can use AI for would be mostly solved by locally run quantized models. This AI "revolution" may be setting several billion on fire without even 1% of that being real value added to the world.

    Coding velocity doesn't matter if the net result is software that sucks massive schlong. The real world doesn't care if programmers can write code faster.

  • Haven't you seen all the layoffs? I've been subscribed to r/layoffs for 5+ years, and since a couple of months ago, it's been crazy noisy.

    My hypothesis is that companies don't want to offer cheaper or better services. They only want to cut costs and keep the revenue for investors.

    In other news, TQQQ is pretty high!

    • Subscribers will not enable these companies to make their money back. The only way is for them to eat the economy itself.

    • I'm wondering whether the layoffs are partly targeting people who haven't adapted to using AI tools, particularly those who are openly dismissive of AI-assisted work.

      5 replies →

  • I'm spending a ton of tokens because it insists on manually correcting code that fails the linter, despite the instructions in the AGENTS.md to run the linter with autocorrect.

    And also because the Plan agent generates a huge plan, asks me a couple yes/no questions with an obvious answer, and then regenerates the entire plan again. Then the Build agent gets confused anyway and does something else, and I have to round-trip about 5 times with that full context each time.

  • It's not just code generation, either - more and more people in my own org are using Claude Code for infrastructure automation, devops, etc. Obviously some amount of code in there, but an absolute ton of tokens being consumed just dealing with Kubernetes work at scale.

  • Yes but help is on the way. I have asked my OpenClaw agent to build a new RAM factory.

  • This is my takeaway too. I see some interesting toys here and there, but not much of substance. Meanwhile all the GitHub issues I follow for open source projects have slowed to a halt, and the products I use have no significant updates. Even AI products are slow to improve their interfaces.

  • It's a great tool, and at 1/10th or 1/100th the cost of actual developers. In the context of YC, I guess watch out for getting re-disrupted by a smaller, faster team. But that's really been the trend of the past 40 years, so nothing is new. Well, maybe the velocity, combined with the US losing its footing at the same time.

    But yeah, it's not gonna make Facebook 20% better tomorrow; it's just that you need 5 people instead of 40 to build the next Facebook.

  • Claude is great. I'm never going back. There is no way back.

    I'm at least 5x faster, if not more. With tooling I might be able to get to 10-15x.

  • You seem to be under the impression that making services better or cheaper _for the consumer_ is the goal of any corporation. The goal is to make their own operations better and cheaper for them. They are laying off employees and adding features of questionable value as a pretext to raise prices. The playbook has not changed, it has only accelerated.

  • For myself, it's a massive boost when solo developing. Perhaps this is a different use case than most. It can work across multiple programming languages and frameworks that I had zero experience in. I use my existing knowledge of programming to ensure the new code written is correct. It also really excels at translating from one language/framework to another: I can spend time getting it working well in a platform I know, then just ask it to convert to another platform. It gets it 90% right on the first prompt, then it's just a matter of fine-tuning, reviewing etc. That last 10% is where I supercharge my learning of those languages/frameworks. To learn all the new languages and frameworks would have taken me months before I would be productive. Now, with a single prompt, we get 90% of the way there. That is incredible value for us.

  • >What is all this AI doing? People are spending 10’s to 100’s of billions and no service or technology seems better or cheaper. Everything is more expensive and worse.

    That "more expensive" is someone's revenue. Maybe AI is the kind of technology that makes it possible to grow revenue by making things more expensive and worse rather than by making them better and cheaper.

  • I keep seeing this take.

    And yet.. building shit is no longer the sole domain of the software engineer.

    That's the sea change.

    I've literally had finance and GTM stand things up for themselves in the last few weeks. A few tweaks (obviously around security and access), and they are good to go.

    They've gone from wrangling spreadsheets to smooth automated workflows that allow them to work at a higher level in a matter of months.

    That's what all this AI is doing. The shit we could never get the time to get around to doing.

    • So... more 'busy work'.

      The only thing that matters is the impact on the financials. The shareholders (the people who employ you) don't care about any of this if it does not enhance value.

    • Are they doing PRs? Putting their code in git? Is AI deploying it or do they get help with that?

    • Mind sharing what industry you’re seeing this in? I’ve never talked to finance or GTM as an engineer. I’m not sure GTM exists in my industry.

  • I can say in one role in my job, I'm getting a lot of use and I know my colleagues are at least trying a lot of things. One use is a first-pass review of animal care and use protocols. The Claude project was given all of the relevant policies and guidelines as well as a fairly long prompt that explains the things we look for in protocol review. It's checking some things that the software we use makes very tedious to check and raising inconsistencies between sections. Some places have a full time "protocol reader" who does this kind of first check, but we've never had that, so it's helpful.

    Another project I'm seeing in the same realm is taking an approved protocol and some study results and checking that the records of what was done match what they said they could do in the approved protocol. It can also make sure that surgical records have all the things they should have. This can help meet one of the requirements from the national accreditation organization to do "post approval monitoring".

    Another way I've used it is to have it collate and compare a particular kind of policy across many institutions who transparently put their policies online. Seeing the commonality between the policies and where some excel helped me rewrite our policy.

    This is work that just wasn't happening before or, more accurately, it was being spread over lots of people, and any improvement in efficiency or consistency is hard to measure.

Run-rate revenue is not ARR. For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.

Given the fact that both Altman and Amodei are pathological liars, there's absolutely no reason to believe that Anthropic has $30B ARR.

  • > For all we know they could have a revenue of $100 and claim a run-rate revenue of $30B.

    Can you explain how that’d work? What would the $30B figure be based on if they only have $100 in revenue?

    • They're pointing out that run-rate revenue is based on essentially sampling revenue over some limited time interval, then extrapolating from there assuming revenue always occurs at the same rate (or greater) over all similar intervals in the future. More specifically, they're pointing out that estimates of ARR derived from this kind of sampling are fundamentally prone to error and can be arbitrarily inflated based on how the time interval is sampled.

      1 reply →

    • As far as I understand, run rate revenue is just a fancy way of saying "last month we had X in sales, and if that continues for a year we will have an ARR of $30B." Meaning it's not $30B yet, but the sales numbers indicate they'd get there by continuing to sell at the current pace. But to have revenue of $100 and get $30B in ARR, I guess the period looked at needs to be seconds....

      (Run Rate = Revenue in Period / Days in Period × 365)

      2 replies →

    • There are about 30 million seconds in a year. If they made $100 over the last hundred milliseconds, then that’s $30B annualized.

      (That said, their numbers are much realer than that.)

    • If you make a hundred dollars in 0.1 seconds, you could say your annualized revenue is $100 / 0.1 × 60 × 60 × 24 × 365 ≈ $31.5 billion.

      That said, most people would use a monthly or quarterly period to estimate ARR. I'm not sure what Anthropic used. Probably monthly.
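      The extrapolation the comments above describe is just the run-rate formula applied over an arbitrarily short window. A minimal sketch (hypothetical numbers, not Anthropic's actual figures):

```python
SECONDS_PER_YEAR = 60 * 60 * 24 * 365  # 31,536,000

def annualized_run_rate(revenue: float, window_seconds: float) -> float:
    """Extrapolate revenue observed over a window to a full year."""
    return revenue / window_seconds * SECONDS_PER_YEAR

# $100 earned in a 0.1-second window annualizes to ~$31.5B ...
print(annualized_run_rate(100, 0.1))  # 31536000000.0

# ... while the same $100 measured over a 30-day month annualizes to ~$1,217.
print(round(annualized_run_rate(100, 60 * 60 * 24 * 30)))  # 1217
```

      The shorter the sampling window, the more a single spike inflates the annualized figure, which is why the choice of period (monthly, quarterly) matters so much when reading these claims.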

  • the fact!?

    • I don't follow Anthropic closely enough to know what claims its CEO has made, but it is factual that Altman is a pathological liar. You can observe this for yourself by reading and listening to the things he says and then comparing them to reality. We have years of evidence to look back on. The chasm between Altman's reality and everyone else's is so large and so well-known that it was one of the chief factors cited by the board when he was fired.

      (I would then argue that he was re-hired specifically because others involved with OpenAI understood that it is literally his job to lie and that OpenAI would not be where it is today as a corporate behemoth rather than a research non-profit without a world-class liar marketing it, but that is merely conjecture.)

    • I mean... kinda everything about Mythos, for example? Anthropic has a good product, but they also pretty consistently say some stupid ass shit if you're being generous, and blatant lies if you aren't.

      1 reply →