
Comment by AntonyGarand

11 days ago

I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.

Especially given the different contexts of these two products: Claude Code is a gem of Anthropic, experiencing extreme growth, where any change can result in billing issues.

Bun is a JS runtime, and regardless of its growth, can focus on being the best runtime possible: It doesn't impact billing nor the bottom line of Anthropic, so they don't have to rush out patches due to abuse unlike CC.

It's unclear how it will pan out over the next years, still very early on the acquisition to see if anything will change, but I'm not concerned just yet.

It's interesting how quickly people buy the "abuse" line of thinking. We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is the pay per use token pricing.

These labs play the game of trying to kill competition in the harness game (because third party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other over how long they can burn money that way before they have to give up.

At some point they have to price their product fairly, and their only hope is to have killed all competition by then, which is of course a game they seem to be losing. Useful models are getting smaller and cheaper to run every year, and we have passed the threshold at which third party harnesses will keep being developed even without the userbase of subscription users.

Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.

  • It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."

    Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.

    It's not even obvious that the labs are losing a lot of money on even a minority of users. The usage caps are fairly aggressive for Anthropic, and a cursory analysis of the likely cost of serving tokens suggests these are high-margin products at the API level, unlikely to be unprofitable within the usage constraints given to subscribers.

    I do think subscription models make commercial sense because users want predictable costs, and it's a club good: the marginal token cost to the user is zero, which helps consolidate their customers' purchasing volume with one provider. But that's a different claim than them serving it unprofitably to kill competition.

    Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.

    • I work in the video AI world.

      We gave up on subscriptions long ago. They're rinky dink and get you a paltry amount of utilization before they run out.

      The per day per seat costs can exceed $1000. This is already normal for studios, and it's already producing positive ROI.

      There's simply no way to price video any other way than by usage. I suspect the same will come for everything.


  • > Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

    I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.

    • only if said galactic superintelligence takes immediate steps to kill all its potential competitors, or hoover up all the world's resources, or some other aggressively zero sum thing. otherwise I don't see what difference it makes down the line if you have the second superintelligence rather than the first.

      and that's under the assumption that you can create a superintelligence that will continue to slavishly serve your agenda rather than establishing and following its own goals.


    • I don't think this race-to-superintelligence idea should be taken too seriously. It is great for headlines and gets people's imaginations going. It is mostly a marketing gimmick.

      I look at superintelligence this way: software engineering used to be considered among the most mentally demanding jobs one can have. And in this field, more and more people are giving up large parts of their job and becoming approximately product managers, letting the machine do the engineering part. So we are about there. Who cares that there are some puzzles in some "synthetic" benchmark in which humans outsmart AIs?


    • One thing I don’t understand about this viewpoint (which I understand isn’t your own): why does one benefit so tremendously from getting there a month before competitors? I’m sure having a month of superintelligence with no competition would be lucrative, but do they think achieving superintelligence first will impede competitors from also achieving it a month later?


  • > We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

    I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits, so they could be profitable. Inference is very cheap, and unless you're using OpenClaw, no one is actually maxing out the usage window at all times. I'm sure that in aggregate the subs are not money furnaces.

    • Then explain why they started banning all third party harnesses, including those that work through Claude Code, if it still makes them money. They are cutting off profit for no good reason?

      I think there were reasons to doubt that heavy subscription users are unprofitable before they did that. OpenClaw was just the tip of the iceberg.

      Why don't they make token pricing dynamic, if that were the case? It would then allow heavy users to get even more for their money than the current subscription model, where they can't adjust to current infra availability.

      It may be that "in aggregate" sub users are not (yet) a losing business. But in all fairness, the more useful AI gets, the more it will be used; and the more it is used, the harder it becomes to keep subs cheaper than token pricing. The only counterweight is new light users, but those will also become heavy users over time as it gets more useful to them. And at some point it will be hard to onboard light users in the first place, because the laggards will require even more intelligence and value to win them over.


  • > We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.

    "profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.

    > At some point they have to price their product fairly

    they are competing in a market. if most of their costs were inference then this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact model development costs eclipse the cost of inference, and that is something non-frontier labs get much cheaper by distilling from the frontier companies.

    > They will have to compete on merit alone, and that is much less profitable.

    that's not really true. google won search on merit alone, and were massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are making a bet that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high end businesses use. that's a very risky bet.

  • > Useful models are getting smaller and cheaper to run every year and it has hit a threshold at which we will see continued development of third party harnesses even without the userbase of subscription users.

    As of May 2026, how much money do I need to spend to buy hardware to have a local model that is 80% as good as SOTA services for assisting me in writing code?

    As for that 80%, how many minutes per LOC will I be waiting, and how many attempts per query will I be wasting while I wait for it to come up with something sensible?

    • > As of May 2026, how much money do I need to spend to buy hardware to have a local model that is 80% as good as SOTA services for assisting me in writing code?

      https://llm-stats.com/benchmarks/swe-bench-verified

      SOTA (public proprietary models) would be Opus 4.7 at 0.876

      80% of that would be around 0.7.

      These models qualify, and are upwards of 90% as good in benchmarks:

        DeepSeek-V4-Pro-Max - 1.6T (HuggingFace shows 862B, huh) - 0.806
        Kimi K2.6 - 1.1T - 0.802
        MiniMax M2.5 - 229B - 0.802
        DeepSeek-V4-Flash-Max - 284B (HuggingFace shows 158B as well) - 0.790
      

      These are 80-90% as good, which is also where you see the smaller ones:

        GLM-5 - 754B - 0.778
        Qwen3.6-27B - 27B - 0.772
        Kimi K2.5 - 1.1T - 0.768
        Qwen3.5-397B-A17B - 397B - 0.764
        Step-3.5-Flash - 199B - 0.744
        GLM-4.7 - 358B - 0.738
        MiMo-V2-Flash - 310B - 0.734
        Qwen3.6-35B-A3B - 35B - 0.734
        DeepSeek-V3.2 - 685B - 0.731
        DeepSeek-V3.2-Speciale - 685B - 0.731
        DeepSeek-V3.2 (Thinking) - 685B - 0.731
        Qwen3.5-27B - 27B - 0.724
        Qwen3.5-122B-A10B - 125B - 0.720
        Kimi K2-Thinking-0905 - 1T - 0.713
        LongCat-Flash-Thinking-2601 - 562B - 0.700
      

      Out of those, the most modest one you could get is Qwen3.6-35B-A3B because the MoE nature makes it faster across more varied hardware.

      I currently run the Unsloth 8bit quants on-prem (on a bunch of Nvidia L4 GPUs, since low TDP, long story), some people swear by more quantized versions but with the small models the impact is felt more: https://huggingface.co/unsloth/Qwen3.6-35B-A3B-GGUF

      So essentially you need up to 39 GB for the model itself and then some for the KV cache and whatever context size you want. Ideally I'd aim for 64 GB of memory for that, though if really pressed for resources, could get a heavily quantized version within 32 GB (but very little memory for context and kinda shit).
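      As a sanity check on those memory numbers, here is a back-of-the-envelope sketch. The layer/head/context figures below are illustrative assumptions, not the actual specs of Qwen3.6-35B-A3B:

```python
# Rough VRAM budget for a local model: quantized weights + KV cache.
# Layer/head/context numbers are illustrative assumptions, not real specs.

def model_gib(params_b: float, bits_per_weight: float) -> float:
    """Approximate GiB needed to hold the weights at a given quantization."""
    return params_b * 1e9 * bits_per_weight / 8 / 1024**3

def kv_cache_gib(layers: int, kv_heads: int, head_dim: int,
                 context_len: int, bytes_per_elem: int = 2) -> float:
    """Approximate GiB for the KV cache: one K and one V tensor per layer."""
    return (2 * layers * kv_heads * head_dim * context_len
            * bytes_per_elem) / 1024**3

weights = model_gib(35, 8)  # 35B params at 8-bit quant -> ~32.6 GiB
kv = kv_cache_gib(layers=48, kv_heads=8, head_dim=128,
                  context_len=65536)  # fp16 cache, 64k context -> 12.0 GiB
print(f"weights ~{weights:.1f} GiB, KV cache ~{kv:.1f} GiB, "
      f"total ~{weights + kv:.1f} GiB")
```

      That lands in the mid-40s of GiB, consistent with the ~39 GB on-disk figure (GGUF files run a bit larger than pure 8-bit since some tensors stay at higher precision) plus room for context, which is why 64 GB is the comfortable target.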

      Personally, I think that you need about 45-60 tokens/second for decent usability - even comparatively modest hardware (including those L4) can run the model, though on the lower end options you will not be running parallel sub-agents etc.

      Some random results for when you don't want a traditional multi-GPU setup:

        Mac Mini - about 1999 USD, gets you somewhere upwards of 30 tokens/second (depends on quantization and how you run it)
        Framework Desktop - about 2500 USD, gets you somewhere upwards of 25 tokens/second https://community.frame.work/t/framework-desktop-for-local-ai/80880/5
        DGX Spark - about 3500 USD, gets you somewhere upwards of 50 tokens/second https://forums.developer.nvidia.com/t/qwen-qwen3-6-35b-a3b-and-fp8-has-landed/366822/27
      

      Some random results from pulling up random shops and approx. benchmarks, for dual GPU setups (not necessarily NVLink etc.):

        2x Intel Arc Pro B70 - about 1900 USD, gets you around 36 tokens/second, borderline usable, I blame their software stack
        2x Radeon AI PRO R9700 - about 3000 USD, gets you somewhere upwards of 60 tokens/second, usable
        2x Radeon PRO W7800 - about 5400 USD, same as above
        2x NVIDIA RTX 5090 - about 7600 USD, same as above
        2x NVIDIA RTX 5000 Ada - about 9200 USD, same as above
      

      Of course, for those models, some of those cards are way overkill, but you definitely can get something for running local models without too many compromises involved. That said, you will definitely get a worse experience than SOTA cloud models at that 80% level and will often have to rework stuff quite a bit, as my own experience with the Qwen model shows - okay for simple tasks, breaks down on complex stuff. For that, you'd want at least some of the 90% category models and would need to consider how much memory you can realistically get.
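      To compare those dual-GPU setups on price per throughput, a quick sketch using the ballpark prices and speeds quoted above (rough estimates, not measured benchmarks):

```python
# Dollars per (token/second), using the approximate figures quoted above.
setups = [
    ("2x Intel Arc Pro B70", 1900, 36),
    ("2x Radeon AI PRO R9700", 3000, 60),
    ("2x Radeon PRO W7800", 5400, 60),
    ("2x NVIDIA RTX 5090", 7600, 60),
    ("2x NVIDIA RTX 5000 Ada", 9200, 60),
]
# Sort cheapest-per-token/s first and print a small table.
for name, usd, tps in sorted(setups, key=lambda s: s[1] / s[2]):
    print(f"{name:24s} {usd / tps:7.1f} USD per token/s")
```

      By this crude metric the R9700 pair comes out cheapest, though it ignores memory headroom, software-stack maturity, and how many parallel sub-agents each setup can sustain.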

      At least it's not hopeless!

  • >Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.

    Honestly, I don't think it's that cut and dry. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.

    And honestly, if you look at their frankly insane revenue growth since Opus 4.5 released, they were right.

    >The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.

    I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw then tried to remove Claude Code from the $20/mo plan. They have always had low market share for the consumer chatbot market and don't seem to care about catching up to OpenAI there.

  • What about the data they are accumulating, for non-training purposes? That data isn't of negligible value; the "subscription cost" is really a "harvesting data" opportunity. Don't be naive enough to think our data is not incredibly valuable.

  • > These labs play the game of trying to kill competition in the harness game

    Anthropic and Google are arguably playing that game. OpenAI's Codex CLI is open source and entirely optional for use of the GPT Codex models.

    • OpenAI just has more runway and has convinced its investors that it is as much about hardware (stargate) as it is about anything else. So they think they can/have to afford keeping the software side more open to not make themselves look stupid. Google is more of a down to earth company with other business to lose and isn't bought into it as much.

  • If you were right Anthropic's ARR would be going down but it's not. They just surpassed $30B up from $14B two months ago.

> Before the acquisition, Bun had to figure out how to monetize at some point.

I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.

  • Are there any situations you would compare this to historically?

    To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.

    It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.

    Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.

    • It's a shame that VCs have corrupted a $200MM/year business into being perceived as a failure. Who cares if the VCs didn't get a large return, or if the outsized impact of the software didn't quite fully capture the value created? $200MM/yr without aggressive R&D or operational costs could be an incredibly healthy business.

      Maybe we should stop trying to build so many billion dollar/year businesses and work on more sustainable models.


    • The audio and 3D card pioneers in the PC world.

      The ones that were first to market went all bankrupt, or were acquired by others that came later into the scene.


  • > I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point".

    Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.

    • I think the parent commenter meant that what's insane is that a JS runtime is not treated as a utility which should never be monetized. It's as if the GCC developers hadn't figured out how to monetize, but were willing to at some point.

  • I partially agree with you, but I also think that it's good that people can make something they want, that seems to have no monetization path, and have some hope of being bailed out.

    It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.

    • I'm with you... I think it's helped Node.js a lot to have Bun and Deno implementing new features that push Node forward. I think it's been a bit of a miss not integrating npm into Node along the way... mostly in that npm is a separate org from Node, which is its own issue... I kind of like JSR a lot myself, so I hope it continues to pick up some traction.

  • It's a bit insane, but the cost of switching to regular NodeJS is low (for all but most bun-specific projects).

    All valid points though, I'm pessimistic about Anthropic still actively diverting resources to these side quests when tough times hit (which might be in a week for all we know).

  • I know people say it is unprofitable, but I wonder if there is a way to verify that it truly is. I will not give any details, but I worked for a giant company which was barely making money YoY, yet somehow the bonuses for the heads got bigger and bigger, keyed to a proxy metric related to profit.

    There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.

    • "Profitable" is the wrong metric, really, it's whether it is sustainable - can development continue indefinitely given the current financial situation?


You might be underestimating the effect that corporate policies and culture have on the product.

Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.

No human testing of software or QA; just unit + integration tests plus giving the AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.

If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.

Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg

    "Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer.  Because AI in a good codebase actually does really, really well."

Once bad code starts to compound on itself, it's going to be really hard to break out of it.

  • I don't disagree with the notion, but what is up with the dev community championing influencers who hold no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?

    • I have followed a simple rule in my career, if you offer training/courses I don’t listen to anything you say.

      I consider this a hard rule, like ad-blocking: it is exactly that, blocking ads, since each talk is an ad (or an ad in disguise).

    • I'm not the biggest fan of the influencer community, but I think that it mostly boils down to many learners preferring video content over written material. I've gotten used to reading documentation now, but I remember it being extremely intimidating when I was first learning. It was nice to have someone break stuff down into simple terms for me.

      To be fair to Matt Pocock, I know he worked for Vercel and Stately for a while before doing content full time. I can't say anything about his AI content, but I did some of his free lessons when I was learning TypeScript. They included interactive editor lessons and such, so it wasn't just empty videos and fluff like some of the influencers.


> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.

Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. Sounds obvious, but I guess that has to be pointed out. Outcomes will follow incentives in the long run.

  • Bun is not a "product" at Anthropic though, it's a tool for its developers to build products. IMO as long as it remains that way, the incentives for its developers will remain fairly aligned with the incentives of people who use it outside the company.

    A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.

    Sure, since Facebook's server is written in Hack, we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, Tanstack Start, etc). But Facebook building React also means it will always be someone's job to make sure the framework works well in codebases with millions of modules.

    This is all independent of any shitty practices with their other software. And this has been for decades at this point.

    • > Bun is not a "product" at Anthropic though, it's a tool for its developers to build products.

      Doesn't that just make it even worse? If Anthropic can't even afford to spend the engineering effort on making sure their core product functions properly, why should we assume that they'll be investing serious resource into what is essentially some upper manager's loss-leader pet project?

      If Anthropic is financially hurting, why shouldn't they put Bun on the bare minimum of life support?


  • > Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code.

    I’m unclear about this. What’s the business case? I use Gemini CLI a lot, which runs on Node, and I can’t see anything that would be improved by using a different JS runtime. It’s not something you notice as a user. Node is mature, stable, and perfectly fit for the purpose.

    If Anthropic were public and if these decisions were comprehensible to the average investor, an acquisition like this ought to cause the stock to plummet. Luckily for the people involved, there are no constraints like that in the current market.

This is a good take, and I hope you're right.

One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as a purely OSS project.

It remains to be seen how things will actually unfold.

I disagree with the overall premise: Before the acquisition, GitHub had to figure out how to monetize at some point. Now, even though their parent company does some shitty practices with their other software (Embrace, Extend, Extinguish, MS Windows), it's a stretch to assume this will also translate into making GitHub worse: Being worried makes sense but I remain optimistic about GitHub.

> it's a stretch to assume this will also translate into making Bun worse

For me it's far from a stretch, in fact it matches closely a pattern that I've seen repeated many times over at this point.

> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.

Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?

  • I'm not the parent poster, but I still stick to looking at the people...

    If you start seeing the people that created bun leaving Anthropic, then I'd probably start to worry. And I haven't seen any sign of that yet.

Funding to pay the core team (via revenue/grants/VC) requires a lot of leadership attention for any independent company that is developing an open-source project as its main activity. Yet more leadership attention goes into other administration (Taxes/hiring/legal/policies/etc.).

I don't have any direct context, though I have run an open-source business (Zulip) for the last decade wearing both the CEO and technical lead hats.

But my guess is that the Bun leadership team might well be spending twice as much of their time working on the technology as they reasonably could have as an independent venture-funded company, just because they don't have to do all that other stuff anymore. (There's of course probably a significant bias in that focus towards whatever Anthropic needs from Bun, only some of which other users may care about.)

So I agree. Personally, I would not be concerned unless you see the tell-tale signs of the team being reassigned to other priorities at the buyer, which tends to be obvious, because, say, the GitHub project activity falls off a cliff.

Just 3 days after the blog post, a branch with a potential vibe-coded rewrite from Zig to Rust surfaced in the Bun repo

  • This looks like a vanity project: the value gained by switching from Zig to Rust is likely negative, without even the usual caveat devs use of "learned a new skill".

> I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.

Incidentally, Anthropic needs to figure out how to monetize at some point too.

What came to my mind is Windows.

Regardless of what else is going on, the kernel is a separate team, and has very strong incentives to remain competent and sane.

Nope. The need to monetize, and the fact that an acqui-hire costs money, is exactly why relying on a specific runtime should give people concern.