I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.
Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Especially given how different the two contexts are: Claude Code is Anthropic's crown jewel, experiencing extreme growth, where any change can result in billing issues.
Bun is a JS runtime, and regardless of its growth, it can focus on being the best runtime possible: it doesn't affect billing or Anthropic's bottom line, so unlike CC they don't have to rush out patches in response to abuse.
It's unclear how things will pan out over the next few years, and it's still too early after the acquisition to see if anything will change, but I'm not concerned just yet.
It's interesting how quickly people buy the "abuse" line of thinking. We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is the pay per use token pricing.
These labs play the game of trying to kill competition in the harness game (because third party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other how long they can burn money that way before they have to give up.
At some point they have to price their product fairly, and the only hope they have is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and it has hit a threshold at which we will see continued development of third-party harnesses even without the userbase of subscription users.
Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.
It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."
Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.
It's not even obvious that the labs are losing much money on even a minority of users; Anthropic's usage caps are fairly aggressive, and a cursory analysis of the likely cost of serving tokens suggests these are high-margin products at the API level, unlikely to be unprofitable within the usage constraints given to subscribers.
I do think subscription models make commercial sense because users want predictable costs, and it's a club good in which marginal token cost is zero which helps consolidate their customers' purchasing volume to one provider. But that's a different claim than them serving it unprofitably to kill competition.
Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.
> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.
> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.
I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits, so they could well be profitable. Inference is very cheap, and unless you're using OpenClaw no one is actually maxing out the usage window at all times. I'm sure that in aggregate the subs are not money furnaces.
> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.
"profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.
> At some point they have to price their product fairly
they are competing in a market. if most of their costs were inference then this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact, model development costs eclipse the cost of inference, and that's something non-frontier labs get much cheaper by distilling from the frontier companies.
> They will have to compete on merit alone, and that is much less profitable.
that's not really true. google won search on merit alone, and were massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are making a bet that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high end businesses use. that's a very risky bet.
> Useful models are getting smaller and cheaper to run every year and it has hit a threshold at which we will see continued development of third party harnesses even without the userbase of subscription users.
As of May 2026, how much money do I need to spend to buy hardware to have a local model that is 80% as good as SOTA services for assisting me in writing code?
As for that 80%, how many minutes per LOC will I be waiting, and how many attempts per query will I be wasting while I wait for it to come up with something sensible?
>Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
Honestly, I don't think it's that cut and dry. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.
And honestly, if you look at their frankly insane revenue growth since Opus 4.5 released, they were right.
>The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.
I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw then tried to remove Claude Code from the $20/mo plan. They have always had low market share for the consumer chatbot market and don't seem to care about catching up to OpenAI there.
What about the data they are accumulating, for non-training purposes? That data isn't of negligible value; the "subscription cost" is really a data-harvesting opportunity. Don't be naive enough to think our data isn't incredibly valuable.
> Before the acquisition, Bun had to figure out how to monetize at some point.
I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.
Are there any situations you would compare this to historically?
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
> I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point".
Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.
I partially agree with you, but I also think that it's good that people can make something they want, that seems to have no monetization path, and have some hope of being bailed out.
It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.
It's a bit insane, but the cost of switching back to regular Node.js is low (for all but the most Bun-specific projects).
All valid points though, I'm pessimistic about Anthropic still actively diverting resources to these side quests when tough times hit (which might be in a week for all we know).
I know people say it is unprofitable, but I wonder if there is a way to verify it truly is.
I won't give details, but I worked for a giant company that was barely making money YoY, yet somehow the bonuses for the heads kept getting bigger, justified by a proxy metric related to profit.
There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.
You might be underestimating the effect that corporate policies and culture have on the product.
Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.
No human testing of software or QA; unit + integration tests, plus giving AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.
If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.
"Code is not cheap. Bad code is the most expensive it's ever been. Because if you have a codebase that's hard to change, you're not able to take advantage of all of the bounty that AI can offer. Because AI in a good codebase actually does really, really well."
Once bad code starts to compound on itself, it's going to be really hard to break out of it.
I don't disagree with the notion, but what is up with the dev community championing influencers who hold no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. Sounds obvious, but I guess it has to be pointed out. Outcomes will follow incentives in the long run.
Bun is not a "product" at Anthropic though, it's a tool for its developers to build products. IMO as long as it remains that way, the incentives for its developers will remain fairly aligned with the incentives of people who use it outside the company.
A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.
Sure, since Facebook's server is written in Hack, we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, TanStack Start, etc). But Facebook building React also means it will always be someone's job to make sure the framework works well in codebases with millions of modules.
This is all independent of any shitty practices with their other software. And this has been for decades at this point.
> Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code.
I’m unclear about this. What’s the business case? I use Gemini CLI a lot, which runs on Node, and I can’t see anything that would be improved by using a different JS runtime. It’s not something you notice as a user. Node is mature, stable, and perfectly fit for the purpose.
If Anthropic were public and if these decisions were comprehensible to the average investor, an acquisition like this ought to cause the stock to plummet. Luckily for the people involved, there are no constraints like that in the current market.
One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily, and they questioned its future as purely OSS.
It remains to be seen how things will actually unfold.
I disagree with the overall premise: Before the acquisition, GitHub had to figure out how to monetize at some point.
Now, even though their parent company does some shitty practices with their other software (Embrace, Extend, Extinguish, MS Windows), it's a stretch to assume this will also translate into making GitHub worse: Being worried makes sense but I remain optimistic about GitHub.
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?
Funding to pay the core team (via revenue/grants/VC) requires a lot of leadership attention for any independent company that is developing an open-source project as its main activity. Yet more leadership attention goes into other administration (Taxes/hiring/legal/policies/etc.).
I don't have any direct context, though I have run an open-source business (Zulip) for the last decade wearing both the CEO and technical lead hats.
But my simulation is that the Bun leadership team might well be spending 2x as much of their time working on the technology as they reasonably could have as an independent venture-funded company, just because they don't have to do all that other stuff anymore. (There's of course probably a significant bias in that focus towards whatever Anthropic needs from Bun, only some of which other users may care about.)
So I agree. Personally, I would not be concerned unless you see the tell-tale signs of the team being reassigned to other priorities at the buyer, which tends to be obvious, because, say, the GitHub project activity falls off a cliff.
This looks like a vanity project: the value gained switching from Zig to Rust is likely to be negative at best, without even the usual caveat devs use of "learned a new skill".
Nope. The need to monetize, and the fact that an acquihire costs money, is exactly why relying on a specific runtime should concern people.
Bun has never really been well run. Every feature it had was full of bugs and gaps. And every release fixed a few but broke others.
They released more major features and breaking changes in their last patch release than most software sees in two major versions.
I've been using it just as a script runner and npm package manager basically, and it's incredible the amount of work you have to do to find "good" versions. We've had patch versions suddenly freeze on install more than once, we couldn't upgrade for quite a while due to this. I think they broke postinstall scripts with trustedDependencies entirely two minor versions ago - not a mention in release notes, and somehow no one reporting it in GH issues. In 1.1 or so you could get Bun to do trustedDependency builds in postinstall, and then after that you couldn't. I looked around for release notes and saw nothing mentioned. It's been broken for months.
There's a GitHub issue for the freeze thing. Their security scanner passes the full dep list as CLI arguments; with a large monorepo on Linux you blow past ARG_MAX. The spawn silently hangs with no error, and --ignore-scripts doesn't help because the scanner is separate from postinstall. Broken since at least 1.3.5.
Why do people use Deno and Bun over Node? I think it's neat that there are competitors for JS runtimes, but I really don't understand what advantages I'd get by swapping to one of these over Node. Bun has no REPL and a worse JS engine; Deno is just Node with a restrictive, annoying permission system and no sqlite. Both claim better performance, but that only seems true in cherrypicked benchmarks, and in my tests (granted, about a year ago at this point) both alternatives under-performed Node in my workloads. What am I missing?
EDIT: Actually I just remembered I delivered a small ERP tool to a business a while back and I did opt to use I think Bun for that because it had the most robust tools to wrap a project into an `*.exe`, that was definitely a better experience than Node. Though since that was dependency-less JS I did the whole thing using Node and then just shipped it with Bun.
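For anyone wanting to try the same thing, the single-file executable flow is one command (flags as documented for `bun build`; `app.js` and `myapp` are placeholders):

```shell
bun build ./app.js --compile --outfile myapp
./myapp
```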
I switched to Deno because it is the only option of the 3 that allows a monorepo workflow without building .d.ts files. Bun and Node both do type stripping or compiling of TS, but it only works for the entry package of the running script, not for any of the linked dependencies from the same repo.
There are still things I dislike about Deno, but it really does make package development a lot simpler. JSR is a great upgrade from NPM, and Deno makes it so simple to publish to both NPM and JSR. Strict IO permission system and WebGPU support are also nice to have.
> wrap a project into an `*.exe`
Deno makes this simple too. Though that's where its bundling features stop. Honestly I am okay with that; I'd rather use Rolldown or Vite for web or library bundling.
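For reference, Deno's equivalent is a single command too (flags per Deno's docs; `main.ts` and `myapp` are placeholders):

```shell
deno compile --output myapp main.ts
```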
Deno has been great for wrapping the dozens of REST APIs I need to use in MCP. The no-compilation thing means I can push and it's literally deployed in seconds. I run several dozen of these little servers for various use cases; it's a very cheap way to build an automatable life.
> Both claim better performance, but that only seems true in cherrypicked benchmarks, and in my tests (granted about a year ago at this point) both alternatives under-performed Node in my workloads.
1) You need to retest, mainly because Bun's own native tools should be faster than Node's.
2) My experience is the opposite: for my niche uses, the rendering process finishes 2-3x faster with only a few changes to use Bun's tools.
In what particular way? I've been using Typescript a lot more recently (unfortunately XD) and I've found the native experience in Node to be totally fine.
> Deno is just Node with a restrictive, annoying permission system
I find Deno's permission system amazing! (although I didn't stick with it until v2)
Everything is closed by default but you're able to write code like normal.
Whenever it needs a permission the code pauses (like `debugger;`) and the terminal asks you "hey, should this script have access to this file/folder"?
- You say yes and the code continues (no need for exceptions).
- You say no and the code stops.
Then after your program has run, you put only the answers you said yes to in a deno.json file and it never has to ask again.
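To make that concrete (paths and env var names here are made up), the grants you freeze end up as flags like:

```shell
# Deny-by-default: this script may only read ./data, write ./out,
# and read the API_KEY env var; everything else still prompts or fails.
deno run --allow-read=./data --allow-write=./out --allow-env=API_KEY script.ts
```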
---------------------------------------
I'm currently working on a project that takes in a heap of files from one set of devs, processes them with a heap of files from another set of devs, then compiles and outputs the final product.
The file structure goes like this:
1. Group one devs
2. Group two devs
3. Build output
4. Compiler
So group one only works in their folder, and group two only works in their folder, but needs to see group one's folder.
With Deno it's stupidly easy to do stuff like:
- Scripts in group one only have file read access to group one.
- Scripts in group two only have file read access to group one and two.
- Scripts in the compiler only have file read access to group one and two's folders, only have file write access to build-output folder, and can read the env file in the project's root directory.
- One specific file is only allowed to access a specific URL and port
- Another specific file is only allowed to use the FFI to access a specific shared object.
I don't need to worry about a dev's script accidentally using the wrong file because they messed up the path.
I don't need to worry about a dev accidentally overwriting a file and losing data.
I don't need to worry about a dev blindly going down the wrong road because an LLM convinced them to.
I don't need to worry about a dev using LLMs agents that are trying to make the project do something it's not supposed to do.
I don't need to worry about a dev including a dependency that's doing what it shouldn't be doing.
I don't need to worry about the equivalent of `rm -rf ./$BUILD-OUTPUT` but the env file wasn't set up correctly and $BUILD-OUTPUT is empty/undefined evaluating to `rm -rf ./` and nuking the project's root.
I don't need to worry about supply-chain attacks.
I don't need to worry about namesquatting attacks.
There's so many things I don't need to worry about.
It's such a breath of fresh air.
It's just: you guys read from here, other guys read from here, the compiler writes to here.
Whenever something doesn't fit, the program stops and tells you what file is trying to access what permission.
---------------------------------------
aside: Node added a permission system but it's completely broken by design. Everything's open and you have to manually close each permission yourself. Oh, you don't want this project to have file write permissions? Let's just turn off the file write permissions (and forget to also turn off the subprocess permission to spawn a shell which rm -rf's the wrong folder).
otherwise, bun has a big "batteries included" thing going on.
For instance,
- Bun.$ to run shell commands
- an entire redis client at Bun.redis
There are dozens of other examples like this
For rapid prototyping, complex glue scripts, etc. it's an absolute joy to work with. There is often no reason to pull in any dependencies to accomplish what you want.
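To give a flavor (this needs the Bun runtime, and `Bun.redis` needs a reachable Redis server — treat it as a sketch of the documented APIs, not tested code):

```typescript
import { $ } from "bun";

// Bun.$: shell commands as tagged templates, with safe interpolation.
const branch = (await $`git rev-parse --abbrev-ref HEAD`.text()).trim();
console.log(`on branch ${branch}`);

// Bun.redis: the built-in Redis client, zero dependencies installed.
await Bun.redis.set("last-branch", branch);
console.log(await Bun.redis.get("last-branch"));
```

With Node you'd be reaching for execa and ioredis before writing a line of your own code.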
I work on Bun, and this post is confusing to me. I personally, and the Bun team, continue to dogfood & make Bun better every day. Our development pace has only gotten faster, and Bun's stability has improved significantly since joining Anthropic.
Here are some things shipping in the next version of Bun:
- 17 MB smaller Windows x64 binaries [0]
- 8 MB smaller Linux binaries [1]
- `--no-orphans` CLI flag to recursively kill any lingering processes spawned [3]
- SSL context caching for client TCP & unix sockets, which significantly reduces memory usage for database clients like Mongoose/MongoDB [4]
- Experimental HTTP/3 & HTTP/2 client in fetch [5]
- Experimental HTTP/3 support in Bun.serve() [6]
- Bun.Image, a builtin image processing library [7]
(Along with several reliability improvements to node:fs, Worker, BroadcastChannel, and MessagePort)
The Anthropic acquisition also means Bun no longer needs to become a revenue-generating business. We are very incentivized to make Bun better because Claude Code depends on it, and so many software engineers depend on Claude Code to help get their work done.
Acquisitions in this industry tend to lead to a certain inevitable conclusion. The software that has been acquired gets worse as the original team members cash out and their culture is replaced with the culture of the new owner.
Perhaps Bun will be the exception, but you can't say that the concern is unfounded.
The CEO of Anthropic has a habit of making outlandish predictions about how AI is so very close to replacing human programmers. Anthropic has been applying this belief to Claude Code and it has become a giant heap of unmaintainable spaghetti.
Hasn't your team shrunk a lot? Word on the street is that many of Bun's employees left or let go in the time leading up to the acquisition. How many people are left working on Bun?
Has development velocity increased because you are merging large quantities of unreviewed LLM generated code? If so, I would be very worried about future stability if I used Bun.
Saying that you “work on Bun” is such a radical understatement. I have my reservations about Anthropic, but I don’t see how Bun could go wrong with you at the helm. And I’m sure that you are putting the stability and funding of a larger organization to good use :)
I’ve been a Bun maximalist since the beginning. Thank you Jarred!!!
Perhaps it could go wrong because he uses AI robots to generate responses to issues on claude code that are also generated by AI robots? Just bots talking to each other like moltbook. It shows a level of AI maximalism that is absurd, concerning, and funny. But probably par for the course for someone working at Anthropic. I can imagine being surrounded by people doing similarly foolish things only encourages the foolery.
The best feature Bun delivered recently is portable binary. That portability is a huge deal to me as my users are often on ancient Linux distros. Thank you. Both node and deno require recent Linux, more exactly, recent glibc.
I think velocity is a real risk to stability, dogfooding or not. That's what made me swear off the python transformers library. It's doubtful that LLMs will change that calculus for the better.
Hey Jarred, first of all, thanks. I've been doing backend JS since the first release of node and bun is genuinely the first really big improvement in terms of DX. It's an absolute delight to build glue and scripts with... Bun.* just seems to have everything I need. Bun.$ is revolutionary. etc. etc. I'm hoping to run a collection of backend services on it in the near future but it seems the general consensus is that there are still some gremlins holding it back (memory leaks, etc.)
Can you shed a little light on the recent giant rust based commits though? Are you guys moving away from zig? These kind of big curious movements and the spectre of giant LLM-based commits are not exactly confidence inspiring.
I just spent a couple hours migrating my knife sharpening website backend from Bun to Node. Feels good to avoid that lock-in. I was initially gung-ho for Bun but increasingly unsure about it. Things I'll miss for sure:
- Querying sqlite with tagged template literals
- Bun.password.verify being argon2 is a better default
Why not just write a small helper library to add back the features you miss? Node includes SQLite and Argon2 at least, if the issue is the interface then that is easily fixed.
I agree with OP, and understand why to some it feels premature.
We live in a vastly different world than before, where people are more conscious of ethical concerns and willing to stand their ground to avoid repeating past mistakes.
It might be premature from a tech standpoint, but it makes sense as an ethical concern. I don't think misconduct is as easily backtracked as it used to be, and preemptive measures are needed to avoid the large impact of those decisions.
> where people are more conscious of ethical concerns and willing to stand on their ground to avoid repeating past mistakes
Would be interested to hear what makes you say that. I don't see anyone being conscious of ethical concerns more than they were before. I can see slightly more BDS people, for example, but outside of that not much.
Given the complaints about Firefox and Safari not adopting Chrome OS Platform APIs, and shipping Chrome all over the place, I am not sure about people standing on the ground and ethical considerations.
I don't think Bun worked well before the acquisition. Don't get me wrong, I used it all the time for little scripts, but I would never ship a service at work on Bun. Between memory issues and incompatibilities that never get fixed, to me it's a nice toy that did a great job of exposing room for improvement in Node.js.
For example, I'd been following this issue https://github.com/oven-sh/bun/issues/14102 and eventually all the libraries shipped "if bun do x" workarounds into them, which is the opposite of compatibility.
Yes, I've tried to run it in production on a couple projects. I had to back out from bun to node on both. One, there were huge memory leaks like you mentioned. The other, there were API differences that threw errors, in TextDecoderStream and such. Decided I won't try again until bun v2.
The author closes by enumerating some of the things they like about Bun which are not included in pnpm. The list is basically: native TS support, a vite-style bundler and a vitest/jest style test runner.
Other than a bundler, Node already has all of these. The test runner syntax differs a bit, but otherwise TS "just works" out of the box, and the built-in test runner is totally capable. Not sure I see the need for such a lament over Bun.
To be fair, Node didn't have any of these things until Deno & Bun challenged it. Deno didn't seem to move the needle by itself very much for whatever reason, but Bun's existence has had a tangible effect on the Node Technical Steering Committee. I would even argue that much of the current impetus has been driven by Jarred Sumner's savvy social media marketing. It got people talking, and Node is better because of it.
Additionally, Bun's push for covering as much of the Node API as possible has pushed Deno towards the same level of compatibility, and now most code is basically runtime agnostic. I'm not sure if I'll ever actually use Bun in production, but I'm glad it exists because the JavaScript ecosystem has been much improved simply due to its existence.
I would too ... but not as the winning competitor.
For the first year or two of its existence, Bun tried to do npm, but better. For the first year or two of their existence, Deno tried to reinvent npm.
The key result is that after that first year or two, Deno had to walk back their decisions to create a Node-ecosystem-compatible tool... and as a result, they're now significantly behind Bun (at least by all metrics I've seen).
This is cool! But AFAIK bun promises to be a one-stop-shop for all your JS/TS dev needs, while Perry is "just" a compiler from Typescript to native executables.
Bun is basically a lost cause in the same way that Claude Code is. Many people love both, and that's fine as long as it is their personal choice. But freedom-loving orgs can never, ever depend on either for anything, or build anything on top. The main problem in this picture is that projects who don't understand or remember the Internet Explorer hell are using Bun-only features to a degree that you cannot just switch. Even opencode, who should have the biggest aversion to depending on Anthropic, is doing this, and it drives me insane. Have you all lost your minds, or let your agents blindly lock you into whatever hell wants to own you?
Why do people want this? Shipping constantly is how software breaks. You want tools that are good and stable, not constantly churning. I wish software developers would wake up to the idea that velocity is not a marker of quality.
Careful, focused work can easily sustain daily, or almost-daily, shipping. We've been doing it for decades without LLMs.
LLM-brain is pushing people into continuous, by-the-hour shipping, and it is absolutely unnecessary, creating code at a rate that cannot possibly be kept up with in terms of quality, performance, and security.
But are you saying the harness is driving you insane? Or the model? Because Bun is only the harness, and that part has been improving over time if you stay on the stable channel.
Does bun have a formal roadmap? I occasionally see some of the changes that Jarred posts on X, and I wonder if they're really meaningful or not (perf improvements are always good). It also seems like a lot of the recent contributions are ai authored.
I tried using bun for a project earlier this year and learned that you can't use testcontainers (works fine w/ Deno).
I don't think so, but a recent release includes a built-in terminal markdown renderer, which suggests, even if handy, that most of the focus is on making Claude Code great. I am not worried though, at least not yet.
> - too much ai chatter. so many examples of it failing to work. ill prove it by showing the most recent ai-generated pull request. yep, it’s failing.
i will admit that my feedback on the above items was not very loud, but there have been no attempts to correct this vision.
One thing is sure: Claude has become terrible. Criticize any code Opus 4.7 created and it starts a blame game. Also, it denies that a version 4.7 even exists. Will look into moving back to ChatGPT, which I quit because of the mandatory spyware bs they added (which I believe they have since nuked).
I still don't think Bun is production ready.
We just ripped bun out of a bunch of our production services. CPU runaway and memory leaks. All solved by switching back to nodejs.
This post seems to "throw doubt" on Bun based on the OP's experience of Claude Code. But this seems unnecessarily indirect. It's not like Bun is hidden software: it's open source and actively developed.
So the more direct question would be: How has Bun actually been since the acquisition?
From what I can tell they have been responding to users as fast as before, and improving the product as well as before.
I made this exact same decision (bun -> pnpm) for similar reasons, mostly because I didn't like how haphazardly a core part of the stack was being vibe coded. Too many changes too quickly for something that's supposed to be stable.
Why did you have to stop using Cursor? I ask this as someone who uses Cursor, but recently at a conference I heard it referred to negatively several times, though in a very vague sense. I don't really have a dog in the fight; I'm using it because that's what the other dev I work with is using.
There is the SpaceX acquisition rumor, but that's not why.
I only use Cursor through the CLI, and while the UX of the CLI is pretty bad, I've found their harness (the prompts they use and orchestration of LLMs) to be nothing short of incredible. I can't comment on their agent development environment given I haven't spent a lot of time with it.
The reason I'm moving away from Cursor is cost. Unfortunately, if you want to use the SOTA models from both OpenAI and Anthropic you basically have to go direct through their subsidized plans.
I agree with your assessment that the harness is incredible and so I get a ton of mileage out of Auto + Composer 2. This is my workhorse.
Admittedly, with Opus 4.6+ and GPT 5.5, I just haven't used them much, and as I gain more experience I can see what the hype is all about. But to me, the answer isn't the $200 max plan, it's bifurcating the work. Call me a spendthrift!
I personally switched back to vscode as I started using Claude and Opencode more for the AI flow, and I didn't see much added value any longer. Also, I was incredibly frustrated that they decided to hide the close button and finally, there were weird issues with editor groups spawning at unwanted times. They might be able to fix it, but I felt that they were starting to reach the limits of what you can do with a "live fork".
OpenAI and Anthropic are both destined for doom, for sure. There's no way around it, and it is all in the math. Bun would be a casualty. It is only a matter of time.
Only company that would survive the AI race - the one where the current wave was actually invented along with the research paper, the libraries and even specialised hardware: Google.
Google has a serious problem with its product management culture (long list of products and projects, people even skeptical of Flutter) otherwise they would have surpassed Anthropic long ago.
Google seems profoundly uninterested in the agentic coding world though. gemini-cli is underwhelming, Antigravity not super compelling, and the Gemini model itself absolutely terrible and non-competitive in basic tool use necessary for coding, even inside their own harnesses.
It's fine for other purposes though. Which are arguably a much larger and lucrative market.
TBF, I really haven't done much of anything with Bun other than occasional module testing. I mostly use Deno for my day to day, including a lot of shell scripts the past few years. I liked the newer ergonomics a lot, direct module references in repositories is really nice for shell scripts.
That said, I'm worried about them having good enough monetization while keeping features open... or at least able to be replicated by others. So I can understand some of the concerns.
Bun is basically a wrapper over JSCore. I don't think it's that big of a feat. Furthermore they are heavily invested in vendor specific APIs which I think is not good.
This isn't anything new and I feel the same way about Deno. We can argue about exactly how much trouble any runtime is in today vs yesterday vs tomorrow but VC funding of a javascript runtime feels inherently unstable to me.
The key question is how much unique tooling you're relying on. If you can switch to Node tomorrow, great. If you can't, make sure you have a contingency plan.
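One cheap form of contingency plan, sketched below (the function name and file layout are my own invention, purely illustrative), is to stick to the `node:` built-ins that Node, Bun, and Deno all implement, and quarantine any runtime-specific API behind a single adapter module:

```typescript
import { writeFile, readFile } from "node:fs/promises";
import { tmpdir } from "node:os";
import { join } from "node:path";

// Portable: only `node:` built-ins, which Node, Bun, and Deno all support.
// Any Bun-only call (Bun.file, bun:sqlite, ...) would live in one small
// adapter module instead, so switching runtimes means touching one file.
export async function saveNote(name: string, text: string): Promise<string> {
  const path = join(tmpdir(), name);
  await writeFile(path, text, "utf8"); // no Bun.write here
  return readFile(path, "utf8");       // no Bun.file().text() here
}
```

Code written this way can move between runtimes tomorrow; code threaded through with `Bun.*` calls cannot.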
If not VC funding, then what? Volunteer work? So other people can make money off it?
Our industry has no answer how to fund infrastructure.
You've got FAANG companies using open source projects built by volunteers and doing meagre grants every once in a while, not nearly enough to pay a SWE salary. A smattering of hard to get grants from NLnet, etc. And then places like Anthropic or Grok or OpenAI "buying" open source teams to pull them inside, which inevitably leads to drama.
I don't know what the answer is, but there's a serious issue here. Similar situations in the 80s were why the FSF was founded and the GPL established. (Not to fund, but to protect the rights of authors and users)
pnpm is even worse. There is no way to bootstrap it without binary blobs, making it an easy target: a supply chain attack waiting to happen that could hide in plain sight indefinitely.
I think the direction Claude Code and Anthropic are taking is to force-hide stuff from you. Some hopefully remember the shitstorm that happened when they changed "Reading xxx.yy" to "reading 1 file" or "reading 2 files".
More changes like this came, and they were either not configurable or very hard to configure. I understand the business idea behind it: make people use AI as much as possible, get the human out of the loop. More training data. More token usage, JUHUU.
However, I think that made Claude Code so much worse and so much more untrustworthy. It's a sneaky attempt to take the steering wheel away from you. And if you follow that logic, more and more things start to seem reasonable.
But mainly, for now, it has just generated a lot of distrust for me.
Aube[0] seems interesting to me; I submitted it as a Show HN after hearing about your post. It's created by the same person who made mise, and I actually discovered it when I was browsing the mise.en.dev website.
I still use bun, but I think there are some other pathways, so I am not that worried for myself personally. But that's also because I more often than not code in golang rather than typescript/javascript.
What is there to worry about? If we believe the AI crowd, Bun and the entire JS ecosystem are done for. Dead. Nothing to worry about since nothing's left.
If, as claimed, everyone and his malnourished cellar rat can whip up a SaaS on a whim, then why should that SaaS be built upon chromium+js+http instead of tcp+native ui?
Remember, choice of ui is no longer a constraint. Nothing is a constraint or so they say.
So it follows that all this javascript stuff can at last die.
I still see no monetization with Bun and Deno to keep them going.
You see this all over the place with other programming languages.
The ones that have bleeding edge features do so, because there are companies, or universities (for their PhD and Msc thesis), that invest into those ecosystems.
In the end nodejs will keep improving, with Microsoft's and Google's backing, and that will be it.
All these complaints about Claude code are mostly resolved if you pay for your usage with direct API pay as you go. It’s not cheap but nearly all the complaints I see about Claude code are due to the fact the subscription plans seem unsustainable from a cost perspective.
> But from the outside, Claude Code looks like a tool moving in the wrong direction. More restrictions, billing weirdness, surprise behavior based on text in commits. That is textbook enshittification.
I've never used Claude Code, but this person doesn't understand what "textbook enshittification" means. "Enshittification" is a feature of certain kinds of business models, progressing through the following stages:
1. Giving away a product free to users, subsidized by venture capital, to gain a monopoly
2. Switching to advertising, then abusing users on behalf of the real customers, advertisers
3. Using monopoly power to abuse real customers (advertisers) to extract as much money as possible
Anthropic's business model doesn't have a "user / customer" dichotomy; their paid users are their customers. And they don't have a monopoly they can use to extract money yet.
ETA: In other words, "Enshittification" isn't just random; you're making the user experience worse in order to make advertiser experience better; and then making advertiser experience worse in order to extract maximum profit. The only complaint that could vaguely be related to profit is the OpenClaw stuff, and that's entirely due to trying to keep the "all-you-can-eat" model for non-OpenClaw users, rather than having to switch everything to metered.
Mostly in my day to day routine, where I use Claude Code maybe 90% of the time, I don't see that it's become that bad. Yes, they've made some questionable decisions on API usage and OpenClaw, but I feel like this post is making it out to be worse than it is.
That being said I’ve been worried about the future of Bun anyway. Especially if the AI bubble pops. Then again, it’s open source.
> Will we see issues start popping up in Bun that make it seem like the team doesn't even dogfood their own product? I don't know, but I'm not sure I want to continue using it just in case.
I sympathize with the general premise. The reaction to move away seems premature though.
It sounds like `bun` is still performing just as well as before, and this sentiment isn't based on concrete changes. I also wouldn't expect infrastructure like `bun` to evolve in the way a consumer-facing product, especially one scaling as quickly as Claude Code, can.
I’m confident that any unhappiness with Claude Code is at least 95% downstream of Anthropic seeing demand scale their revenue by ~3X in 6 months from a $multi-billion annual base.
Their product focus, roadmap, or execution is likely a rounding error in the face of that tsunami.
Frankly, it’s shocking they’re doing so well relative to, say, GitHub.
So who controls NodeJS? https://openjsf.org/governance has Microsoft as the chair. And Microsoft owns npm. It's kinda hard to avoid a corp controlling these tools.
The author seems more focused on the thing where Anthropic fights OpenClaw usage unless you have the right billing set up for that. Frankly I just don't care about those complaints, all the LLM services want you to set up a non-subsidized billing method to use OpenClaw because it uses lots of tokens. It doesn't mean they're going to crap on Bun.
The only reason I don't use Bun is I never ran into a situation where Node didn't cut it. Even though my least favorite tech corp controls Node.
I used to be a fan of Bun, but the way it keeps adding bloat makes me seriously doubt its future. Also, it seems like they are doing a lot of vibe coding without taking enough time, which raises other questions.
Node.js is also more stable, and it has started supporting TypeScript out of the box. I don’t think Bun will have many advantages after Node 26.
> and it has started supporting TypeScript out of the box
Node only does type stripping though. If you want proper TS support you still need a compiler.
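A minimal sketch of the distinction: type stripping only erases annotations, while TypeScript-only constructs that emit runtime code (enums, parameter properties, namespaces) still need a real compile step:

```typescript
// Type annotations are simply erased by type stripping, so this runs
// unchanged when Node strips types from a .ts file:
function add(a: number, b: number): number {
  return a + b;
}

console.log(add(2, 3)); // 5

// An `enum`, by contrast, generates real JavaScript at runtime, so
// erasure alone cannot handle it; it needs an actual transform step
// (tsc, or Node's --experimental-transform-types flag):
//
//   enum Level { Low, High }
```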
> I don’t think Bun will have many advantages after Node 26
There are tons of advantages. For instance, Bun includes a lot of features that would need a third party dependency in Node: db driver, S3 client, watch mode, bundler, JSX support, etc.
Why would you want DB drivers and S3 clients in your runtime? That’s exactly what 3rd parties are for, you don’t want to have to update your runtime for a new version of your drivers
I wonder why Anthropic chose to spend money on Bun when they could have easily spent those resources on Go, which is fairly easy to use and fast. I'm sure their SWEs could easily build everything in Go. Anyone have insight on why?
If I had to guess, it comes down to speed of iteration. Claude Code is built on JavaScript, so Bun aligns well with their current stack.
Switching to Go or Rust would only make sense if performance were the main priority, which doesn’t seem to be the case. Their current setup lets them ship quickly. A rewrite in Go would likely slow that down.
Codex moved to Rust, and you can see the trade-off. Performance improved, but release velocity dropped. They’re also still catching up to Claude Code, so they don’t face the same pressure to ship as fast.
JS is used because it's (still) the only code you can run in a browser. Although node and bun are regular OS processes, their use/popularity traces back to that browser environment one way or another.
My guess: JavaScript runs in the Browser as well as on the OS. That way you can train a model to be able to interact with both fairly simple. You can also see that their harness, claude-code is also written in js. So I guess they are quite invested in that language anyway.
Yeah, it's the same pattern you saw in the early react days where open source devs would try to "woo" the react core team into getting recognition to sell consulting services or courses.
The bun people likely have some fucked up incestuous business relationship with some >dev manager at Anthropic, and the same pattern is repeating. Only this cycle it's going straight to acquisitions, which honestly seems like a worse strategy, and Anthropic will def can the bun engineers in less than 3 years, or whenever they face an actual budget crunch that they can't stave off with more gulf money.
I’m wondering why Anthropic, who has “the most powerful, hold me bro, AI in the world,” just didn’t vibe code their own, better version of bun? Hasn’t Dario said that coding is cooked in 6 months, like 12 months ago?
Ironic that this comment is in thread advocating for usage of Go:
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike"
So Bun is going to become a fully vibe-coded codebase, with important details lost in translation.
I’ve been a huge supporter of Bun, but now I’d be extremely reluctant to deploy it in production.
It’s also a bit disappointing to see Jarred change his mind so quickly. He’s an incredible developer with deep knowledge of how to write clean, maintainable, efficient code. But now it feels like his talent is being sidelined, and Claude has been given full control over the codebase.
Claude Code itself seems to be built that way: they keep piling on new features every day, but it has become this big, bloated Frankenstein slug.
Bun used to be a small, elegant, clean codebase. Now I’m worried it may turn into an unreliable mess.
Let them cook. Anything that they can do to get rid of the absolute hell that is dependencies in the JS ecosystem is worthwhile. I really don't care what they add as long as it's maintained
I see the word “enshittify” being thrown around casually about Claude Code. We’re far from that part of the Enshittification cycle still. This is just a mismanaged product and the result of an extremely competitive market that moves too fast.
Never attribute to malice that which can be adequately explained by incompetence, etc.
Yeah, I'm none too happy with anthropic right now, but what's happening to Claude code is just your typical garden variety mismanagement of a project that grew way too fast for its owners to reasonably handle.
Technically, no, not textbook enshittification. Enshittification was originally meant to refer to companies squeezing two-sided markets, not products just getting kinda worse.
Personally, I suspect that Bun is a Silicon Valley attempt to lock some companies into its stack (similar to what cloud providers, Next.js + Vercel do). Especially now that Anthropic has become an owner, I'll be keeping Bun at a considerable distance.
The funniest part to me is that 10–15 years ago, companies were stuck in the development process due to binary (closed) dependencies. Now they're jumping into the same trap under a different name.
Maybe I’ve missed some scandals, but so far OpenJS Foundation is the best thing that has happened for the JavaScript ecosystem.
What an utterly baffling post. Moving away from a tool that you love proactively because you're concerned it might degrade in quality at some time in the future? Ok man whatever.
The issues with Claude Code lately look to me like symptoms of being part of a service that is experiencing insane growth (fastest growth in history, by far [1]), while being severely constrained on adding capacity (GPUs are hard to get quickly right now, even if you have the money). I assume they're constantly fighting fires trying to keep the core use cases of Claude Code working, even if that means limiting OpenClaw usage in somewhat draconian ways.
It's annoying, but I don't see this as a bad thing at all for Bun.
No, all the issues are symptoms of trying to slop-code a functional product. Anthropic has admitted they dogfood heavily, and issues like [1] from the article could only be caused by a text generator. I refuse to believe Anthropic employees are that stupid.
Here's how I evaluate whether I'm going to use a given bit of OSS:
- Is the project important to me or can I replace it? If the latter, I'm more likely to allow failures of other criteria. If not, I need to be more strict. Bun is easy enough to replace if something were to happen to the project. Easy come, easy go.
- Are there any red flags in financing that could become problematic? Many VC funded OSS companies fail this test for me. What happens when they don't make it? What happens post IPO if they do? What happens when they get acquihired? Mostly that's up to shareholders, not developers. Most VC funded companies actually don't make it, and that's normal in the VC world: a few companies make it, everything else fails quickly. And there are a few examples of projects that have changed licenses under pressure from shareholders. That's why this is a red flag to me. I've used Redis and Elasticsearch, for example. And I switched away from Mongo before they changed the license. I used Terraform before they changed the license. All negative examples here.
Bun initially wasn't great on this. But the Anthropic acquisition has improved things a bit. It's still a risk. But it's unlikely they have any plans for Bun other than just keeping it alive by employing the main people working on it. Anthropic itself might still fail of course.
- Has the project been around for a long time? If so, it likely has a stable community and funding. There are no guarantees, but the older the better. Bun is pretty newish still.
- Is the project stable and under active development? If it's stable because nobody makes changes anymore that's usually not a great sign. If it is stable despite a lot of active development, that's really positive. It means somebody competent is in charge. Bun seems pretty good on that front.
- Is the project otherwise structured right to be future proof? For me, future proof is a combination of contributor community, commercial activity, and licensing. The more diverse the contributor community the better. If there are multiple companies sponsoring and making money off a project, that makes it less likely that a single one can hijack it for their own good. This is more common with permissively licensed software (but there are exceptions). Bun doesn't have much commercial activity around it, and the regular contributor community is tiny. One person seems to be doing most of the work, with only a handful of notable other contributors who are probably all Anthropic employed at this point. Of these factors, the dependence on a single person looks the most problematic to me.
So, the overall score for bun is not perfect (there are a few potential red flags), but I'm happy to risk using it because it's not that critical to me and easily replaceable.
My read of the whole Anthropic acquihire is that it is an improvement over the starting point, which was a VC funded company that was probably going to fail otherwise. Beyond that: good tech, and generally nice to use. I could see Anthropic going bad and this project surviving in one form or another. So, that doesn't have to be a show stopper.
AI tech bros when the bubble is blowing up, lol. LLMs were always bad and unreliable for programming due to their non-deterministic nature. And regarding pricing policies, yeah, they have to make a profit somehow; it's just that the tech is too expensive vs the actual (almost non-existent) ROI it generates. Let's see if people are willing to pay when these guys aren't subsidized by the delusional VCs anymore...
tl;dr: I have concerns. Not because Bun is bad. Bun is great. It is not bad. But Claude was good. And now it is bad. Bun is owned by Anthropic. Transitive property. Maybe. I hope I’m wrong.
I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.
Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Especially given the different contexts of the two: Claude Code is a gem of Anthropic's, experiencing extreme growth, where any change can result in billing issues.
Bun is a JS runtime and, regardless of its growth, can focus on being the best runtime possible: it doesn't impact billing or the bottom line of Anthropic, so they don't have to rush out patches due to abuse, unlike CC.
It's unclear how it will pan out over the next years; it's still very early in the acquisition to see if anything will change, but I'm not concerned just yet.
It's interesting how quickly people buy the "abuse" line of thinking. We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription. That is independent of which agent/harness is used. The fair/real price for profitable use is the pay per use token pricing.
These labs play the game of trying to kill competition in the harness game (because third party harnesses risk commoditizing the underlying LLMs once they are all good enough), while playing a game of chicken with each other how long they can burn money that way before they have to give up.
At some point they have to price their product fairly, and the only hope they have is to have killed all competition by then, which is of course a game that they seem to be losing. Useful models are getting smaller and cheaper to run every year, and it has hit a threshold at which we will see continued development of third party harnesses even without the userbase of subscription users.
Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed. The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail. They will have to compete on merit alone, and that is much less profitable.
It's a big leap to go from "some users may be using large quantities of tokens" to "the labs are burning money on subs in an attempt to kill the competition."
Lots of businesses have subscription programs in which a small number of users are money losers, but which in aggregate make money.
It's not even obvious that the labs are losing a lot of money on even a minority of users; the usage caps are fairly aggressive for Anthropic, and a cursory analysis of the likely actual cost of serving tokens shows they are high margin products at the API level and unlikely to be unprofitable within the usage constraints provided to subscribers.
I do think subscription models make commercial sense because users want predictable costs, and it's a club good in which marginal token cost is zero which helps consolidate their customers' purchasing volume to one provider. But that's a different claim than them serving it unprofitably to kill competition.
Also, they (Anthropic) are transitioning many of their enterprise customers to API consumption billing anyway.
> Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
I thought the prime bet was that the winning lab who reaches takeoff through recursive self improvement will make a galactic superintelligence. Not saying I believe this but the people running the labs do. Under this scenario if you are a few months behind at the pivotal time you might as well not exist at all.
> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.
I don't think this is "understood" or "known" to anyone except Ed Zitron. Subscription plans like Claude Code also have rolling usage limits; it could be profitable. Inference is very cheap, and unless you're using OpenClaw no one is actually maxing out the usage window at all times. I'm sure in aggregate the subs are not money furnaces.
> We understood (and knew for a long time) that the large AI labs are not monetarily profiting from subscription users that make heavy use of their subscription.
"profit" is a weird concept in the software business. it might be true that there is an opportunity cost to these users, either because they displace other potential users by using up capacity, or because they would be willing to pay more if forced. but I don't believe that anyone is losing money on inference costs on any of their plans.
> At some point they have to price their product fairly
they are competing in a market. if most of their costs were inference then this would be a good thing, because everyone would have roughly the same prices, so as long as they had the best model they would win. in fact model development costs eclipse the cost of inference, and is something that non frontier labs get for much cheaper by distilling from the frontier companies.
> They will have to compete on merit alone, and that is much less profitable.
that's not really true. google won search on merit alone, and were massively successful as a result. the trick is that everyone from the poorest shmuck to the richest businessman uses google, so they win through scale. in ai, google and openai are making a bet that they can do the same thing. there's only really room for one winner at this game, even two is stretching it, so anthropic has to win by being the smartest model that only high end businesses use. that's a very risky bet.
> Useful models are getting smaller and cheaper to run every year and it has hit a threshold at which we will see continued development of third party harnesses even without the userbase of subscription users.
As of May 2026, how much money do I need to spend to buy hardware to have a local model that is 80% as good as SOTA services for assisting me in writing code?
As for that 80%, how many minutes per LOC will I be waiting, and how many attempts per query will I be wasting while I wait for it to come up with something sensible?
>Basically the prime bet that they made (that one needs extremely expensive hardware to have useful AI) has already failed.
Honestly, I don't think it's that cut and dry. Their bet is that the marginal utility of having a smarter model more than makes up for the cost of the additional high-end hardware.
And honestly, if you look at their frankly insane revenue growth since Opus 4.5 released, they were right.
>The secondary bet that they can lock users into their ecosystem (which requires them to subsidize their harness via unprofitable subscriptions burning their capital) and be able to monetize that later will also fail.
I think we're already past this point, honestly. They lowered usage limits, blocked OpenClaw then tried to remove Claude Code from the $20/mo plan. They have always had low market share for the consumer chatbot market and don't seem to care about catching up to OpenAI there.
What about the data they are accumulating, for non-training purposes? That data isn't of negligible value; the "subscription cost" is really a "harvesting data" opportunity. Don't be naive: our data is incredibly valuable.
> These labs play the game of trying to kill competition in the harness game
Anthropic and Google are arguably playing that game. OpenAI's Codex CLI is open source and entirely optional for use of the GPT Codex models.
If you were right Anthropic's ARR would be going down but it's not. They just surpassed $30B up from $14B two months ago.
The thing is, the harness _is_ the model at the end of the day:
https://en.wikipedia.org/wiki/Turtles_all_the_way_down
> Before the acquisition, Bun had to figure out how to monetize at some point.
I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point". It is also bizarre that some people are still hopeful despite it being acquired by one of the most enormously unprofitable companies in the most enormously unprofitable sectors of our industry.
Are there any situations you would compare this to historically?
To me, the obvious comparison seems to be Docker. Their tooling revolutionized software development and made cgroups and containerization accessible to the masses. Yet they generally seem to have failed to extract payment from users, even with managed service opportunities.
It seems to me that there are substantial obstacles to monetizing a project licensed with even a weaker OSS license like MIT. I think this is especially true for projects that don’t have managed service / “open core” potential.
Any gratis project you rely on runs the risk that it will no longer be provided gratis. That alone is not a strong basis for making decisions.
6 replies →
> I think it is insane that people got into a situation where they had committed to a javascript runtime that had to "figure out how to monetize at some point".
Why? What's the risk? It's open source. Also, speaking of open source, we are happy to commit to open source projects that have no monetization, nor any plans to ever monetize.
1 reply →
I partially agree with you, but I also think that it's good that people can make something they want, that seems to have no monetization path, and have some hope of being bailed out.
It's not great that the search for profit will usually corrupt projects, but the other most common option is that the projects don't exist at all. It's very rare (or it used to be before this year) that someone can do something like this on their own with no compensation. So now at least Bun exists.
1 reply →
It's a bit insane, but the cost of switching to regular NodeJS is low (for all but the most Bun-specific projects).
All valid points though, I'm pessimistic about Anthropic still actively diverting resources to these side quests when tough times hit (which might be in a week for all we know).
I know people say it is unprofitable but I wonder if there is a way to verify it truly is. I won't share any details, but I worked for a giant company which was barely making money YoY, yet somehow the bonuses for the heads got bigger and bigger, driven by a proxy metric related to profit.
There are way too many ways companies arrange to pay themselves and never be profitable to avoid taxes.
3 replies →
You might be underestimating the effect that corporate policies and culture have on the product.
Some teams have a push now to go all in on AI; don't even look at the code. I've seen this in action and the results are probably what you'd expect. Works great at some level, but as complexity accumulates (especially across a team with different "technical vocabularies"), the end result is compounding complexity and mistakes and no person or team knows how the software actually works.
No human testing of software or QA; unit + integration + give AI control over the browser/tool. Yes, this is how some teams are moving forward now. So some of this may be that Anthropic's culture will end up causing shifts in how the Bun team operates and thinks.
If this type of culture and mindset becomes the norm, I think either the models have to get a lot better or the software quality is going to decline.
Matt Pocock has a great talk here: https://youtu.be/v4F1gFy-hqg
Once bad code starts to compound on itself, it's going to be really hard to break out of it.
I don't disagree with the notion, but what is up with the dev community championing influencers who work no real jobs and just sell courses where they reread the docs to you at $500 a pop (this gent, $1k a pop)?
5 replies →
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code. Not for the benefit of the JavaScript community at large. Sounds obvious but I guess that has to be pointed out. Outcomes will follow incentives in the long run.
Bun is not a "product" at Anthropic though, it's a tool for its developers to build products. IMO as long as it remains that way, the incentives for its developers will remain fairly aligned with the incentives of people who use it outside the company.
A good example is React. Facebook's interest is that React be performant (website performance is correlated with time spent on said website), reliable (also correlated to time spent), quick to build on (features ship faster) and popular (helps new recruits hit the ground running). That's fairly well aligned with what developers outside of Facebook want too.
Sure, since Facebook's server is written in Hack it means we'll never get a truly full-stack React, and instead we'll need third parties for the back-end (Next.js, Tanstack Start, etc). But Facebook building React also means it will always be someone's job to make sure this framework works well in codebases with millions of modules.
This is all independent of any shitty practices with their other software. And this has been for decades at this point.
3 replies →
> Anthropic acquired Bun for their own benefit, to protect and grow their investment in Claude Code.
I’m unclear about this. What’s the business case? I use Gemini CLI a lot, which runs on Node, and I can’t see anything that would be improved by using a different JS runtime. It’s not something you notice as a user. Node is mature, stable, and perfectly fit for the purpose.
If Anthropic were public and if these decisions were comprehensible to the average investor, an acquisition like this ought to cause the stock to plummet. Luckily for the people involved, there are no constraints like that in the current market.
This is a good take, and I hope you're right.
One favorable way to phrase it for Anthropic is that they acquired Bun because CC and other internal tooling depended on it so heavily and they questioned its future as a purely OSS project.
It remains to be seen how things will actually unfold.
Own your supply chain. Reduces risk.
Anthropic bought actual engineers to undo the slop their vibe-coders produce with reckless abandon: https://x.com/jarredsumner/status/2026497606575398987
However, these engineers, too, now start to vibe-code with reckless abandon https://x.com/jarredsumner/status/2048434628248359284 and https://x.com/jarredsumner/status/2049780223311548729
you can own your upstream supply chain while simultaneously being less responsive to user pain points
I disagree with the overall premise: Before the acquisition, GitHub had to figure out how to monetize at some point. Now, even though their parent company does some shitty practices with their other software (Embrace, Extend, Extinguish, MS Windows), it's a stretch to assume this will also translate into making GitHub worse: Being worried makes sense but I remain optimistic about GitHub.
You dropped this: </sarcasm>
I am assuming you are European, as I (living in the Netherlands) haven't seen any GitHub outages either...
This is a joke, right?
1 9 of uptime later
I think you have some nostalgia about Github's stability before the acquisition.
> it's a stretch to assume this will also translate into making Bun worse
For me it's far from a stretch, in fact it matches closely a pattern that I've seen repeated many times over at this point.
> Now, even though their parent company does some shitty practices with their other software (claude code), it's a stretch to assume this will also translate into making Bun worse: Being worried makes sense but I remain optimistic about Bun.
Can you point to any examples of a company with shitty practices buying one without shitty practices that didn't end up with the shitty practices diffusing through the newly-acquired company within a couple of years?
I'm not the parent poster which is why I still stick to looking at the people...
If you start seeing the people that created bun leaving Anthropic, then I'd probably start to worry. And I haven't seen any sign of that yet.
3 replies →
Funding to pay the core team (via revenue/grants/VC) requires a lot of leadership attention for any independent company that is developing an open-source project as its main activity. Yet more leadership attention goes into other administration (Taxes/hiring/legal/policies/etc.).
I don't have any direct context, though I have run an open-source business (Zulip) for the last decade wearing both the CEO and technical lead hats.
But my simulation is that the Bun leadership team might well be spending 2x as much of their time working on the technology as they reasonably could have as an independent venture-funded company, just because they don't have to do all that other stuff anymore. (There's of course probably a significant bias in that focus towards whatever Anthropic needs from Bun, only some of which other users may care about.)
So I agree. Personally, I would not be concerned unless you see the tell-tale signs of the team being reassigned to other priorities at the buyer, which tends to be obvious, because, say, the GitHub project activity falls off a cliff.
Just 3 days after the blog post, a branch with a potential vibe-coded rewrite from Zig to Rust surfaced in the Bun repo.
This looks like a vanity project: the value gained by switching from Zig to Rust is likely to be negative at best, without even the usual "learned a new skill" caveat devs reach for.
> I disagree with the overall premise: Before the acquisition, Bun had to figure out how to monetize at some point.
Incidentally, Anthropic needs to figure out how to monetize at some point too.
It’s organizations figuring out how to monetize all the way up.
1 reply →
What came to my mind is Windows.
Regardless of what else is going on, kernel is a separate team, and has very strong incentives to remain competent and sane.
Nope. The need to monetize, and the fact that an acquihire costs some money, are exactly why relying on a specific runtime is where people should have concern.
Bun has never really been well run. Every feature it had was full of bugs and gaps. And every release fixed a few but broke others.
They released more major features and breaking changes in their last patch release than most software sees in two major versions.
I've been using it just as a script runner and npm package manager basically, and it's incredible the amount of work you have to do to find "good" versions. We've had patch versions suddenly freeze on install more than once, we couldn't upgrade for quite a while due to this. I think they broke postinstall scripts with trustedDependencies entirely two minor versions ago - not a mention in release notes, and somehow no one reporting it in GH issues. In 1.1 or so you could get Bun to do trustedDependency builds in postinstall, and then after that you couldn't. I looked around for release notes and saw nothing mentioned. It's been broken for months.
There's a GitHub issue for the freeze thing. Their security scanner passes the full dep list as CLI arguments, large monorepo on Linux and you blow past ARG_MAX. Spawn silently hangs, no error, --ignore-scripts doesn't help because the scanner is separate from postinstall. Been broken since 1.3.5 at least.
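For reference, the limit being blown past there is the kernel's per-exec size cap on arguments plus environment; you can check it yourself:

```shell
# Per-exec limit on combined argv + environment size, in bytes.
# Commonly 2097152 (2 MiB) on Linux; a big monorepo's flattened dependency
# list passed as CLI arguments can exceed this, and the spawn fails.
getconf ARG_MAX
```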
Why do people use Deno and Bun over Node? I think it's neat that there are competitors for JS runtimes, but I really don't understand what advantages I'd get by swapping to one of these over Node. Bun has no REPL and a worse JS engine; Deno is just Node with a restrictive, annoying permission system and no sqlite. Both claim better performance, but that only seems true in cherry-picked benchmarks, and in my tests (granted, about a year ago at this point) both alternatives under-performed Node in my workloads. What am I missing?
EDIT: Actually I just remembered I delivered a small ERP tool to a business a while back and I did opt to use I think Bun for that because it had the most robust tools to wrap a project into an `*.exe`, that was definitely a better experience than Node. Though since that was dependency-less JS I did the whole thing using Node and then just shipped it with Bun.
I switched to Deno because it is the only option of the 3 that allows a monorepo workflow without building .d.ts files. Bun and Node both do type stripping or compiling of TS, but it only works for the entry package of the running script, not for any of the linked dependencies from the same repo.
There are still things I dislike about Deno, but it really does make package development a lot simpler. JSR is a great upgrade from NPM, and Deno makes it so simple to publish to both NPM and JSR. Strict IO permission system and WebGPU support are also nice to have.
> wrap a project into an `*.exe`
Deno makes this simple too, though that's where its bundling features stop. Honestly I am okay with that; I'd rather use Rolldown or Vite for web or library bundling.
I’ve set up a monorepo before that subpath exported plain TS across packages and it just worked (pnpm). You may want to try again.
Deno has been great for wrapping the dozens of REST API's I need to use in the world in MCP. The no compilation thing means that I can push and it's literally deployed in seconds. I run several dozen of the little servers for various use cases, it's a very cheap way to build an automatable life
> Both claim better performance, but that only seems true in cherrypicked benchmarks, and in my tests (granted about a year ago at this point) both alternatives under-performed Node in my workloads.
1) You need to retest again, mainly because Bun's own native tools should be faster than Node's.
2) My experience is the opposite: For the niche uses I'm on, the rendering process is done 2-3x faster with only a few changes to use Bun's tools.
Started using Bun last year for some quick tests and it ended up fully replacing Node for any new projects, after using Node for over a decade.
I've reduced my dependencies 5-10x. Got full TS and JSX/TSX support with zero setup. Watch mode is instant. You can deploy a single binary.
I kept waiting for all the breaking issues people complain online but my experience has been nothing but positive.
> Bun has no REPL
Bun has a really nice REPL, can recommend https://bun.com/docs/runtime/repl
I am gonna check my sources better next time lmao, sorry!
The Bun DX is infinitely better than Node's, especially for Typescript projects
In what particular way? I've been using Typescript a lot more recently (unfortunately XD) and I've found the native experience in Node to be totally fine.
4 replies →
I like Deno because there is no "install" step for users, you just run it.
Deno has sqlite: https://docs.deno.com/examples/sqlite/
So does node, since v22: https://nodejs.org/api/sqlite.html
They even added sql template string queries like recent popular libraries in v24.
I just built a project using it.
Ah my mistake, this wasn't the case last I used it, thanks for pointing this out, I checked briefly and referenced stale data.
> Deno is just Node with a restrictive, annoying permission system
I find Deno's permission system amazing! (although I didn't stick with it until v2)
Everything is closed by default but you're able to write code like normal.
Whenever it needs a permission the code pauses (like `debugger;`) and the terminal asks you "hey, should this script have access to this file/folder"?
- You say yes and the code continues (no need for exceptions).
- You say no and the code stops.
Then after your program has run, you put only the answers you said yes to in a deno.json file and it never has to ask again.
---------------------------------------
I'm currently working on a project that takes in a heap of files from one set of devs, processes them with a heap of files from another set of devs, then compiles and outputs the final product.
The file structure goes like this:
1. Group one devs
2. Group two devs
3. Build output
4. Compiler
So group one only works in their folder, and group two only works in their folder, but needs to see group one's folder.
With Deno it's stupidly easy to do stuff like:
- Scripts in group one only have file read access to group one.
- Scripts in group two only have file read access to group one and two.
- Scripts in the compiler only have file read access to group one and two's folders, only have file write access to build-output folder, and can read the env file in the project's root directory.
- One specific file is only allowed to access a specific URL and port
- Another specific file is only allowed to use the FFI to access a specific shared object.
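A rough sketch of what those invocations look like with Deno's `--allow-*` flags (all script names and paths here are placeholders, not from the actual project):

```shell
# Group one scripts: read-only access to their own folder.
deno run --allow-read=./group-one scripts/group_one_task.ts

# Compiler: read both groups, write only to the build output,
# read a single env variable.
deno run \
  --allow-read=./group-one,./group-two \
  --allow-write=./build-output \
  --allow-env=API_KEY \
  compiler/build.ts

# One script pinned to a single host and port.
deno run --allow-net=api.example.com:443 scripts/fetch_only.ts
```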
I don't need to worry about a dev's script accidentally using the wrong file because they messed up the path.
I don't need to worry about a dev accidentally overwriting a file and losing data.
I don't need to worry about a dev blindly going down the wrong road because an LLM convinced them to.
I don't need to worry about a dev using LLMs agents that are trying to make the project do something it's not supposed to do.
I don't need to worry about a dev including a dependency that's doing what it shouldn't be doing.
I don't need to worry about the equivalent of `rm -rf ./$BUILD-OUTPUT` but the env file wasn't set up correctly and $BUILD-OUTPUT is empty/undefined evaluating to `rm -rf ./` and nuking the project's root.
I don't need to worry about supply-chain attacks.
I don't need to worry about namesquatting attacks.
There's so many things I don't need to worry about.
It's such a breath of fresh air.
It's just: you guys read from here, other guys read from here, the compiler writes to here.
Whenever something doesn't fit, the program stops and tells you what file is trying to access what permission.
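(As an aside, the `rm -rf ./$BUILD-OUTPUT` footgun mentioned above can also be fenced off at the plain-shell level with the `:?` parameter expansion, which aborts instead of silently expanding to nothing:)

```shell
# With ${VAR:?msg}, an unset or empty variable aborts the command
# instead of letting the path collapse to "rm -rf ./".
BUILD_OUTPUT=""
( rm -rf "./${BUILD_OUTPUT:?BUILD_OUTPUT is not set}" ) 2>/dev/null \
  || echo "guard tripped, nothing deleted"
```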
---------------------------------------
aside: Node added a permission system but it's completely broken by design. Everything's open and you have to manually close each permission yourself. Oh, you don't want this project to have file write permissions? Let's just turn off the file write permissions (and forget to also turn off the subprocess permission to spawn a shell which rm -rf's the wrong folder).
Agreed- node/npm are exceptionally well run and designed. Personally I also prefer the non-TypeScriptiness.
JS people like shiny things.
...try `bun repl` in your terminal
otherwise, bun has a big "batteries included" thing going on.
For instance,
- Bun.$ to run shell commands
- an entire redis client at Bun.redis
There are dozens of other examples like this
For rapid prototyping, complex glue scripts, etc. it's an absolute joy to work with. There is often no reason to pull in any dependencies to accomplish what you want.
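For instance, a quick sketch of both (this only runs under the Bun runtime, not Node; the glob is made up, and the redis lines assume a Redis server is reachable locally):

```typescript
import { $, redis } from "bun";

// Bun.$: run a shell command and capture its output,
// no child_process boilerplate.
const files = await $`ls *.ts`.text();
console.log(files);

// Built-in Redis client, no npm dependency needed.
await redis.set("greeting", "hello");
console.log(await redis.get("greeting"));
```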
I work on Bun, and this post is confusing to me. I personally and the Bun team continue to dogfood & make Bun better every day. Our development pace has only gotten faster. Bun's stability has improved significantly since joining Anthropic.
Here are some things shipping in the next version of Bun:
- 17 MB smaller Windows x64 binaries [0]
- 8 MB smaller Linux binaries [1]
- `--no-orphans` CLI flag to recursively kill any lingering processes spawned [3]
- SSL context caching for client TCP & unix sockets, which significantly reduces memory usage for database clients like Mongoose/MongoDB [4]
- Experimental HTTP/3 & HTTP/2 client in fetch [5]
- Experimental HTTP/3 support in Bun.serve() [6]
- Bun.Image, a builtin image processing library [7]
(Along with several reliability improvements to node:fs, Worker, BroadcastChannel, and MessagePort)
The Anthropic acquisition also means Bun no longer needs to become a revenue-generating business. We are very incentivized to make Bun better because Claude Code depends on it, and so many software engineers depend on Claude Code to help get their work done.
[0]: https://github.com/oven-sh/bun/pull/30219
[1]: https://github.com/oven-sh/bun/pull/30098
[2]: https://github.com/oven-sh/WebKit/pull/211
[3]: https://github.com/oven-sh/bun/pull/29930
[4]: https://github.com/oven-sh/bun/pull/29932
[5]: https://github.com/oven-sh/bun/pull/29863
[6]: https://github.com/oven-sh/bun/pull/30032
Acquisitions in this industry tend to lead to a certain inevitable conclusion. The software that has been acquired gets worse as the original team members cash out and their culture is replaced with the culture of the new owner.
Perhaps Bun will be the exception, but you can't say that the concern is unfounded.
The CEO of Anthropic has a habit of making outlandish predictions about how AI is so very close to replacing human programmers. Anthropic has been applying this belief to Claude Code and it has become a giant heap of unmaintainable spaghetti.
Hasn't your team shrunk a lot? Word on the street is that many of Bun's employees left or let go in the time leading up to the acquisition. How many people are left working on Bun?
Has development velocity increased because you are merging large quantities of unreviewed LLM generated code? If so, I would be very worried about future stability if I used Bun.
Saying that you “work on Bun” is such a radical understatement. I have my reservations about Anthropic, but I don’t see how Bun could go wrong with you at the helm. And I’m sure that you are putting the stability and funding of a larger organization to good use :)
I’ve been a Bun maximalist since the beginning. Thank you Jarred!!!
Perhaps it could go wrong because he uses AI robots to generate responses to issues on claude code that are also generated by AI robots? Just bots talking to each other like moltbook. It shows a level of AI maximalism that is absurd, concerning, and funny. But probably par for the course for someone working at Anthropic. I can imagine being surrounded by people doing similarly foolish things only encourages the foolery.
2 replies →
The best feature Bun delivered recently is the portable binary. That portability is a huge deal to me, as my users are often on ancient Linux distros. Thank you. Both Node and Deno require a recent Linux, more exactly a recent glibc.
I think velocity is a real risk to stability, dogfooding or not. That's what made me swear off the python transformers library. It's doubtful that LLMs will change that calculus for the better.
Hey Jarred, first of all, thanks. I've been doing backend JS since the first release of node and bun is genuinely the first really big improvement in terms of DX. It's an absolute delight to build glue and scripts with... Bun.* just seems to have everything I need. Bun.$ is revolutionary. etc. etc. I'm hoping to run a collection of backend services on it in the near future but it seems the general consensus is that there are still some gremlins holding it back (memory leaks, etc.)
Can you shed a little light on the recent giant rust based commits though? Are you guys moving away from zig? These kind of big curious movements and the spectre of giant LLM-based commits are not exactly confidence inspiring.
[dead]
I just spent a couple hours migrating my knife sharpening website backend from Bun to Node. Feels good to avoid that lock-in. I was initially gung-ho for Bun but increasingly unsure about it. Things I'll miss for sure:
- Querying sqlite with tagged template literals
- Bun.password.verify being argon2 is a better default
- HTML imports
- JSX transpilation
- Auto loading .env file
https://burlyburr.com, which hits https://backend.burlyburr.com
Node supports querying sqlite with tagged template literals.
https://nodejs.org/api/sqlite.html#databasecreatetagstoremax...
Why not just write a small helper library to add back the features you miss? Node includes SQLite and Argon2 at least, if the issue is the interface then that is easily fixed.
Claude did write me a simple wrapper so I can keep using tagged template literals in the same way with Node.
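For anyone curious, such a wrapper can be tiny. A sketch (the function name is made up, not the actual wrapper, and it assumes a `node:sqlite`-style handle with `prepare(...).all(...)`):

```javascript
// Minimal tagged-template helper over a node:sqlite-style database handle.
// The literal parts are joined with "?" placeholders and the interpolated
// values are bound as parameters, so inputs are never spliced into the SQL.
function makeSql(db) {
  return (strings, ...values) => db.prepare(strings.join("?")).all(...values);
}

// Usage with node:sqlite would look like:
//   const { DatabaseSync } = require("node:sqlite");
//   const sql = makeSql(new DatabaseSync(":memory:"));
//   const rows = sql`SELECT * FROM users WHERE id = ${42}`;
```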
Node supports auto loading .env and also supports sqlite
Sure, but at least on Node 22 I think I have to pass `--env-file=.env` option to make it pick up .env.
3 replies →
I agree with OP, and understand why to some it feels premature.
We live in a vastly different world than before, where people are more conscious of ethical concerns and more willing to stand their ground to avoid repeating past mistakes.
It might be premature from a tech standpoint, but it makes sense as an ethical concern. I don't think misconduct is as easily backtracked as it was before, and preemptive measures are needed to avoid the large impact those decisions can have.
> where people are more conscious of ethical concerns and willing to stand on their ground to avoid repeating past mistakes
Would be interested to hear what makes you say that. I don't see anyone being conscious of ethical concerns more than they were before. I can see slightly more BDS people, for example, but outside of that not much.
Given the complaints about Firefox and Safari not adopting Chrome OS Platform APIs, and shipping Chrome all over the place, I am not sure about people standing on the ground and ethical considerations.
I don't think bun worked well before the acquisition. Don't get me wrong, I used it all the time for little scripts, but I would never ship a service at work on bun. Between memory issues and incompatibilities that never get fixed, it is a nice toy to me that did a great job of exposing room for improvement in nodejs.
For example, I'd been following this issue https://github.com/oven-sh/bun/issues/14102 and eventually all the libraries shipped "if bun do x" into them, which is the opposite of compatibility.
Yes, I've tried to run it in production on a couple projects. I had to back out from bun to node on both. One, there were huge memory leaks like you mentioned. The other, there were API differences that threw errors, in TextDecoderStream and such. Decided I won't try again until bun v2.
The author closes by enumerating some of the things they like about Bun which are not included in pnpm. The list is basically: native TS support, a vite-style bundler and a vitest/jest style test runner.
Other than a bundler, Node already has all of these. Different test runner syntax maybe but otherwise TS "just works" out of the box and their built in test runner is totally capable. Not sure I see the need for such a lament over Bun.
To be fair, Node didn't have any of these things until Deno & Bun challenged it. Deno didn't seem to move the needle by itself very much for whatever reason, but Bun's existence has had a tangible effect on the Node Technical Steering Committee. I would even argue that much of the current impetus has been driven by Jarred Sumner's savvy social media marketing. It got people talking, and Node is better because of it.
Additionally, Bun's push for covering as much of the Node API as possible has pushed Deno towards the same level of compatibility, and now most code is basically runtime agnostic. I'm not sure if I'll ever actually use Bun in production, but I'm glad it exists because the JavaScript ecosystem has been much improved simply due to its existence.
Reminds me of the back and forth competition between Node.js and io.js that we had to endure back in the day. Worked out for the best in the end.
No disagreement, but this article was posted 2 days ago, the argument isn't relevant right now.
1 reply →
When did Node add native TypeScript? Can you run "node main.ts" directly without any dependencies?
January of last year. Yes.
https://nodejs.org/en/blog/release/v23.6.0
10 replies →
v22.18 promoted type stripping from experimental
Additionally, with Typescript compiler rewrite, it is even less relevant.
Regardless of Anthropic/ClaudeCode, PerryTS[1] looks like a very promising competitor to Bun.
[1]: https://github.com/PerryTS/perry
I would mention deno as the main competitor
Personally I much prefer Deno as it's also doing a lot more work to unify the backend and frontend JS APIs.
1 reply →
I would too ... but not as the winning competitor.
For their first year or two of existence, Bun tried to do npm, but better. For the first year or two of their existence, Deno tried to reinvent npm.
The key result is that after that first year or two Deno had to walk back their decisions, to create a Node-ecosystem-compatible tool .. and as a result, they're now significantly behind bun (at least by all metrics I've seen).
2 replies →
I would mention Node as the main competitor. It isn't moving as fast as the VC-backed ecosystems are, but its future is a lot more assured.
1 reply →
This is cool! But AFAIK bun promises to be a one-stop-shop for all your JS/TS dev needs, while Perry is "just" a compiler from Typescript to native executables.
Looks like AI slop
https://github.com/PerryTS/perry/issues/139
> Good question, and you're basically right — let me show the smoking gun.
:vomit:
1 reply →
the AI replies itt are cringe
7 replies →
Bun is basically a lost cause in the same way that Claude Code is. Many people love both, and that's fine as long as it is their personal choice. But freedom-loving orgs can never, ever depend on either for anything or build anything on top. The main problem in this picture is that projects who don't understand or remember the Internet Explorer hell are using Bun-only features to a degree that you cannot just switch. Even opencode, who should have the biggest aversion to depending on Anthropic, is doing this, and it drives me insane. Have you all lost your minds, or let your agents blindly lock you into whatever hell wants to own you?
> team ships constantly
Why do people want this? Shipping constantly is how software breaks. You want tools that are good and stable, not constantly churning. I wish software developers would wake up to the idea that velocity is not a marker of quality.
Not shipping enough is how they die, too. It's hard to find a balance.
Not really.
Careful, focused work can easily sustain daily, or almost-daily, shipping. We've been doing it for decades without LLMs.
LLM-brain is pushing people into continuous, by-the-hour shipping, and it is absolutely unnecessary, creating code at a rate that cannot possibly be kept up with in terms of quality, performance and security.
I don't know, I've been using Claude Code since it came out and it really doesn't seem to be getting worse.
I envy your experience. It's driving me crazy on a near-daily basis now.
I'm still getting pretty good code out of it, but I only use it on side projects. Is the issue with their odd limit system?
5 replies →
But are you saying the harness is driving you insane? Or the model? Because Bun is only the harness, and that part has been improving over time if you stay on the stable channel.
3 replies →
[dead]
Does bun have a formal roadmap? I occasionally see some of the changes that Jarred posts on X, and I wonder if they're really meaningful or not (perf improvements are always good). It also seems like a lot of the recent contributions are AI-authored.
I tried using bun for a project earlier this year and learned that you can't use testcontainers(works fine w/ Deno).
I don't think so, but a recent release includes a built-in terminal markdown renderer, which suggests that, even if handy, most of the focus is on making Claude Code great. I am not worried though, at least not yet.
A former engineer at Bun said that "there was too much vibe coding, but my opinion wasn't taken into consideration."
https://paperclover.net/q+a/2506010139
> - too much ai chatter. so many examples of it failing to work. ill prove it by showing the most recent ai-generated pull request. yep, it’s failing. i will admit that my feedback on the above items were not very loud, but there has been no attempts to correct this vision.
One thing is sure: Claude has become terrible. Criticize any code Opus 4.7 created and it starts a blame game. Also, it denies that a version 4.7 even exists. Will look into moving back to ChatGPT, which I quit because of the mandatory spyware BS they added, which I believe they have since nuked.
I still don't think Bun is production-ready. We just ripped bun out of a bunch of our production services. CPU runaway and memory leaks. All solved by switching back to nodejs.
>Even though I personally am moving some projects away from Bun, don't take my advice as gospel.
Always appreciated nuance.
These are just my opinions man :)
This post seems to "throw doubt" on Bun based on the OP's experience of Claude Code. But this seems unnecessarily indirect. It's not like Bun is hidden software: it's open source and actively developed.
So the more direct question would be: How has Bun actually been since the acquisition?
From what I can tell they have been responding to users as fast as before, and improving the product as well as before.
I made this exact same decision (bun -> pnpm) for similar reasons, mostly because I didn't like how haphazardly a core part of the stack was being vibe-coded. Too many changes too quickly for something that's supposed to be stable.
Don't fret; the creator of mise has released a faster alternative: https://github.com/endevco/aube
Big mise fan. Basically took the baton from asdf and added way better performance and dx.
Odd that aube is missing deno from their benchmarks though
"I want a serious Node.js alternative."
Then you could have been using Deno, like many of us, for years.
Wild example, as Deno has been corrupted by VC money as well. I wouldn't touch it either.
What do you mean by corrupted?
Been using Deno happily for many years now.
Did I miss anything?
1 reply →
Corrupted how?
Maybe look at https://void.cloud/ (Edit: sorry, meant https://viteplus.dev/, not Void cloud)
They are not a runtime, but they do seem to be interested in wrapping a lot of tools with simple top-level commands
I love coding with bun. It comes with everything.
For my projects I don’t even need any additional dependencies. I use vanilla dom and sqlite
The built-in sqlite and testing functionality is the reason I started using it over pnpm/Node.
Node has both built-in sqlite and testing functionality. Lots of reasons to like bun! But these two are interesting ones...
1 reply →
Why did you have to stop using Cursor? I ask this as someone who uses Cursor, but recently at a conference I heard it referred to negatively several times, though in a very vague sense. I don't really have a dog in the fight; I'm using it because that's what the other dev I work with is using.
There is the SpaceX acquisition rumor, but that's not why.
I only use Cursor through the CLI, and while the UX of the CLI is pretty bad, I've found their harness (the prompts they use and orchestration of LLMs) to be nothing short of incredible. I can't comment on their agent development environment given I haven't spent a lot of time with it.
The reason I'm moving away from Cursor is cost. Unfortunately, if you want to use the SOTA models from both OpenAI and Anthropic you basically have to go direct through their subsidized plans.
I agree with your assessment that the harness is incredible and so I get a ton of mileage out of Auto + Composer 2. This is my workhorse.
Admittedly, I just haven't used Opus 4.6+ or GPT 5.5 much, and as I gain more experience I can see what the hype is all about. But to me, the answer isn't the $200 Max plan, it's bifurcating the work. Call me a cheapskate!
I personally switched back to vscode as I started using Claude and Opencode more for the AI flow, and I didn't see much added value any longer. Also, I was incredibly frustrated that they decided to hide the close button and finally, there were weird issues with editor groups spawning at unwanted times. They might be able to fix it, but I felt that they were starting to reach the limits of what you can do with a "live fork".
The main complaint about Cursor I see online is that it's expensive.
Otherwise, if you are looking for an IDE-first approach with great AI integration, it's the best product out there. I prefer it over CC/Codex.
just conjecture but possibly because of the rumored acquisition plans from SpaceX (that's why i stopped using it)
ah ok, yeah that would give me pause as well.
That's a lot of very large jumps to come to the conclusion that Bun isn't going to turn out well.
OpenAI and Anthropic are both doomed for sure. There's no way around it; it's all in the math. Bun would be a casualty. It is only a matter of time.
The only company that will survive the AI race is the one where the current wave was actually invented, along with the research paper, the libraries, and even the specialised hardware: Google.
Google has a serious problem with its product management culture (long list of products and projects, people even skeptical of Flutter) otherwise they would have surpassed Anthropic long ago.
Google seems profoundly uninterested in the agentic coding world, though. gemini-cli is underwhelming, Antigravity isn't super compelling, and the Gemini model itself is absolutely terrible and non-competitive at the basic tool use necessary for coding, even inside their own harnesses.
It's fine for other purposes though. Which are arguably a much larger and lucrative market.
That's assuming Google can outpace Nvidia, which it may never do. Nvidia is not just going to sleep on it.
Doesn't seem that bad if you're convinced they're the only viable market dominator.
TBF, I really haven't done much of anything with Bun other than occasional module testing. I mostly use Deno for my day to day, including a lot of shell scripts the past few years. I liked the newer ergonomics a lot, direct module references in repositories is really nice for shell scripts.
That said, I'm worried about them having good enough monetization while keeping features open... or at least able to be replicated by others. So I can understand some of the concerns.
Bun is basically a wrapper over JavaScriptCore. I don't think it's that big of a feat. Furthermore, they are heavily invested in vendor-specific APIs, which I think is not good.
Using that logic, Node.js is just a wrapper over V8.
Indeed it is
1 reply →
This isn't anything new and I feel the same way about Deno. We can argue about exactly how much trouble any runtime is in today vs yesterday vs tomorrow but VC funding of a javascript runtime feels inherently unstable to me.
The key question is how much unique tooling you're relying on. If you can switch to Node tomorrow, great. If you can't, make sure you have a contingency plan.
The problem is this.
If not VC funding, then what? Volunteer work? So other people can make money off it?
Our industry has no answer how to fund infrastructure.
You've got FAANG companies using open source projects built by volunteers while handing out meagre grants every once in a while, not nearly enough to pay a SWE salary. A smattering of hard-to-get grants from NLnet, etc. And then places like Anthropic or Grok or OpenAI "buying" open source teams to pull them inside, which inevitably leads to drama.
I don't know what the answer is, but there's a serious issue here. Similar situations in the 80s were why the FSF was founded and the GPL established. (Not to fund, but to protect the rights of authors and users)
Perhaps runtimes that companies use to make a profit shouldn't be free if we're not funding those services the way we fund other public services (taxes)?
I never thought I would come around to the Java way of thinking on this, but companies are abusing the public good.
pnpm is even worse. There is no way to bootstrap it without binary blobs, making it a supply chain attack waiting to happen, one that could hide in plain sight indefinitely.
Do you use Gentoo as OS?
I did for over a decade, but it does not go far enough with supply chain security.
I bootstrapped a new generation of Linux distribution from 180 bytes of human readable x86 machine code all the way up.
https://stagex.tools
2 replies →
I think the direction Claude Code and Anthropic are taking is to force-hide stuff from you. Some will hopefully remember the shitstorm that happened when they changed "Reading xxx.yy" to "reading 1 file" or "reading 2 files".
More changes like this came, and they were either not configurable or very hard to configure. I understand the business idea behind it: make people use AI as much as possible, get the human out of the loop. More training data. More token usage, woohoo.
However, I think that made Claude Code so much worse and so much more untrustworthy. It's a sneaky attempt to take the steering wheel away from you. And if you follow that logic, many more such things start to seem reasonable.
But mainly, for now, it has just generated a lot of distrust for me.
Nothing to worry about anymore, the ship has sailed the moment it was acquired.
Aube[0] seems interesting to me; I submitted it as a Show HN after hearing about your post. It's created by the same person who made mise, and I actually discovered it when I was browsing the mise.en.dev website.
I still use bun, but I think there are some other pathways, so I am not that worried for myself personally. But that's also because, more often than not, I code in golang rather than typescript/javascript.
[0]: https://aube.en.dev/
Ugh. I hate these "guilt by association" hit pieces. Nothing is wrong and yet we must signal our virtue.
Might as well just open our pants and wave our wangers, hoping for a better world
If Disney acquired your favorite IP, wouldn't you get worried?
Yes, but I wouldn't stop watching the originals. And I wouldn't stop watching anything until I saw what came out.
What is there to worry about? If we believe AI crowd, Bun and entire JS ecosystem is done for. Dead. Nothing to worry about since nothing's left.
If, as claimed, everyone and his malnourished cellar rat can whip up a SaaS on a whim, then why should that SaaS be built upon chromium+js+http instead of tcp+native ui?
Remember, choice of ui is no longer a constraint. Nothing is a constraint or so they say.
So it follows that all this javascript stuff can at last die.
vite and its ecosystem are actually becoming the unified toolchain with vite+. IIRC pnpm will also be the preferred package manager in the tool
There's a VC behind that too.
It has eject functionality and all the subcomponents are independent open source projects.
I still see no monetization for Bun and Deno to keep them going.
You see this all over the place with other programming languages.
The ones that have bleeding-edge features do so because there are companies, or universities (for their PhD and MSc theses), that invest in those ecosystems.
In the end nodejs will keep improving, with Microsoft's and Google's backing, and that will be it.
Why do people use bun? I would like an answer from an actually experienced / staff-tier or higher engineer.
Simplicity.
bun file.ts
And it’s been this way for years.
Don’t care about what’s in a package.json file or if there is one. Can do this without tsconfig file as well.
Also, this works on node too now; don't sleep on node improvements.
How is that better than Deno?
Really? This was like an intern-level response lol
I'm no staff+, but the tooling (package manager and TypeScript runner) is very fast.
bun run is <1s for my projects, while watching for file changes. So the iteration speed is quite pleasant.
All these complaints about Claude code are mostly resolved if you pay for your usage with direct API pay as you go. It’s not cheap but nearly all the complaints I see about Claude code are due to the fact the subscription plans seem unsustainable from a cost perspective.
I use the subscription and so far have had no problems.
I use Bun and I'm concerned too but it's still too early to tell.
Personally my experience with Bun has been 100% positive so far.
I'm aware full Node support is not there yet and may never happen but with dependencies that support Bun it's been a smooth ride for me.
> But from the outside, Claude Code looks like a tool moving in the wrong direction. More restrictions, billing weirdness, surprise behavior based on text in commits. That is textbook enshittification.
I've never used Claude Code, but this person doesn't understand what "textbook enshittification" means. "Enshittification" is a feature of certain kinds of business models, progressing through the following stages:
1. Giving away a product free to users, subsidized by venture capital, to gain a monopoly
2. Switching to advertising, then abusing users on behalf of the real customers, advertisers
3. Using monopoly power to abuse real customers (advertisers) to extract as much money as possible
Anthropic's business model doesn't have a "user / customer" dichotomy; their paid users are their customers. And they don't have a monopoly they can use to extract money yet.
ETA: In other words, "Enshittification" isn't just random; you're making the user experience worse in order to make advertiser experience better; and then making advertiser experience worse in order to extract maximum profit. The only complaint that could vaguely be related to profit is the OpenClaw stuff, and that's entirely due to trying to keep the "all-you-can-eat" model for non-OpenClaw users, rather than having to switch everything to metered.
In my day-to-day routine, where I use Claude Code maybe 90% of the time, I don't see that it's become that bad. Yes, they've made some questionable decisions on API usage and OpenClaw, but I feel like this post is making it out to be worse than it is.
That being said I’ve been worried about the future of Bun anyway. Especially if the AI bubble pops. Then again, it’s open source.
> Will we see issues start popping up in Bun that make it seem like the team doesn't even dogfood their own product? I don't know, but I'm not sure I want to continue using it just in case.
I sympathize with the general premise. The reaction to move away seems pre-mature though.
It sounds like `bun` is still performing just as well as before, and this sentiment isn't based on concrete changes. I also wouldn't expect infrastructure like `bun` to evolve the way a consumer-facing product does, especially one scaling as quickly as Claude Code.
Disagree, you definitely don’t want to be looking back saying “hm I knew it, I saw the signs, should have trusted myself”
Plus it’s not a huge lift right now
Genuine question: why not just wait?
If Bun stays great, you saved yourself some time for switching, and got to keep using Bun.
If Bun worsens, you spend the same time for switching, just moved a bit later, and got to use Bun for a little longer.
1 reply →
Are Bun and Deno in the room with us right now?
I’m confident that any unhappiness with Claude Code is at least 95% downstream of Anthropic seeing demand scale their revenue by ~3X in 6 months from a $multi-billion annual base.
Their product focus, roadmap, or execution is likely a rounding error in the face of that tsunami.
Frankly, it’s shocking they’re doing so well relative to, say, GitHub.
So who controls NodeJS? https://openjsf.org/governance has Microsoft as the chair. And Microsoft owns npm. It's kinda hard to avoid a corp controlling these tools.
The author seems more focused on the thing where Anthropic fights OpenClaw usage unless you have the right billing set up for that. Frankly I just don't care about those complaints, all the LLM services want you to set up a non-subsidized billing method to use OpenClaw because it uses lots of tokens. It doesn't mean they're going to crap on Bun.
The only reason I don't use Bun is I never ran into a situation where Node didn't cut it. Even though my least favorite tech corp controls Node.
Bun does great on their own benchmarks.
+1 I'll stick with pnpm for now
I used to be a fan of Bun, but the way it keeps adding bloat makes me seriously doubt its future. Also, it seems like they are doing a lot of vibe coding without taking enough time, which raises other questions.
Node.js is also more stable, and it has started supporting TypeScript out of the box. I don’t think Bun will have many advantages after Node 26.
> and it has started supporting TypeScript out of the box
Node only does type stripping though. If you want proper TS support you still need a compiler.
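A small sketch of that distinction (the Node flag and version details here are from memory, so treat them as approximate): type stripping only deletes annotations, so anything with runtime semantics still needs a real compile step.

```typescript
// Erasable syntax: pure type annotations. Node's built-in type stripping
// (added behind --experimental-strip-types, on by default in newer
// releases) simply deletes the annotations and runs the rest as JavaScript.
const add = (a: number, b: number): number => a + b;
console.log(add(2, 3));

// Non-erasable syntax: an enum compiles to runtime code, so strip-only
// mode rejects it and you still need a compiler/transpiler
// (tsc, esbuild, swc, or Bun's transpiler):
//
//   enum Level { Debug, Info, Warn }
```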
> I don’t think Bun will have many advantages after Node 26
There are tons of advantages. For instance, Bun includes a lot of features that would need a third party dependency in Node: db driver, S3 client, watch mode, bundler, JSX support, etc.
Why would you want DB drivers and S3 clients in your runtime? That's exactly what third parties are for; you don't want to have to update your runtime for a new version of your drivers.
3 replies →
They know the problem. That's better than if they didn't.
"Friendship ended with Bun, now pnpm is my best friend" the post...
I wonder why Anthropic chose to spend money on Bun when they could easily have spent those resources on Go, which is fairly easy to use and fast. I'm sure their SWEs could easily have built everything in Go. Does anyone have insight on why?
If I had to guess, it comes down to speed of iteration. Claude Code is built on JavaScript, so Bun aligns well with their current stack.
Switching to Go or Rust would only make sense if performance were the main priority, which doesn’t seem to be the case. Their current setup lets them ship quickly. A rewrite in Go would likely slow that down.
Codex moved to Rust, and you can see the trade-off. Performance improved, but release velocity dropped. They’re also still catching up to Claude Code, so they don’t face the same pressure to ship as fast.
JS is used because it's (still) the only code you can run in a browser. Although node and bun are regular OS processes, their use/popularity traces back to that browser environment one way or another.
Claude is still better at writing JS than it is Go.
My guess: JavaScript runs in the browser as well as on the OS. That way you can train a model to interact with both fairly simply. You can also see that their harness, claude-code, is itself written in JS. So I guess they are quite invested in that language anyway.
Yeah, it's the same pattern you saw in the early react days where open source devs would try to "woo" the react core team into getting recognition to sell consulting services or courses.
The bun people likely have some fucked-up incestuous business relationship with some dev manager at Anthropic, and the same pattern is repeating. Only this cycle it's going straight to acquisitions, which honestly seems like a worse strategy, and Anthropic will def can the bun engineers in less than 3 years, or whenever they face an actual budget crunch that they can't stave off with more gulf money.
I'm wondering why Anthropic, who has "the most powerful, hold me bro, AI in the world", didn't just vibe code their own, better version of bun. Didn't Dario say, like 12 months ago, that coding would be cooked in 6 months?
Is Claude better with Javascript than it is with Go code? Seems like it could be true.
The problem with Go is that the type system is rudimentary, so you can't "restrict" AIs as well as you can in TypeScript.
I don't believe so. Go has simple rules, and in my experience Claude is excellent at writing all the boilerplate needed.
I doubt those SWEs could have used anything other than JS.
Ironic that this comment is in a thread advocating for the use of Go:
"The key point here is our programmers are Googlers, they’re not researchers. They’re typically, fairly young, fresh out of school, probably learned Java, maybe learned C or C++, probably learned Python. They’re not capable of understanding a brilliant language but we want to use them to build good software. So, the language that we give them has to be easy for them to understand and easy to adopt. – Rob Pike"
1 reply →
One of them is a much more efficient but obscure programming language from a competitor, the other is what the web is built on.
In what world is Go an obscure programming language??
3 replies →
Umm, just use Deno? Everything the author seems to love about Bun exists in Deno.
Seriously. This baffling omission undermines the credibility of the whole thing.
"I want a serious Node.js alternative."
So you ignore the one developed by the same guy?
Claude is currently unusable for me on Windows because bun keeps crashing
:(
I’m even more worried after reading this: https://news.ycombinator.com/item?id=48016880
So Bun is going to become a fully vibe-coded codebase, with important details lost in translation.
I’ve been a huge supporter of Bun, but now I’d be extremely reluctant to deploy it in production.
It's also a bit disappointing to see Jarred change his mind so quickly. He's an incredible developer with deep knowledge of how to write clean, maintainable, efficient code. But now it feels like his talent is being sidelined, and Claude has been given full control over the codebase.
Claude Code itself seems to be built that way: they keep piling on new features every day, but it has become this big, bloated Frankenstein slug.
Bun used to be a small, elegant, clean codebase. Now I’m worried it may turn into an unreliable mess.
Let them cook. Anything that they can do to get rid of the absolute hell that is dependencies in the JS ecosystem is worthwhile. I really don't care what they add as long as it's maintained
I see the word “enshittify” being thrown around casually about Claude Code. We’re far from that part of the Enshittification cycle still. This is just a mismanaged product and the result of an extremely competitive market that moves too fast.
Never attribute to malice that which can be adequately explained by incompetence, etc.
Their third-party harness move seems like more than incompetence.
Yeah, I'm none too happy with Anthropic right now, but what's happening to Claude Code is just your typical garden-variety mismanagement of a project that grew way too fast for its owners to reasonably handle.
This is all so speculative and whatevs
"I have a vague concern, so I'm now using a shittier toolchain. You shouldn't do it though." is a weird post format.
> That is textbook enshittification.
Technically, no, not textbook enshittification. Enshittification was originally meant to refer to companies squeezing two-sided markets, not products just getting kinda worse.
Personally, I suspect that Bun is a Silicon Valley attempt to lock some companies into its stack (similar to what cloud providers and Next.js + Vercel do). Especially now that Anthropic has become an owner, I'll be keeping Bun at a considerable distance.
The funniest part to me is that 10–15 years ago, companies were getting stuck in their development processes due to binary (closed) dependencies. Now they're jumping into the same trap under a different name.
Maybe I’ve missed some scandals, but so far OpenJS Foundation is the best thing that has happened for the JavaScript ecosystem.
Ray Bradbury foresaw this.
The term “enshittification” really ruins (one might say “enshittifies”) any article it’s in.
Millennials of the redditor class desperately need a moratorium on the word "enshittification".
What an utterly baffling post. Moving away from a tool that you love proactively because you're concerned it might degrade in quality at some time in the future? Ok man whatever.
The issues with Claude Code lately look to me like symptoms of being part of a service that is experiencing insane growth (fastest growth in history, by far [1]), while being severely constrained on adding capacity (GPUs are hard to get quickly right now, even if you have the money). I assume they're constantly fighting fires trying to keep the core use cases of Claude Code working, even if that means limiting OpenClaw usage in somewhat draconian ways.
It's annoying, but I don't see this as a bad thing at all for Bun.
[1] https://www.axios.com/2026/04/13/anthropic-revenue-growth-ai
No, all the issues are symptoms of trying to slop-code a functional product. Anthropic has admitted they dogfood heavily, and issues like [1] from the article could only be caused by a text generator. I refuse to believe Anthropic employees are that stupid.
[1] https://youtu.be/J8O9LLpJNrg?t=1201
[dead]
Here's how I evaluate whether I'm going to use a given bit of OSS:
- Is the project important to me or can I replace it? If the latter, I'm more likely to allow failures of other criteria. If not, I need to be more strict. Bun is easy enough to replace if something were to happen to the project. Easy come, easy go.
- Are there any red flags in financing that could become problematic? Many VC-funded OSS companies fail this test for me. What happens when they don't make it? What happens post-IPO if they do? What happens when they get acquihired? Mostly that's up to shareholders, not developers. Most VC-funded companies don't make it, and that's normal in the VC world: a few companies succeed, everything else fails quickly. And there are a few examples of projects that have changed licenses under pressure from shareholders. That's why this is a red flag to me. I've used Redis and Elasticsearch, for example. And I switched away from Mongo before they changed the license. I used Terraform before they changed theirs. All negative examples here.
Bun initially wasn't great on this front, but the Anthropic acquisition has improved things a bit. It's still a risk, but it's unlikely they have any plans for Bun other than keeping it alive by employing the main people working on it. Anthropic itself might still fail, of course.
- Has the project been around for a long time? If so, it likely has a stable community and funding. There are no guarantees, but the older the better. Bun is still pretty new.
- Is the project stable and under active development? If it's stable because nobody makes changes anymore that's usually not a great sign. If it is stable despite a lot of active development, that's really positive. It means somebody competent is in charge. Bun seems pretty good on that front.
- Is the project otherwise structured right to be future-proof? For me, future-proof is a combination of contributor community, commercial activity, and licensing. The more diverse the contributor community, the better. If there are multiple companies sponsoring and making money off a project, that makes it less likely that a single one can hijack it for its own ends. This is more common with permissively licensed software (but there are exceptions). Bun doesn't have much commercial activity around it, and the regular contributor community is tiny. One person seems to be doing most of the work, with only a handful of notable other contributors who are probably all Anthropic-employed at this point. Of these concerns, the dependence on a single person looks the most problematic to me.
So, the overall score for Bun is not perfect (there are a few potential red flags), but I'm happy to risk using it because it's not that critical to me and is easily replaceable.
My read of the whole Anthropic acquihire is that it's an improvement over the starting point, which was a VC-funded company that was probably going to fail otherwise. Beyond that: good tech, and generally nice to use. I could see Anthropic going bad and this project surviving in one form or another. So that doesn't have to be a showstopper.
has anyone forked bun?
Just look at the "new" documentation. It's full on AI slop.
Is there actual evidence coming from the Bun project itself?
Otherwise it's just FUD.
>Claude Code appears to be enshittifying
AI tech bros when the bubble is blowing up, lol. LLMs were always bad and unreliable for programming due to their non-deterministic nature. And regarding pricing policies: yeah, they have to make a profit somehow; it's just that the tech is too expensive versus the actual (almost non-existent) ROI it generates. Let's see if people are willing to pay when these guys aren't subsidized by the delusional VCs anymore...
what a nice way to write an article!
[dead]
[flagged]
[dead]
TLDR;
> Claude Code appears to be enshittifying. So now I have to worry that Bun could enshittify too
tl;dr: I have concerns. Not because Bun is bad. Bun is great. It is not bad. But Claude was good. And now it is bad. Bun is owned by Anthropic. Transitive property. Maybe. I hope I’m wrong.
Look at them! They're like loaves of bread that hop.