The attempts at collaborative tools in Zed were always far more interesting to me than the AI stuff. Don't get me wrong, their AI stuff is nice and works well for me, but it's hardly necessary in an editor with how good Claude Code and others are.
But the times I've used the collaboration tooling in Zed have been really excellent. It just sucks that it hasn't gotten much attention recently. In particular, I'd really like to see some movement on something that works across multiple different editors on this front.
I'm glad to hear they're still thinking about these kinds of features.
The thing that made me go "oh damn" was finding out the debugger is multiplayer.
The choice to go to WebAssembly is an interesting one.
Wasm 3.0, especially (released just two months ago), is really gunning for a more general-purpose "assembly for everywhere" status (not just "compile to the web"), and it looks like it's accomplishing that.
I hope they add some POSIXy stuff to it so I can write cross-platform command-line TUIs that do useful things without needing to be recompiled for different OS/chip combos (at the cost of a 10-20% slowdown versus native compilation, which only matters for the most performance-critical use cases) and that are likely to simply keep working on all future OS/chip combos (assuming you can run the WASM, of course).
> I hope they add some POSIXy stuff to it
Are you aware of WASI? WASI Preview 1 provides a portable POSIXy interface, while WASI Preview 2 is a more complex platform-abstraction beast.
(Keeping the platform separate from the assembly is normal and good - but having a common denominator platform like POSIX is also useful).
How is Rust + WebAssembly + Cloudflare Workers in pricing and performance compared to, say, deploying Rust-based Docker images on Google Cloud Run or AWS Fargate?
I think performance takes a hit due to WASM, and I imagine pricing is worse at big QPS numbers (where you can saturate instances), but I've found that deploying on CF Workers carries little-to-no devops burden. It scales up/down arbitrarily, has a pretty reasonable set of managed services, no cold-start times to deal with, etc.
The only issue is that some of the managed services are still pretty half-baked and introduce insane latency into things that should not be slow. KV checks/DB queries through their services can have double-to-triple-digit-ms latencies depending on configs.
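On the devops-burden point: the workers-rs tooling boils a Rust Worker's deployment down to a small config file. A minimal sketch, assuming the workers-rs project template (the name and date below are placeholders):

```toml
# Sketch of a wrangler.toml for a Rust Worker built with workers-rs.
name = "my-rust-worker"
main = "build/worker/shim.mjs"   # JS shim that loads the compiled WASM
compatibility_date = "2025-01-01"

[build]
command = "cargo install -q worker-build && worker-build --release"
```

From there `wrangler deploy` handles the build and rollout, which is most of what "little-to-no devops" means in practice.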
I wish Zed would implement support for Jupyter notebooks first. Maybe this sounds like a thing I can contribute
Been using CF Workers with JavaScript and I absolutely love it.
What is the performance overhead when comparing Rust compiled to WASM against native Rust?
Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to a single vendor is problematic.
> Also, I think the time for a FOSS alternative is coming. Serverless with virtually no cold starts is here to stay, but being tied to a single vendor is problematic.
Supabase Edge Functions runs on the same V8 isolate primitive as Cloudflare Workers and is fully open-source (https://github.com/supabase/edge-runtime). We use the Deno runtime, which supports Node built-in APIs, npm packages, and WebAssembly (WASM) modules. (disclaimer: I'm the lead for Supabase Edge Functions)
It surely depends on your use case. Testing my Ricochet Robots solver (https://ricochetrobots.kevincox.ca/), which is pure computation with effectively no I/O, the speed is basically indistinguishable. Some runs the WASM is faster, sometimes the native is faster. On average the native is definitely faster, but it is surprisingly within the noise.
Last time I compared (about 8 years ago) WASM was closer to double the runtime. So things have definitely improved. (I had to check a handful of times that I was compiling with the correct optimizations in both cases.)
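For anyone who wants to reproduce that kind of comparison, a rough harness might look like the sketch below (a toy compute-bound workload standing in for the solver; build it once natively and once for `wasm32-wasip1`, both with `--release`, and compare wall times):

```rust
use std::time::Instant;

// Toy CPU-bound workload: count primes below `limit` by trial division.
fn count_primes(limit: u32) -> u32 {
    (2..limit)
        .filter(|&n| (2..).take_while(|&d| d * d <= n).all(|d| n % d != 0))
        .count() as u32
}

fn main() {
    let start = Instant::now();
    let found = count_primes(50_000);
    // Run natively, then under a WASI runtime (e.g. wasmtime), and compare.
    println!("{found} primes in {:?}", start.elapsed());
}
```

The usual caveat applies: make sure both builds really use the same optimization level, since a debug-vs-release mismatch dwarfs any native-vs-WASM gap.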
The stats I've seen show a 10-20% loss in speed relative to natively compiled code, which is effectively noise for all but the most critical paths.
It may get even closer with Wasm 3.0, released two months ago, since it adds things like 64-bit address support, more flexible vector instructions, typed references (which remove runtime safety checks), basic GC, etc. https://webassembly.org/news/2025-09-17-wasm-3.0/
Unfortunately 64-bit address support does the opposite: it comes with a non-trivial performance penalty, because it breaks the tricks used to minimize sandboxing overhead in 32-bit mode (engines reserve a guarded 4 GiB region so 32-bit loads need no explicit bounds check; 64-bit addresses force per-access checks).
https://spidermonkey.dev/blog/2025/01/15/is-memory64-actuall...
The Cloudflare Workers runtime is open source: https://github.com/cloudflare/workerd
People can and do use this to run Workers on hosting providers other than Cloudflare.
In code I've worked on, cold starts on AWS Lambda for a Rust binary that handled nontrivial requests were around 30ms.
At that point it doesn't really matter whether it's a cold start or not.
Workers is a v8 isolates runtime like Deno. v8 and Deno are both open source and Deno is used in a variety of platforms, including Supabase and ValTown.
It is a terrific technology, and it is reasonably portable, but I think you would be better off using it in something like Supabase, where the whole platform is open source and portable, if those are your goals.
Workerd is already open source so that's a good start.
Post is a bit sparse on details and seems to be more about the backend than the infra itself. Would be interested to hear more.
I didn't realize the cloud side of an editor had grown to ~70k lines of Rust already… and this work is laying the foundation for collaborative coding with DeltaDB.
But it's worth noting that WebAssembly still has some performance overhead compared to native; the article chooses convenience and portability over raw speed, which might be fine for an editor backend.