For Zed specifically? Relying on node.js cuts directly against their stated goal of being fast and resource-light. Moreover, it is not acceptable for software I use to automatically download and run third-party software without asking me.
For node.js in general? The language isn't even considered good in the browser, for which it was invented. It is absolutely insane to then try to turn it into a standalone programming language. There are so many better options available, use one of them! Reusing a crappy tool just because it's what you know is a mark of very poor craftsmanship.
It shouldn't be as tightly integrated into the editor as it is. Zed uses it for a lot of things, including to install various language servers and other things via NPM, which is just nasty.
You might not be old enough to remember how much everyone hated JavaScript initially - just as an in-browser language. Then suddenly it's a standalone programming language too? WTH??
I assume that's where a lot of the hate comes from. Note that's not my opinion, just wondering if that might be why.
I guess some node.js based tools that are included in Zed (or its language extensions) such as ‘prettier’ don’t behave well in some environments (e.g., they constantly try to write files to /home/$USER even if that’s not your home directory). Things like that create some backlash.
Slow and RAM-heavy. Zed feels refreshingly snappy compared to VS Code even before adding plugins. And why does a desktop application need to use an interpreted programming language?
Love seeing privacy-first approaches to dev tools. This is the same philosophy we apply to compliance tooling.
Your code, your compliance data, your business processes - these shouldn't have to live in someone else's cloud by default. Sometimes local processing isn't just about privacy, it's about performance and reliability.
The big platforms want you dependent on their infrastructure. Tools that work offline and keep your data local give you actual control.
Props to the Zedless team for prioritizing user agency over SaaS revenue models.
The CLA does not change the copyright owner of the contributed content (https://zed.dev/cla), so I'm confused by the project's comments on copyright reassignment.
Maybe not technically correct but it's still the gist of this line, no?
> Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”).
They are allowed to use your contribution in a derivative work under another license and/or sublicense your contribution.
It's technically not copyright reassignment though.
Yes, you grant the entity you've submitted a contribution to the right to use (not own) your contribution in whatever it ends up in. That was the whole point of the contribution, right?
It may not technically reassign copyright, but it grants them permission to do whatever they want with your contributions, which seems pretty equivalent in terms of outcome.
I've been using AI extensively the last few weeks, but not as a coding agent. I really don't trust it for that. It's really helpful for generating example code for a library I might not be familiar with. A month ago, I was interested in using RabbitMQ but the docs were limited. ChatGPT was able to give me a fairly good amount of starter code to see how these things are wired together. I used some of it and added to it by hand to finally come up with what is running in production. It certainly has value in that regard. Letting it write and modify code directly? I'm not ready for that. Another thing it's useful for is finding the source of an error when the error message isn't so great. I'll usually copy-paste code that I know is causing the error along with the error message, and it'll point out the issues in a way that I can immediately address. My method is cheaper too; I can get by just fine on the $20/month ChatGPT sub doing that.
Shouldn’t this just be a pull request to Zed itself that hides AI features behind compile flags? That way the ‘fork’ would just be a build command with a different set of flags and no changes to the actual code?
I don’t view Chrome and Chromium as different projects, but primarily as different builds of the same project. I feel like this will (eventually) go the same way.
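A sketch of what the compile-flag idea above could look like, assuming a hypothetical `ai` Cargo feature (declared as `ai = []` under `[features]` in Cargo.toml) rather than Zed's actual crate layout:

```rust
// Hypothetical feature gate: compile the AI panel only when the "ai" Cargo
// feature is enabled, so a "zedless"-style build is just
// `cargo build --no-default-features`.

#[cfg(feature = "ai")]
mod agent_panel {
    pub fn register() {
        println!("AI agent panel registered");
    }
}

fn main() {
    #[cfg(feature = "ai")]
    agent_panel::register();

    #[cfg(not(feature = "ai"))]
    println!("Built without AI features.");
}
```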
I loved Zed Editor. In fact I was using it all the time, but being a "programmer", I wanted to extend it with extensions, and it was hard for me to roll out my Rust extension, with APIs and stuff missing.
I went ahead with VS Code. I had to spend 2 hours to make it look like Zed with configs, but I was able to roll out an extension in JavaScript much faster, and VS Code has a lot of APIs available for extensions to consume.
I'm confused how the "contributors" feature works on GitHub, is this showing that this fork has 986 contributors and 29,961 commits? Surely that's the Zed project overall. I feel like this gives undue reputation to an offshoot project.
It's fair because those people contributed to the codebase you're seeing. Someone can't fork a repo, make a couple commits, and then have GitHub show them as the sole contributor.
Yeah it looks pretty funny. Probably happens because it's not a fork as far as GitHub is concerned (had some problems with that). Looking at PR creators should give you a better idea. It's basically just me right now.
Yeah, I get it. It looks like zedless itself has been going on for a while. However, I'm not sure what the best way to approach this is; the fork still carries Zed's original commit history.
This just reminded me that I have Zed installed but haven't used it at all yet. Neovim is a bit too sticky with all my custom shortcuts. Will uninstall it and try this version out when I eventually decide to migrate
A “privacy focused” project that claims “no reliance on proprietary cloud services” should not hypocritically lock its code & collaboration behind Microsoft’s GitHub.
I on the other hand would probably only switch to Zed with the AI integration. Want to learn a new language? Using AI speeds it up by a factor of months.
Zed is a really really nice editor. I consider the AI features secondary but they have been useful here and there. (I usually have them off.) You can use it like cursor if you want to.
Where I think it gets really interesting is they are adding features in it to compete with slack. Imagine a tight integration between slack huddles and VS code's collaborative editing. Since it's from scratch it's much nicer than both. I'm really excited about it.
An AI editor, a competitor to Cursor but written from scratch and not a VS Code fork. They recently announced a funding round from Sequoia. https://news.ycombinator.com/item?id=44961172
I don't understand why people say X is a competitor to Cursor, which is built on Visual Studio Code, when GitHub Copilot came out first, and is... built on Visual Studio Code.
It also didn't start out as a competitor to either.
Code editor. Imagine VSCode, but with a native GUI for each platform it supports and fewer plugins. And a single `disable_ai` setting that you can use to toggle those kinds of features off or on.
Watch the video on https://zed.dev/, apparently it's really good at quickly cycling through open documents at 120Hz while still seeing every individual tab. Probably something people asked for at some point.
The reason I’ve been using Zed is _because_ there is no screwing about with any of that stuff. For Erlang and Elixir it’s been less problematic than IntelliJ, faster and less gross than VS code, and hasn’t required me to edit configuration files other than to turn the font size up.
This is awesome. Honestly, with the release of Qwen3Coder-30B-A3B, we have a model that’s pretty close to the perfect local model. Obviously the larger 32B dense one does better, but the 30B MoE model does agentic work pretty well and is great at FIM/autocomplete.
I welcome this, now we get Zed for free with privacy on top without all the AI features that nobody asked for.
As soon as any dev tool gets VC backing there should be an open source alternative to alleviate the inevitable platform decay (or enshittification for lack of a better word)
Sums up the problem neatly. Everyone wants everything for free. Someone has to pay the developers. Sometimes things align (there is indeed a discussion on LinkedIn today about Apple hiring the OPA devs), but mostly it doesn’t.
Agreed. Although nobody ever mentions the 1,100+ developers that submitted PRs to Zed.
And yeah. I know what you mean. But this is the other side of the OSS coin. You accept free work from outside developers, and it will inevitably get forked because of an issue. But from my perspective, it's a great thing for the community. We're all standing on the shoulders of giants here.
It kind of is. I don't want Richard Stallman knowing every time I open a file in emacs or run the ls command. Keep that crap out of local software. There should be better ways to get adoption metrics for your investors, like creating a package manager for your software, or partnering with security companies like Wiz. If you have telemetry, make it opt-in, and help users understand that it benefits them by being a vote in what bugs get fixed and what features get focused on. Then publish public reports that aggregate the telemetry data for transparency like Mozilla and Debian.
No. It's spyware. Software authors/vendors have no right to collect telemetry and it ought to be illegal to have any such data collection and/or exfiltration running on a user's device by default or without explicit, opt-in consent.
It already is in Europe thanks to GDPR. Just not enough formal complaints or lawsuits (yet); e.g. IP addresses are explicitly Personally Identifiable Information.
Why? Any non-opt-in product telemetry is spyware, and you have no idea what they'll do with the data. And if it's an AI company, there's an obvious thing for them to do with it.
(Opt-in telemetry is much more reasonable, if it's clear what they're doing with it.)
Collection of data from code completions is off by default and opt-in. It also only collects data when one of several allowlisted open-source licenses is present in the worktree root.
Options to disable crash reports and anonymous usage info are presented prominently when Zed is first opened, and can of course be configured in settings too.
If it collects information from someone, and they don't want it to, then it is spying.
I am deeply disappointed in how often I encounter social pressure, condescending comments, license terms, dark patterns, confidentiality assurances, anonymization claims, and linguistic gymnastics trying to either convince me otherwise or publicly discredit me for pointing it out. No amount of those things will change the fact that it is spyware, but they do make the world an even worse place than the spyware itself does, and they do make clear that the people behind them are hostile actors.
On the same day a Code of Conduct violation discussion was opened against Zed for accepting funding from Sequoia after Maguire's very loud and very public Islamophobia and open support for occupation and genocide: https://github.com/zed-industries/zed/discussions/36604
I'm glad to see this. I'm happy to plan to pay for Zed - it's not there yet but it's well on its way - but I don't want essentially _any_ of the AI and telemetry features.
The fact of the matter is, I am not even using AI features much in my editor anymore. I've tried Copilot and friends over and over and it's just not _there_. It needs to be in a different location in the software development pipeline (probably code reviews and RAG'ing up for documentation).
- I can kick out some money for a settings sync service.
- I can kick out some money to essentially "subscribe" for maintenance.
I don't personally think that an editor is going to return the kinds of ROI VCs look for. So.... yeah. I might be back to Emacs in a year with IntelliJ for powerful IDE needs....
I'm happy to finally see this take. I've been feeling pretty left out with everyone singing the praises of AI-assisted editors while I struggle to understand the hype. I've tried a few and it's never felt like an improvement to my workflow. At least for my team, the actual writing of code has never been the problem or bottleneck. Getting code reviewed by someone else in a timely manner has been a problem though, so we're considering AI code reviews to at least take some burden out of the process.
AI code reviews are the worst place to introduce AI, in my experience. They can find a few things quickly, but they can also send people down unnecessary paths or be easily persuaded by comments or even the slightest pushback from someone. They're fast to cave in and agree with any input.
It can also encourage laziness: If the AI reviewer didn't spot anything, it's easier to justify skimming the commit. Everyone says they won't do it, but it happens.
For anything AI related, having manual human review as the final step is key.
IMO, the AI bits are the least interesting parts of Zed. I hardly use them. For me, Zed is a blazing fast, lightweight editor with a large community supporting plugins and themes and all that. It's not exactly Sublime Text, but to me it's the nearest spiritual successor while being fully GPL'ed Free Software.
I don't mind the AI stuff. It's been nice when I used it, but I have a different workflow for those things right now. But all the stuff besides AI? It's freaking great.
I found the OP comment amusing because Emacs with a Jetbrains IDE when I need it is exactly my setup. The only thing I've found AI to be consistently good for is spitting out boring boilerplate so I can do the fun parts myself.
I always hear this "writing code isn't the bottleneck" line used when talking about AI, as if there are a chosen few engineers who only work on completely new and abstract domains that require a PhD and 20 years of experience that an LLM cannot fathom.
Yes, you're right, AI cannot be a senior engineer with you. It can take a lot of the grunt work away though, which is still part of the job for many devs at all skill levels. Or it's useful for technologies you're not as well versed in. Or simply an inertia breaker if you're not feeling very motivated to get to work.
Find what it's good for in your workflows and try it for that.
Highlighting code and having Cursor show the recommended changes and make them for me with one click is just a time saver over copying and pasting back and forth to an external chat window. I don’t find the autocomplete particularly useful, but the built-in chat is honestly a useful feature.
I'm the opposite. I held out this view for a long, long time. About two months ago, I gave Zed's agentic sidebar a try.
I'm blown away.
I'm a very senior engineer. I have extremely high standards. I know a lot of technologies top to bottom. And I have immediately found it insanely helpful.
There are a few hugely valuable use-cases for me. The first is writing tests. Agentic AI right now is shockingly good at figuring out what your code should be doing and writing tests that test the behavior, all the verbose and annoying edge cases, and even find bugs in your implementation. It's goddamn near magic. That's not to say they're perfect, sometimes they do get confused and assume your implementation is correct when the test doesn't pass. Sometimes they do misunderstand. But the overall improvement for me has been enormous. They also generally write good tests. Refactoring never breaks the tests they've written unless an actually-visible behavior change has happened.
Second is trying to figure out the answer to really thorny problems. I'm extremely good at doing this, but agentic AI has made me faster. It can prototype approaches that I want to try faster than I can and we can see if the approach works extremely quickly. I might not use the code it wrote, but the ability to rapidly give four or five alternatives a go versus the one or two I would personally have time for is massively helpful. I've even had them find approaches I never would have considered that ended up being my clear favorite. They're not always better than me at choosing which one to go with (I often ask for their summarized recommendations), but the sheer speed in which they get them done is a godsend.
Finding the source of tricky bugs is one more case that they excel in. I can do this work too, but again, they're faster. They'll write multiple tests with debugging output that leads to the answer in barely more time than it takes to just run the tests. A bug that might take me an hour to track down can take them five minutes. Even for a really hard one, I can set them on the task while I go make coffee or take the dog for a walk. They'll figure it out while I'm gone.
Lastly, when I have some spare time, I love asking them what areas of a code base could use some love and what are the biggest reward-to-effort ratio wins. They are great at finding those places and helping me constantly make things just a little bit better, one place at a time.
Overall, it's like having an extremely eager and prolific junior assistant with an encyclopedic brain. You have to give them guidance, you have to take some of their work with a grain of salt, but used correctly they're insanely productive. And as a bonus, unlike a real human, you don't ever have to feel guilty about throwing away their work if it doesn't make the grade.
AI is solid for kicking off learning a language or framework you've never touched before.
But in my day to day I'm just writing pure Go, highly concurrent and performance-sensitive distributed systems, and AI is just so wrong on everything that actually matters that I have stopped using it.
Zed was just a fast and simple replacement for Atom (R.I.P.) or VS Code. Then they put AI on top when that showed up. I don't care for it, and I appreciate a project like this returning the program to its core.
You can opt out of AI features in Zed [0].
[0] https://zed.dev/blog/disable-ai-features
Opt-out instead of opt-in is an anti-feature.
How do you opt out of unrequested pop-ups and various helpers, or of the download and installation of binary files without permission?
Can't you just not use / disable AI and telemetry? It's not shoved in your face.
I would prefer an off-by-default telemetry, but if there's a simple opt-out, that's fine?
You can't disable the culture.
It's a question of the business model.
Well said, Zed could be great if they just stopped with the AI stuff and focused on text editing.
Just to echo the sentiment, I've had struggles trying to figure out how to use LLMs in my daily work.
I've landed on using it as part of my code review process before asking someone to review my PR. I get a lot of the nice things that LLMs can give me (a second set of eyes, a somewhat consistent reviewer) but without the downsides (no waiting on the agent to finish writing code that may not work, costs me personally nothing in time and effort as my Org pays for the LLM, when it hallucinates I can easily ignore it).
Have you considered Sublime Text as the lightweight editor?
I think you and I are having very different experiences with these copilots/agents. So I have some questions for you. How do you:
- generate new modules/classes in your projects
- integrate module A into module B or entire codebase A into codebase B?
- get someones github project up and running on your machine, do you manually fiddle with cmakes and npms?
- convert an idea or plan.md or a paper into working code?
- Fix flakes, fix test<->code discrepancies or increase coverage etc
If you do all this manually, why?
> generate new modules/classes in your projects
If it's formulaic enough, I will use the editor templates/snippets generator. Or write a code generator (if it involves a bunch of files). If it's not, I probably have another class I can copy and strip out (especially in UI and CRUD).
> integrate module A into module B
If it cannot be done easily, that's a sign of a less-than-optimal API.
> entire codebase A into codebase B
Is that a real need?
> get someones github project up and running on your machine, do you manually fiddle with cmakes and npms
If the person can't be bothered to provide proper documentation, why should I run the project? But actually, I will look into the AUR (Arch Linux) or a Homebrew formula if someone has already done the first job of figuring out dependency versions. If there's a Dockerfile, I will use that instead.
> convert an idea or plan.md or a paper into working code?
Iteratively. First get a hello world or something working, then mow down the task list.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
Either the test is wrong or the code is wrong. Figure out which and rework it. The figuring part always takes longer, as you will need to ask around.
> If you do all this manually, why?
Because when something happens in prod, you really don't want that feeling of being the last one to have interacted with that part, with no idea of what has changed.
To me, using AI to convert an idea or paper into working code is outsourcing the only enjoyable part of programming to a machine. Do we not appreciate problem solving anymore? Wild times.
I'm pretty fast at coding and know what I'm doing. My ideas are too complex for Claude to just crap out. If I'm really tired I'll use Claude to write tests. Mostly they aren't really good though.
AI doesn't really help me code vs me doing it myself.
AI is better at doing other things...
> how do you convert a paper into working code?
This is something I've found LLMs almost useless at. Consider https://arxiv.org/abs/2506.11908 --- the paper explains its proposed methodology pretty well, so I figured this would be a good LLM use case. I tried to get a prototype to run with Gemini 2.5 Pro, but got nowhere even after a couple of hours, so I wrote it by hand. And I write a fair bit of code with LLMs, but it's primarily questions about best practices or simple errors, and I copy/paste from the web interface, which I guess is no longer in vogue. That being said, would Cursor excel here at a one-shot (or even a few hours of back-and-forth), elegant prototype?
For stuff like generating and integrating new modules: the helpfulness of AI varies wildly.
If you’re using nest.js, which is great but also comically bloated with boilerplate, AI is fantastic. When my code is like 1 line of business logic per 6 lines of boilerplate, yes please AI do it all for me.
Projects with less cruft benefit less. I’m working on a form generator mini library, and I struggle to think of any piece I would actually let AI write for me.
Similar situation with tests. If your tests are mostly “mock x y and z, and make sure that this spied function is called with this mocked payload result”, AI is great. It’ll write all that garbage out in no time.
If your tests are doing larger chunks of biz logic like running against a database, or if you’re doing some kinda generative property based testing, LLMs are probably more trouble than they’re worth
To do those things, I do the same thing I've been doing for the thirty years that I've been programming professionally: I spend the (typically modest) time it takes to learn to understand the code that I am integrating into my project well enough to know how to use it, and I use my brain to convert my ideas into code. Sometimes this requires me to learn new things (a new tool, a new library, etc.). There is usually typing involved, and sometimes a whiteboard or notebook.
Usually it's not all that much effort to glance over some other project's documentation to figure out how to integrate it, and as to creating working code from an idea or plan... isn't that a big part of what "programming" is all about? I'm confused by the idea that suddenly we need machines to do that for us: at a practical level, that is literally what we do. And at a conceptual level, the process of trying to reify an idea into an actual working program is usually very valuable for iterating on one's plans, and identifying problems with one's mental model of whatever you're trying to write a program about (c.f. Naur's notions about theory building).
As to why one should do this manually (as opposed to letting the magic surprise box take a stab at it for you), a few answers come to mind:
1. I'm professionally and personally accountable for the code I write and what it does, and so I want to make sure I actually understand what it's doing. I would hate to have to tell a colleague or customer "no, I don't know why it did $HORRIBLE_THING, and it's because I didn't actually write the program that I gave you, the AI did!"
2. At a practical level, #1 means that I need to be able to be confident that I know what's going on in my code and that I can fix it when it breaks. Fiddling with cmakes and npms is part of how I become confident that I understand what I'm building well enough to deal with the inevitable problems that will occur down the road.
3. Along similar lines, I need to be able to say that what I'm producing isn't violating somebody's IP, and to know where everything came from.
4. I'd rather spend my time making things work right the first time, than endlessly mess around trying to find the right incantation to explain to the magic box what I want it to do in sufficient detail. That seems like more work than just writing it myself.
Now, I will certainly agree that there is a role for LLMs in coding: fancier auto-complete and refactoring tools are great, and I have also found Zed's inline LLM assistant mode helpful for very limited things (basically as a souped-up find and replace feature, though I should note that I've also seen it introduce spectacular and complicated-to-fix errors). But those are all about making me more efficient at interacting with code I've already written, not doing the main body of the work for me.
So that's my $0.02!
> generate new modules/classes in your projects
I type:
or:
> integrate module A into module B
What do you mean by this? If you just mean moving things around then code refactoring tools to move functions/classes/modules have existed in IDEs for millennia before LLMs came around.
> get someones github project up and running on your machine
docker
> convert an idea or plan.md or a paper into working code
I sit in front of a keyboard and start typing.
> Fix flakes, fix test<->code discrepancies or increase coverage etc
I sit in front of a keyboard, read, think, and then start typing.
> If you do all this manually, why?
Because I care about the quality of my code. If these activities don't interest you, why are you in this field?
Didn't Zed recently add a config option to disable all AI features?
> I can kick out some money to essentially "subscribe" for maintenance.
People on HN and other geeky forums keep saying this, but the fact of the matter is that you're a minority and not enough people would do it to actually sustain a product/company like Zed.
It's a code editor so I think the geeky forums are relevant here.
Also, this post is higher on HN than the post about raising capital from Sequoia where many of the comments are about how negatively they view the raising of capital from VC.
The fact of the matter is that people want this and the inability of companies to monetize on that desire says nothing about whether the desire is large enough to "actually sustain" a product/company like Zed.
“I tried the worst one”
"Happy to see this". The folks over at Zed did all of the hard work of making the thing, try to make some money, and then someone just forks it to get rid of all of the things they need to put in to make it worth their time developing. I understand if you don't want to pay for Zed - but to celebrate someone making it harder for Zed to make money when you weren't paying them to begin with -"Happy to PLAN to pay for Zed"- is beyond.
I pay for intellij. I pay for Obsidian.
I would pay for zed.
The only path forward I see for a classic VC investment is the AI drive.
But I don't think the AI bit is valuable. A powerful plugin system would be sufficient to achieve LLM integration.
So I don't think this is a worthwhile investment unless the product gets a LOT worse and becomes actively awful for users who aren't paying beaucoup bucks for AI tooling- the ROI will have to center the AI drive.
It's not a move that will generate a good outcome for the average user.
> I understand if you don't want to pay for Zed
But he does say he does want to pay!
I always have mixed feelings about forks, especially the hard ones. Zed recently rolled out a feature that lets you disable all AI features. I also know telemetry can be opted out of. So I don’t see the need for this fork, especially given the list of features stated. It feels like something that could be upstreamed. I hope that happens.
I remember the Redis fork and how it fragmented that ecosystem to a large extent.
I'd see less need for this fork if Zed's creators weren't already doing nefarious things like refusing to allow the Zed account / sign-in features to be disabled.
I don't see a reason to be afraid of "fragmented ecosystems", rather, let's embrace a long tail of tools and the freedom from lock-in and groupthink they bring.
For what they provide, for free, I'd say refusing to disable login is not "nefarious". They need to grow a business here.
Well, there are features within Zed that are part of the account / sign-in process, so it might be a bit more effort to just "simply comment out login" for an editor that is as fast and smooth as Zed. I don't care that it's there as long as they don't force it on me, which they don't.
I have this take, too. I tried to show how valuable this is to me via github issue, but the lack of an answer is pretty clearly a "don't care."
Even opt-in telemetry makes me feel uncomfortable. I am always aware that the software is capable of reporting the size of my underwear and what I had for breakfast this morning at any moment, held back only by a single checkbox. As for the other features, opt-out stuff just feels like a nuisance, having to say "No, I don't want this" over and over again. In some cases it's a matter of balance, but generally I want to lean towards minimalism.
What makes me uncomfortable is that people with your opinion have to defend their position.
I think your thinking is common sense.
Automatic crash reporting is very useful if you want stable software.
I'm one of the people interested in Zed for the editor tech but disheartened with all the AI by default stuff.
Opt-out is not enough, especially in a program where opting out happens via text-only config files.
I can never know if I've correctly opted out of all the things I don't want.
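For what it's worth, the opt-outs being discussed boil down to a handful of keys in Zed's settings.json. A minimal sketch, assuming key names from recent Zed releases (`disable_ai` is the setting mentioned elsewhere in this thread; verify the others against your version's docs):

```jsonc
// ~/.config/zed/settings.json -- sketch only; key names may differ between Zed versions
{
  "disable_ai": true,        // master switch for AI features
  "auto_update": false,      // assumed key for turning off self-updating
  "telemetry": {
    "diagnostics": false,    // crash reports
    "metrics": false         // anonymous usage info
  }
}
```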
What interests you about Zed that is not already covered by Sublime?
This is why we shouldn't open source things.
All of that hard work, intended to build a business, and nobody is happy.
Now there's a hard fork.
This is shitty.
It's nice to have additional assurance that the software won't upload behind your back on first startup. Though I also run opensnitch, belt and suspenders style.
Not to mention Zed is already open source. I guess the best thing Zed can do is make it all opt-in by default, then this fork is rendered useless.
This fork is useful as a zero-user-value auto-filter for Zed.
Bit premature to post this, especially without some manifesto explaining the particular reason for this fork. The "no rugpulls" implies something happened with Zed, but you can't really expect every HN reader to be in the loop with the open source controversy of the week.
Contributor Agreements are specifically there for license rug-pulls, so they can change the license in the future as they own all the copyrights. So the fact that they have a CA means they are prepping for a rug-pull and thus this bullet point.
I can’t speak for Zed’s specific case, but several years ago I was part of a project which used a permissive license. I wanted to make it even more permissive, by changing it to one of those essentially-public-domain licenses. The person with the ultimate decision power had no objections and was fine with it, but said we couldn’t do that because we never had Contributor License Agreements. So it cuts both ways.
I’m not sure where this belief came from, or why the people who believe it feel so strongly about it, but this is not generally true.
With the exception of GPL derivatives, most popular licenses such as MIT already include provisions allowing you to relicense or create derivative works as desired. So even if you follow the supposed norm that without an explicit license agreement all open source contributions should be understood to be licensed by contributors under the same terms as the license of the project, this would still allow the project owners to “rug pull” (create a fork under another license) using those contributions.
But given that Zed appears to make their source available under the Apache 2.0 license, the GPL exception wouldn’t apply.
CA means: this is not just a hobby project, it's a business, and we want to retain the power to make business decisions as we see fit.
I don't like the term "rug-pull". It's misleading.
If you have an open source version of Zed today, you can keep it forever, even if future versions switch to closed source or some source-available only model.
CLAs represent an important legal protection, and I would never accept a PR from a stranger, for something being developed in public, without one. They're the simplest way to prove that the contributor consented to licensing the code under the terms of the project license, and a CYA in case the contributed code is e.g. plagiarized from another party.
(I see that I have received two downvotes for this in mere minutes, but no replies. I genuinely don't understand the basis for objecting to what I have to say here, and could not possibly understand it without a counterargument. What I'm saying seems straightforward and obvious to me; I wouldn't say it otherwise.)
4 replies →
Are you suggesting the FSF has a copyright assignment for the purposes of “rug pulls”?
7 replies →
Zed is quite well known to be heavily cloud- and AI-focused, it seems clear that's what's motivating this fork. It's not some new controversy, it's just the clearly signposted direction of the project that many don't like.
I remember it started out as a native-app editor that was all about speed. I think it only started focusing on AI after LLMs blew up.
1 reply →
Seems like it might be reacting to, or fanned into flame by: https://github.com/zed-industries/zed/discussions/36604
No, this fork is at least 6 months old. The first PR is dated February 13th.
1 reply →
That's not a rug pull, that's a few overly sensitive young 'uns complaining
38 replies →
[flagged]
3 replies →
They got a VC investment.
But a fork focused on privacy and local-first only needs the absence of those to justify itself. It will have to cut some features that Zed is really proud of, so it's hard to even say this is a rugpull.
> It will have to cut some features that zed is really proud of
What, they're proud of the telemetry?
The fork claims to make everything opt-in and to not default to any specific vendor, and only to remove things that cannot be self-hosted. What proprietary features have to be cut that Zed people are really proud of?
https://github.com/zedless-editor/zedless?tab=readme-ov-file...
As far as I know, the Zed people have open sourced their collab server components (as AGPLv3), at least well enough to self-host. For example, https://github.com/zed-industries/zed/blob/main/docs/src/dev... -- AFAIK it's just https://github.com/livekit/livekit
The AI stuff will happily talk to self-hosted models, or OpenAI API lookalikes.
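As a rough illustration (and only that; the exact settings shape has shifted between releases, so the keys below are my assumption of the current form, and the model name is just a placeholder), pointing the assistant at a local llama.cpp or vLLM server looks roughly like this in settings.json:

    // Hedged sketch: point Zed at a self-hosted OpenAI-compatible endpoint.
    // "qwen2.5-coder-32b-instruct" is a placeholder model name, not a recommendation.
    {
      "language_models": {
        "openai": {
          "api_url": "http://localhost:8000/v1",
          "available_models": [
            { "name": "qwen2.5-coder-32b-instruct", "max_tokens": 32768 }
          ]
        }
      },
      "agent": {
        "default_model": { "provider": "openai", "model": "qwen2.5-coder-32b-instruct" }
      }
    }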
Today we're announcing our $32M Series B led by Sequoia Capital with participation from our existing investors, bringing our total funding to over $42M. - zed.dev
I’m curious how this will turn out. Reminds me of the node.js fork io.js and how that shifted the way Node was being developed.
If there’s a group of people painfully aware of telemetry and AI being pushed everywhere, it's devs…
Related ongoing threads:
Zed for Windows: What's Taking So Long? - https://news.ycombinator.com/item?id=44961172
What I really want from Zed is multi window support. Currently, I can’t pop out the agent panel or any other panels to use them on another monitor.
Local-first is nice, but I do use the AI tools, so I’m unlikely to use this fork in the near term. I do like the idea behind this, especially no telemetry and no contributor agreements. I wish them the best of luck.
I did happily use Zed for about year before using any of its AI features, so who knows, maybe I’ll get fed up with AI and switch to this eventually.
Yes, same here. I tried it out because of all the discussion about it, then saw I couldn’t pop the panel out (or change some really basic settings Cursor has had for over a year), then closed and uninstalled it.
Comment from the author: https://lobste.rs/c/wmqvug
> Since someone mentioned forking, I suppose I’ll use this opportunity to advertise my fork of Zed: https://github.com/zedless-editor/zed
> I’m gradually removing all the features I deem undesirable: telemetry, auto-updates, proprietary cloud-only AI integrations, reliance on node.js, auto-downloading of language servers, upsells, the sign-in button, etc. I’m also aiming to make some of the cloud-only features self-hostable where it makes sense, e.g. running Zeta edit predictions off of your own llama.cpp or vLLM instance. It’s currently good enough to be my main editor, though I tend to be a bit behind on updates since there is a lot of code churn and my way of modifying the codebase isn’t exactly ideal for avoiding merge conflicts. To that end I’m experimenting with using tree-sitter to automatically apply AST-level edits, which might end up becoming a tool that can build customizable “unshittified” versions of Zed.
> reliance on node.js
When did people start hating node and what do they have against it?
For Zed specifically? It cuts directly against their stated goal of being fast and resource-light. Moreover, it is not acceptable for software I use to automatically download and run third-party software without asking me.
For node.js in general? The language isn't even considered good in the browser, for which it was invented. It is absolutely insane to then try to turn it into a standalone programming language. There are so many better options available, use one of them! Reusing a crappy tool just because it's what you know is a mark of very poor craftsmanship.
> When did people start hating node
You're kidding, right?
1 reply →
It shouldn't be as tightly integrated into the editor as it is. Zed uses it for a lot of things, including to install various language servers and other things via NPM, which is just nasty.
You might not be old enough to remember how much everyone hated JavaScript initially - just as an in-browser language. Then suddenly it's a standalone programming language too? WTH??
I assume that's where a lot of the hate comes from. Note that's not my opinion, just wondering if that might be why.
3 replies →
I guess some node.js based tools that are included in Zed (or its language extensions) such as ‘prettier’ don’t behave well in some environments (e.g., they constantly try to write files to /home/$USER even if that’s not your home directory). Things like that create some backlash.
Slow and RAM-heavy. Zed feels refreshingly snappy compared to vscode even before adding plugins. And why does a desktop application need to use an interpreted programming language?
For me, upon its inception. We desperately needed unity in API design and node.js hasn't been adequate for many of us.
WinterTC has only recently been chartered in order to make strides towards specifying a unified standard library for the JS ecosystem.
Love seeing privacy-first approaches to dev tools. This is the same philosophy we apply to compliance tooling.
Your code, your compliance data, your business processes - these shouldn't have to live in someone else's cloud by default. Sometimes local processing isn't just about privacy, it's about performance and reliability. The big platforms want you dependent on their infrastructure. Tools that work offline and keep your data local give you actual control.
Props to the Zedless team for prioritizing user agency over SaaS revenue models.
Thank you.
That's all I have to say right now, but I feel it needs to be said. Thank you for doing this.
The CLA does not change the copyright owner of the contributed content (https://zed.dev/cla), so I'm confused by the project's comments on copyright reassignment.
Maybe not technically correct but it's still the gist of this line, no?
> Subject to the terms and conditions of this Agreement, You hereby grant to Company, and to recipients of software distributed by Company related hereto, a perpetual, worldwide, non-exclusive, no-charge, royalty-free, irrevocable copyright license to reproduce, prepare derivative works of, publicly display, publicly perform, sublicense, and distribute, Your Contributions and such derivative works (the “Contributor License Grant”).
They are allowed to use your contribution in a derivative work under another license and/or sublicense your contribution.
It's technically not copyright reassignment though.
Yes, you grant the entity you've submitted a contribution to the right to use (not own) your contribution in whatever it ends up in. That was the whole point of the developer's contribution, right?
4 replies →
It may not technically reassign copyright, but it grants them permission to do whatever they want with your contributions, which seems pretty equivalent in terms of outcome.
Yes, you grant the entity you've submitted a contribution to the right to use (not own) your contribution in whatever it ends up in. That was the whole point of the developer's contribution, right?
1 reply →
Would be wise to not invoke their name, which is trademarked.
I've been using AI extensively the last few weeks, but not as a coding agent. I really don't trust it for that. It's really helpful for generating example code for a library I might not be familiar with. A month ago, I was interested in using RabbitMQ but the docs were limited. ChatGPT was able to give me a fairly good amount of starter code to see how these things are wired together. I used some of it and added to it by hand to finally come up with what is running in production. It certainly has value in that regard. Letting it write and modify code directly? I'm not ready for that. Other things it's useful for include finding the source of an error when the error message isn't so great. I'll usually copy-paste code that I know is causing the error along with the error message, and it'll point out the issues in a way that I can immediately address. My method is cheaper too; I can get by just fine on the $20/month ChatGPT sub doing that.
Shouldn’t this just be a pull request to Zed itself that hides the AI features behind compile flags? That way the ‘fork’ would just be a build command with a different set of flags and no changes to the actual code.
I don’t view Chrome and Chromium as different projects, but primarily as different builds of the same project. I feel like this will (eventually) go the same way.
I like to think of the relationship between Zed and Zedless more like Chromium and ungoogled-chromium.
I loved Zed Editor. In fact I was using it all the time, but being a "programmer", I wanted to extend it with "extensions", and it was hard for me to roll out my Rust extension with APIs and stuff missing.
I went ahead with VS Code. I had to spend 2 hours making it look like Zed with configs, but I was able to roll out an extension in JavaScript much faster, and VS Code has a lot of APIs available for extensions to consume.
I'm confused about how the "contributors" feature works on GitHub. Is this showing that this fork has 986 contributors and 29,961 commits? Surely that's the Zed project overall. I feel like this gives undue reputation to an offshoot project.
https://github.com/zedless-editor/zed/graphs/contributors
It's contributors to the codebase you're viewing.
It's fair because those people contributed to the codebase you're seeing. Someone can't fork a repo, make a couple commits, and then have GitHub show them as the sole contributor.
Yeah it looks pretty funny. Probably happens because it's not a fork as far as GitHub is concerned (had some problems with that). Looking at PR creators should give you a better idea. It's basically just me right now.
https://github.com/zedless-editor/zed/pulls?q=is%3Apr+is%3Ac...
It's the zed project overall from the point where the fork was created, plus any downstream merges and unique contributions to zedless
Yeah, I get it; it looks like Zedless itself has been going on for a while. However, I'm not sure what the best way to approach this is, since the fork still carries Zed's original commit history.
Software engineers: add OTel to help debug their own products, while relentlessly protesting any telemetry on someone else's.
This fork has around 20 net-new commits on it. The Zed repository has around 30,000 commits. This is a wee bit premature, no?
Was it necessary?
I think we would all be clearly worse off if OSS developers collectively decided to limit themselves to what is "necessary".
This feels unnecessary.
This just reminded me that I have Zed installed but haven't used it at all yet. Neovim is a bit too sticky with all my custom shortcuts. Will uninstall it and try this version out when I eventually decide to migrate
I knew it was a matter of time before this happened. I even considered starting it myself, but didn't want the burden of actually maintaining it.
I even thought of calling it zim (zed-improved.. like vim). Glad to see the project!
I think this guy has to be trolling in the testimonials page:
“Privacy focus” that states to “No reliance on proprietary cloud services” should not hypocritically lock their code & collaboration behind Microsoft’s GitHub.
[dead]
Why not just use Sublime Text? It even has LSP! https://lsp.sublimetext.io/
I, on the other hand, would probably only switch to Zed with the AI integration. Want to learn a new language? Using AI speeds it up by months.
Zed makes it incredibly easy to both turn off telemetry and use your own LLM inference endpoints. So why is this needed?
If this project receives yet another fork, might I recommend naming it Zedless Zed Zero?
https://en.m.wikipedia.org/wiki/Zenless_Zone_Zero
Right On! I use Zed and appreciate what the team is building.
So, what’s Zed?
Zed is a really really nice editor. I consider the AI features secondary but they have been useful here and there. (I usually have them off.) You can use it like cursor if you want to.
Where I think it gets really interesting is that they are adding features to compete with Slack. Imagine a tight integration between Slack huddles and VS Code's collaborative editing. Since it's built from scratch, it's much nicer than both. I'm really excited about it.
Zed's dead, baby. Zed’s dead.
Padadadap - Sound of fingers on a leather hood...
An AI editor, a competitor to Cursor but written from scratch and not a VS Code fork. They recently announced a funding round from Sequoia. https://news.ycombinator.com/item?id=44961172
I don't understand why people say X is a competitor to Cursor, which is built on Visual Studio Code, when GitHub Copilot came out first, and is... built on Visual Studio Code.
It also didn't start out as a competitor to either.
It wasn't an AI editor for a long time
1 reply →
Even without any AI stuff, it's a fantastic editor for its speed.
2 replies →
Code editor. Imagine VSCode, but with a native GUI for each platform it supports and fewer plugins. And a single `disable_ai` setting that you can use to toggle those kinds of features off or on.
Watch the video on https://zed.dev/, apparently it's really good at quickly cycling through open documents at 120Hz while still seeing every individual tab. Probably something people asked for at some point.
Spiritual successor to Sublime Text. They’ve been doing a lot of AI stuff but originally just focused on speed.
https://zed.dev/
https://en.wikipedia.org/wiki/Atom_(text_editor)
More like a spiritual successor to Atom, at least per the people that started it who came from that project.
3 replies →
A code editor with a lot of rough edges. If they don't start polishing the turd, I doubt they'll make it.
[flagged]
The reason I’ve been using Zed is _because_ there is no screwing about with any of that stuff. For Erlang and Elixir it’s been less problematic than IntelliJ, faster and less gross than VS code, and hasn’t required me to edit configuration files other than to turn the font size up.
Sorry, I couldn't hear you over the nvim startup time and keyboard noises while you wait for your IDE to start.
10 replies →
Harsh but true.
This is awesome. Honestly, with the release of Qwen3-Coder-30B-A3B, we have a model that's pretty close to the perfect local model. Obviously the larger 32B dense one does better, but the 30B MoE model handles agentic work pretty well and is great at FIM/autocomplete.
I would like to try Zed, but it doesn't run on my system due to impenetrable MESA/Vulkan errors with Intel UHD 700, even though vkcube runs fine.
Running a text editor should not be this hard, it's pretty ridiculous. Sublime Text is plenty fast without this nonsense.
I welcome this: now we get Zed for free, with privacy on top and without all the AI features that nobody asked for.
As soon as any dev tool gets VC backing there should be an open source alternative to alleviate the inevitable platform decay (or enshittification for lack of a better word)
This is a better outcome for everyone.
Some of us just want a good editor for free.
> Some of us just want a good editor for free.
Sums up the problem neatly. Everyone wants everything for free. Someone has to pay the developers. Sometimes things align (there is indeed a discussion on LinkedIn today about Apple hiring the OPA devs), but mostly they don't.
> Someone has to pay the developers.
Agreed. Although nobody ever mentions the 1,100+ developers that submitted PRs to Zed.
And yeah. I know what you mean. But this is the other side of the OSS coin. You accept free work from outside developers, and it will inevitably get forked because of an issue. But from my perspective, it's a great thing for the community. We're all standing on the shoulders of giants here.
[dead]
[dead]
[flagged]
???
The first line of the README:
> Welcome to Zed, a high-performance, multiplayer code editor from the creators of Atom and Tree-sitter.
The second line of the README (with links to download & package manager instructions omitted):
> Installation
> On macOS and Linux you can download Zed directly or install Zed via your local package manager.
I do not dispute that HN is an echo chamber. But how did you come to your conclusions?
[flagged]
I like this but can we stop calling product telemetry “spyware” please.
It kind of is. I don't want Richard Stallman knowing every time I open a file in emacs or run the ls command. Keep that crap out of local software. There should be better ways to get adoption metrics for your investors, like creating a package manager for your software, or partnering with security companies like Wiz. If you have telemetry, make it opt-in, and help users understand that it benefits them by being a vote in what bugs get fixed and what features get focused on. Then publish public reports that aggregate the telemetry data for transparency, like Mozilla and Debian do.
It is a tool for developers. Give them a link to your bug tracker and let them tell you themselves.
1 reply →
No. It's spyware. Software authors/vendors have no right to collect telemetry and it ought to be illegal to have any such data collection and/or exfiltration running on a user's device by default or without explicit, opt-in consent.
It already is in Europe thanks to GDPR. Just not enough formal complaints or lawsuits (yet); e.g. IP addresses are explicitly Personally Identifiable Information.
Why? Any non-opt-in product telemetry is spyware, and you have no idea what they'll do with the data. And if it's an AI company, there's an obvious thing for them to do with it.
(Opt-in telemetry is much more reasonable, if it's clear what they're doing with it.)
Collection of data from code completions is off by default and opt-in. It also only collects data when one of several allowlisted open source licenses is present in the worktree root.
Options to disable crash reports and anonymous usage info are presented prominently when Zed is first opened, and can of course be configured in settings too.
We can stop calling it spyware once it is not spyware (will never happen).
It is spyware tho.
If it collects information from someone, and they don't want it to, then it is spying.
I am deeply disappointed in how often I encounter social pressure, condescending comments, license terms, dark patterns, confidentiality assurances, anonymization claims, and linguistic gymnastics trying to either convince me otherwise or publicly discredit me for pointing it out. No amount of those things will change the fact that it is spyware, but they do make the world an even worse place than the spyware itself does, and they do make clear that the people behind them are hostile actors.
No, we will not stop calling it what it is.
On the same day a Code of Conduct violation discussion was opened against Zed for accepting funding from Sequoia after Maguire's very loud and very public Islamophobia and open support for occupation and genocide: https://github.com/zed-industries/zed/discussions/36604