Comment by catapart

1 day ago


They have a single switch that will remove all AI features from the interface. Why do you need more than that? This is not a rhetorical question. I genuinely don't understand it. If you can get all of those features completely out of sight, deactivated, with no trace of them left except that one switch, why is that not enough? Is it that any kind of AI integration somehow contaminates the purity of the code or something?

  • I need more than that because I have no guarantee that it's true. I need the source. Or I at least need them to provide a build that they promise doesn't have that stuff in it at all, so that if any analysis was done on a decompilation, there would be some level of certainty that they were telling the truth. Anything that leaves any of it in complicates that effort and makes the certainty that much less certain.

    RESPONSE EDIT (clear and intentional rate-limit evasion): It's not paranoia; I'm not concerned if they "take" my content. I write open source, CC0 licensed software. I couldn't give a fuck about anyone doing literally anything they want to do with the code I write. Literally take it and call it your own, for all I care. If I can return the interrogation, why are you so concerned with ownership? Why was that the first place your mind jumped to? Paraphrasing: where is the need for this insane level of "if you've got nothing to hide..." submission?

    Like I said: it's about trust. They want me to trust them. You, for some inexplicable reason, seem really upset that I won't trust them. Neither of the parties have given me any reason to trust them. Just insistence that I should, if I want to use their product. And while I entirely agree with that rationale, I don't understand why I get clapbacks for stating that I intend to adhere to that agreement entirely! Won't use the product because they won't give me what I need to trust them. That should be making everyone happy, right? I know I'm happy with the arrangement, at least.

    Aside from all that, and far more relevant to my actual comments: another user pointed out the repository where they DO offer the transparency that I'm asking for. So your entire hissy fit is moot when you could have just pointed out that I was wrong in my understanding of what they offered. I mean, that would have gotten in the way of your sycophantic leap to the defense of the company I was so hellaciously attacking, so I understand why a good capitalist bootlicker might not think of that first, but at least now we both know!

    • You still haven't explained literally anything. Yeah, okay, if there's a switch, you can't be sure that every single AI related code path is fully disabled.

      But if you flip the switch and there isn't any AI integration visible in the interface anymore to bother you, why does it matter whether the code is there or not, or technically active or not? Raw integration points and settings windows don't send data literally anywhere at all until you explicitly configure an API key and a URL or sign in to an AI provider or whatever. It'd have nowhere to go, and AI inference costs money. It's just local code providing a set of integrations. Where is the need for this insane level of paranoia?

    • So you trust a build they say doesn't have AI features, but not a switch that they say turns off the AI features? Doesn't seem like a logically consistent stance to me.

      Plus, you can just packet sniff and see if they're doing anything AI related when the switch is off.
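A minimal sketch of that check (Linux-only; it reads the machine-wide `/proc/net/tcp` socket table rather than filtering to one process, and the loopback filter is my assumption of what "doing anything AI related" would rule out, not anything the editor documents): flip the switch off, run this, and see whether the host is opening connections to anything unexpected.

```python
# Decode /proc/net/tcp entries to list which remote hosts the machine
# is currently talking to (e.g. while the editor's AI switch is off).

def decode_endpoint(hex_endpoint: str) -> tuple[str, int]:
    """Turn '0100007F:1F90' (little-endian hex IP:port) into ('127.0.0.1', 8080)."""
    ip_hex, port_hex = hex_endpoint.split(":")
    # /proc/net/tcp stores IPv4 addresses as little-endian 32-bit hex.
    octets = [str(int(ip_hex[i:i + 2], 16)) for i in range(6, -2, -2)]
    return ".".join(octets), int(port_hex, 16)

def remote_peers(proc_net_tcp_text: str):
    """Yield (ip, port) for every non-loopback remote endpoint in the table."""
    for line in proc_net_tcp_text.splitlines()[1:]:  # skip header row
        fields = line.split()
        if len(fields) < 3:
            continue
        ip, port = decode_endpoint(fields[2])  # rem_address column
        if not ip.startswith("127."):
            yield ip, port

if __name__ == "__main__":
    with open("/proc/net/tcp") as f:
        for ip, port in remote_peers(f.read()):
            print(f"{ip}:{port}")
```

Capturing traffic with `tcpdump`/Wireshark gives you the payloads too, but just enumerating remote endpoints like this is often enough to confirm the "it has nowhere to send anything" claim.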

    • RESPONSE TO EDIT:

      You still haven't even answered my question.

      Why are you so concerned by there being AI code in the editor that you need this level of trust?

      The point I have been attempting to make is that needing this level of trust and verification when you can't even explain or articulate what you're worried about at all is weird and confusing and unjustified, and I've been trying to get you to explain what you're so concerned about.

      Required trust/verification should be proportional to what you're concerned about happening.

      > Paraphrasing: where is the need for this insane level of ... submission?

      It is not an "insane level of submission" to point out that trusting a toggle to do what it says it does is probably reasonable when the possible consequences are basically nonexistent. As I said: without a connection to an AI provider set up, where is it going to send your data? No one is doing AI inference for free. And now you've even knocked out the concern about code ownership, so again, what really is the concern?

      Also, this is not remotely the equivalent of the old "if you've got nothing to hide" canard, because "if you've got nothing to hide, you've got nothing to worry about" is a justification for surveillance of your personal life; this is literally not doing surveillance. Also, even worrying about surveillance needs to be justified with an actual explanation of what negative things you expect to happen; with surveillance that's obvious; with this, it is not, which is why I'm asking you to explain it.

      > Like I said: it's about trust. They want me to trust them. You, for some inexplicable reason, seem really upset that I won't trust them. Neither of the parties have given me any reason to trust them. Just insistence that I should, if I want to use their product. And while I entirely agree with that rationale, I don't understand why I get clapbacks for stating that I intend to adhere to that agreement entirely! Won't use the product because they won't give me what I need to trust them. That should be making everyone happy, right? I know I'm happy with the arrangement, at least.

      I'm annoyed because your standards for trust are insane when you can't even articulate what you're trying to guard against. You're getting clapbacks not because you won't use a product you don't trust, but because your standards for trust are extremely high and you seem completely unable to explain why, instead getting irrationally angry at me just for asking.

      > So your entire hissy fit is moot when you could have just pointed out that I was wrong in my understanding of what they offered

      You're very clearly the one having a hissy fit lol.

      > I mean, that would have gotten in the way of your sycophantic leap to the defense of the company I was so hellaciously attacking, so I understand why a good capitalist bootlicker might not think of that first, but at least now we both know!

      Ah, and now we get to the personal attacks. Of course.

That actually exists. I initially avoided Zed for similar reasons, then someone pointed me to Gram. It does come with all the drawbacks of disabling AI, telemetry, and phone-home, and of attempting to protect your privacy. It is harder to get started with than Zed, but it's a nice editor.

https://gram.liten.app

I last tried Zed about a year ago, and personally have no need for an editor I cannot run in the terminal, but couldn't you just turn off those features and/or not use them?

As mentioned, I don't know how much they get in the way. If you don't use them, do they interfere with "normal" usage somehow?

  • It's mostly a trust thing. You're asking why I don't trust that if I "turn it off" it will be off. The answer to that is: every US company I've ever dealt with (eventually, in some cases). I don't want to trust you. I don't want you to trust me. I want to provide you with pure transparency, and I want you to provide me with that. And, if they did that, I would trust them more. Maybe even enough to install something that they swear turns off when I tell it to (and stays off; never flipping back on, even accidentally, across sessions/devices/locations/etc.). But without that transparency, I don't trust them any more than I trust facebook or google, and I consider any prompting to "just trust them, bro" as simp shit. You trust them. I'm good.

That's silly. Turn it off if you don't want to use it, but don't expect anyone to build a special fork for you.

  • I don't. I'm asking them to let me make my own fork, like I can do with VSCode.

    RESPONSE EDIT (clear and intentional rate-limit evasion): hey thanks! that changes things quite a bit! Now I'm curious how well Claude could vibe-code the AI out of that project. Mostly just for the irony of it, but I can't deny that it would probably be faster than doing it myself - at least to start.

    anyway, I appreciate the simple and straightforward solution without getting side-tracked by how my ignorance and misunderstandings made you feel.

    • You can do that; from their repo it seems it's licensed under GPLv3. I'd expect someone to do a fork in the same vein as vscodium or ungoogled-chromium. Maybe call it unzedium.