Comment by norir

3 days ago

Have you ever worried that by programming in this way, you are methodically giving Anthropic all the information it needs to copy your product? If there is any real value in what you are doing, what is to stop Anthropic or OpenAI or whomever from essentially one-shotting Zed? What happens when the model providers raise their prices 10x, use the information you've so enthusiastically given them to clone your product, and use the money you paid them to squash you?

That's what things like AWS Bedrock are for.
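
For context: Bedrock serves Anthropic's models from inside AWS infrastructure, and AWS's stated policy is that Bedrock prompts and completions are not shared with the model providers or used for training. A minimal sketch of calling Claude through Bedrock with boto3, assuming AWS credentials are already configured (the model ID and region here are illustrative, not a recommendation):

    # Minimal sketch: assumes boto3 is installed, AWS credentials are
    # configured, and this Anthropic model is enabled in the account/region.
    import boto3

    client = boto3.client("bedrock-runtime", region_name="us-east-1")

    # The Converse API gives a uniform request shape across Bedrock models.
    response = client.converse(
        modelId="anthropic.claude-3-5-sonnet-20240620-v1:0",
        messages=[{"role": "user", "content": [{"text": "Hello, Claude."}]}],
        inferenceConfig={"maxTokens": 256},
    )

    print(response["output"]["message"]["content"][0]["text"])

Which is the parent comment's point: the request hits an AWS endpoint under your AWS account and AWS's data terms, rather than going to the model vendor's own API.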

Are you worried about Microsoft stealing your codebase from GitHub?

  • Isn’t it widely assumed Microsoft used private repos for LLM training?

    And even with a narrower definition of stealing, Microsoft's ability to share your code with US government agencies is a common and entirely legitimate worry under plenty of threat models.