Comment by saghm

15 hours ago

> OpenAI acceded to demands that the US Government can do whatever it wants that is legal. Anthropic wanted to impose its own morals into the use of its products.

What if Anthropic's morals are "we won't sell someone a product for something that it's not realistically capable of doing with a high degree of success"? The government can't do something if it's literally impossible (e.g. "safe" backdoors in encryption), but it's legal for them to attempt it even when failure is predetermined. We don't know that's what's going on here, and you haven't provided any evidence sufficient to differentiate between those scenarios, so it's fairly misleading to phrase it as fact rather than conjecture.

Isn't it more accurate here to consider OpenAI and Anthropic as service providers rather than manufacturers of a product?

  • The service they provide is on-premises deployment, I guess. But what they are deploying is a product.

    • The relevant (unanswered?) question for this thread is who's operating and managing that deployment, and to what extent the provider (or subcontracted FDEs) is involved in integrations. I would be surprised to learn that deployments are actually independently operated. Sure, the machinery can be considered a product, but the associated service and support engagements are at least as relevant to take into account.