Comment by lubujackson
16 hours ago
I guess this is Anthropic's "don't be evil" moment, but it carries about as much (actually much less) weight than when it was Google's motto. There is always an implicit "...for now".
No business is ever going to maintain any "goodness" for long, especially once shareholders get involved. This is a role for regulation, no matter how Anthropic tries to delay it.
At least when Google used the phrase, it had relatively few major controversies. Anthropic, by contrast, works with Palantir:
https://www.axios.com/2024/11/08/anthropic-palantir-amazon-c...
It says:

> This constitution is written for our mainline, general-access Claude models. We have some models built for specialized uses that don’t fully fit this constitution; as we continue to develop products for specialized use cases, we will continue to evaluate how to best ensure our models meet the core objectives outlined in this constitution.
I wonder what those specialized use cases are and why they need a different set of values. I guess the simplest answer is that they mean small FIM and tool models, but who knows?
https://www.anthropic.com/news/anthropic-and-the-department-...
> Anthropic incorporated itself as a Delaware public-benefit corporation (PBC), which enables directors to balance stockholders' financial interests with its public benefit purpose.
> Anthropic's "Long-Term Benefit Trust" is a purpose trust for "the responsible development and maintenance of advanced AI for the long-term benefit of humanity". It holds Class T shares in the PBC, which allow it to elect directors to Anthropic's board.
https://en.wikipedia.org/wiki/Anthropic
Google didn't have that.
> This is a role for regulation, no matter how Anthropic tries to delay it.
Regulation like SB 53 that Anthropic supported?
https://www.anthropic.com/news/anthropic-is-endorsing-sb-53
Yes, just like that. Supporting regulation at one point in time does not undermine the point that we should not trust corporations to do the right thing without regulation.
I might trust the Anthropic of January 2026 20% more than I trust OpenAI, but I have no reason to trust the Anthropic of 2027 or 2030.
There's no reason to think it'll be led by the same people, so I agree wholeheartedly.
I said the same thing when Mozilla started collecting data. I kinda trust them, today. But my data will live with their company through who knows what: leadership changes, buyouts, law enforcement actions, hacks, etc.
I don’t think the “for now” is the issue so much as the fact that nobody thinks they are doing evil.