Comment by solenoid0937
9 hours ago
> whether the company that branded itself as the ethical AI lab actually is one
FWIW I have two(!!) close friends working for Anthropic, one for nearly two years and one for about 4 months.
Both of them tell me that this is not just marketing, that the company actually is ethical and safety-conscious everywhere, and that this was the most surprising part about joining Anthropic for them. They insist the culture is actually genuine, which is practically unicorn-rare in corporate America.
We have all worked for FAANG companies, so I know where they're coming from; this got me to drop my cynicism for once, and I plan on interviewing with them soon. Hopefully I can answer this question for myself.
Yeah, every engineer in the bay area has a way of framing the business they work for as a benign force for good... until they find themselves working somewhere else; then suddenly they have a lot to say about the unacceptable things going on there.
From the outside, I find Anthropic's hyperbolic marketing to be an indication that they are basically the same as every other bay area tech startup: more or less nice folks who are primarily concerned with money and status. That's not a condemnation, but I reject all the "do no evil" fanfare as conveniently self-serving.
My model is that Anthropic was founded by OpenAI engineers who self-selected for safety-consciousness. However, it's still subject to the same problem: power corrupts. I think they are better than OpenAI but they are definitely sliding.
> every engineer in the bay area has a way of framing the business they work for as a benign force for good
This isn't remotely true in my experience. The senior folks I know at Meta, for example, pretty much concede they're ersatz drug dealers.
Indeed. The bad behavior is emergent, where most individual intentions are good. Good story, bad outcome.
TBH I have worked at multiple FAANG companies, and I don't know anyone other than maybe new grads who actually drank the koolaid.
Certainly most of us know we are just in it for the money, and the soul-grinding profit machine will continue to grind souls for profit regardless of what we want.
So that's why it is surprising to me when my (fairly senior) grizzled ex-FAANG friends, who share the same view, start waxing poetic about Anthropic being different and genuine. I think "maybe it is" and decide to interview. IDK, I guess some part of me wants to believe that nice things can exist.
I find it bizarre that even the public image of Anthropic is seen as ethical after the Department of War debacle, in which they themselves admitted they had basically no qualms with their tech being used for war and slaughter, save for two very, very thin lines: mass surveillance of American citizens and fully automated weaponry with their current models.
It only showed they were marginally more ethical than OpenAI and XAI which isn't saying much.
Anthropic has two principles they're willing to stand behind, even when it costs them. That's not a lot, but OpenAI only has one principle: look out for number one.
If you know even the basics of ethics, then such claims are clearly nonsense. There is no stable, context-independent ethical behaviour. This is a great example of the dangers of motivated reasoning.
I have multiple friends at Anthropic. I can second this. One thing I notice about Anthropic culture is that it is unusually kind.
So much so that I worry they won't be Machiavellian enough to survive. I hope I am wrong.
I think cynicism is deserved just from observing Dario's remarks.
[flagged]
It might stick tbh. Their PBC+LTBT structure severely limits the power of shareholders. https://www.anthropic.com/news/the-long-term-benefit-trust