
Comment by lukewrites

1 day ago

I admire Anthropic for sticking to their principles, even if it affects the bottom line. That’s the kind of company you want to work for.

It's also a very clear differentiator for them relative to Google, Facebook, and OpenAI, all of whom are willing, to clearly varying degrees, to sell themselves out for evil purposes.

Companies change (remember "don't be evil"?) but yeah for the Anthropic of today, respect.

The team that handles their PR has done an amazing job in the last 9 months.

  • Hint: It's much easier to have good PR by being actually good. Though it does make people like this do the whole implication thing.

    • Ah, right, by being actually good, as in - being okay with mass surveillance as long as it isn't being done in the US, being okay with Claude assisting in killing people as long as it isn't fully autonomous, and being actively hostile to open-weight LLMs and open research on LLMs? This kind of "good"?

      No, OP is right, their PR department is doing a great job.


  • Why? What has their PR department done? Most people are quite critical of a lot of their messaging; it's their actions that seem worth encouraging.

  • [flagged]

    • It's funny, because even if they walk it back, they would still come out ahead in PR versus if they had just rolled over. Because at that point, it would look like a hostage reading a statement on camera saying they are being treated well by their captors.

    • Do you think bad things happening is just hilarious in general? Do you like seeing good behavior punished? I'm really trying to understand what you get out of making this comment. Also, what happens when ... this doesn't happen? You've just polluted the epistemic commons a bit more with some cynical bullshit, sans consequence? Enough. I think it's time to start calling this garbage out when I see it.


This whole saga is extremely depressing and dystopic.

Anthropic is holding firm on incredibly weak red lines: no mass surveillance of Americans (but fine for everyone else), and automated war machines are acceptable, just not fully unmanned ones until a certain quality can be guaranteed.

This should be a laughably spineless position. But under this administration it is taken as an affront to the president and results in the government lashing out.

  • We live in a timeline where you don’t have to have strong morals to be crushed. If you have any morals, you will be crushed.

If you're a billionaire there's no risk in "sticking to principles", so there's nothing to admire. Also, that's not what they're doing. These are calculated moves in a negotiation, and the Trump regime only has three years left. Even a CEO can think four years ahead.

It's probably in Anthropic's interest to throw Grok to these clowns and watch them fail to build anything with it for three years.

  • I disagree. Three years is an insanely long time in the AI space. The entire industry pretty much didn't even exist three years ago! Or at least not within four orders of magnitude.

    Also, every other company has bent the knee and kissed the ring. And the Trump admin will absolutely do everything they can to not appear weak, which means harming Anthropic. If it were so easy to act principled, don't you think other companies would've refused too? E.g., Apple.

    And there is real harm here. You're reading about it: they get labeled a supply-chain risk. This is negative and very tangible.

[flagged]

  • Why does it need to be a completely different trained model? AWS doesn't provide unique technologies in their government cloud, beyond isolation and firewalled access; Anthropic can do the same thing. They'd probably only need to cough up enough to register a new domain name!

    • I can think of two reasons. One, to have plausible deniability with the necessary future statement "Claude is not used by the DoD/DoW to conduct domestic mass surveillance or autonomous killing"; by having the model be properly different from the one used by the public, they can wrangle over the language with technicalities and still avoid outright lying. (With their IPO in sight, let's keep in mind that everything is securities fraud.)

      And two, I suspect that some of the guardrails have been "baked in" to Anthropic's model. Much in the same way as the Chinese open-weight models have a strong bias against expressing positive sentiments about Tiananmen Square, Tank Man or Winnie the Pooh, the "Standard Claude" would likely have the fundamental product biases trained into it.

      Taken together it would therefore be both politically and financially sensible for Anthropic to create a separate, unrestricted[tm] almost-Claude for the morally unconstrained military / intelligence purposes.

> 83 people in total killed in US attack to abduct President Nicolas Maduro

Blood is on their hands already

  • So much left unsaid. So much implied. Let's make it explicit and talk about it. Here are some follow-up questions that reasonable people will ask:

    What was Anthropic's role in the Maduro operation? (Or we can call it state-sponsored kidnapping.) Who knew what, and when? Did Anthropic find itself in a position that contradicted its core principles?

    More broadly, how does moral culpability work in complex situations like this?

    How much moral culpability gets attributed to the manufacturer of a helicopter used in the Maduro operation? (Assuming one was used; you see my meaning, I hope.)

    P.S. Traditional programming is easy in comparison to morality.