Comment by Nition

3 days ago

Let's say Anthropic refuses to do this. What actually happens next?

Or let's say they refuse and the government comes down on them hard in some way, and Anthropic still really doesn't want to do it, so they just dissolve the entire company. Is that a potential way out, at least?

I mean, I realise they'd be losing billions by doing that and putting thousands out of work, but given that unaligned military AI could destroy the world...

Seems like the two main threats are the Defense Production Act and the Supply Chain Risk designation. I'd assume Anthropic would sue if either were invoked. I could imagine the Supply Chain Risk route being easier to push back on, because it's pretty clearly being used punitively rather than because of an actual risk. The DPA might be harder to push back on if the banned functionality (i.e. mass surveillance and autonomous weapons) exists in the LLM itself and it's just a matter of disabling external checks. If the refusals are baked into the training data/weights directly, they could probably push back on the DPA by arguing the functionality isn't something they can reasonably create.

The only other precedent I can think of, for the case where pushback fails, is Lavabit with Edward Snowden's email, but I feel like Anthropic is too big to "fail" the way Lavabit did to avoid complying. The penalty for refusing to comply with the Defense Production Act is $10k and/or a year in prison, but I think if the government actually pursued that they would burn a bunch of bridges and Amodei would be a folk hero.

  • I'm wondering exactly how they expect the DPA to help them with what is essentially a SaaS product. It's still going to refuse to do things it refuses to do.

    • My thought was that if the refusal to service some requests is implemented as an external guard model, the Pentagon could try to require them to drop the guard model. This would be similar to saying "we're asking for a 'product' you already 'manufacture'", in the way the DPA is often understood. But if the refusal is baked into the model itself, then that argument is dead. Not saying I agree with this; I think it turns into the same kind of problem we saw with the Apple v. FBI conflict and the All Writs Act, but the government doesn't always act in the most sane ways.
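
      The distinction being drawn above can be sketched in a few lines. This is a hypothetical toy, not how Anthropic actually implements refusals; the function names (`guard_allows`, `base_model_reply`, `serve`) are invented for illustration:

      ```python
      def guard_allows(prompt: str) -> bool:
          """External guard: a separate check run before the main model.
          Disabling refusals here is just a matter of deleting this layer,
          which is what a 'drop the guard model' demand would target."""
          banned = ("mass surveillance", "autonomous weapons")
          return not any(term in prompt.lower() for term in banned)

      def base_model_reply(prompt: str) -> str:
          """Stand-in for the main LLM. If refusal is trained into the
          weights, the 'I can't help with that' behaviour lives in here
          and cannot be switched off by removing an outer wrapper."""
          return f"answer to: {prompt}"

      def serve(prompt: str, use_guard: bool = True) -> str:
          # The guard is an external wrapper around the model, so a
          # compelled change only needs to flip this one flag.
          if use_guard and not guard_allows(prompt):
              return "refused by guard"
          return base_model_reply(prompt)
      ```

      In the wrapper architecture, `serve(prompt, use_guard=False)` yields the underlying answer, so "you already manufacture this product" has some surface plausibility; in the baked-in case there is no equivalent flag to flip.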