Comment by bogzz
13 hours ago
We don't know how the military intended to use Claude, nor do we (or the military, for that matter) know whether Claude without RLHF-imposed safety would have been more useful to them.
Ergo, this is a very convenient PR opportunity. The public assumes the worst, and this is egged on by Anthropic with the implication that CLAUDE is being used in autonomous weapons, which I find almost amusing.
He can now say goodbye to $200 million, and make up for it in positive publicity. Also, people will leave thinking that Claude is the best model, AND Anthropic are the heroes that staved off superintelligent killer robots for a while.
Even setting this aside, Dario is the silly guy who's "not sure whether Claude is sentient or not", who keeps using the UBI narrative to promote his product with the silent implication that LLMs actually ARE a path to AGI... Look, if you believe that, then that is where we differ, and I suppose the notion that Amodei is a moral man then becomes comprehensible.
Oh, also the stealing. All the stealing. But he is not alone there by any means.
edit: to actually answer your question, this act in itself is not what prompted me to say that he is an immoral man. Your comment did.
> to promote his product with the silent implication that LLMs actually ARE a path to AGI
That isn't implied. The thought process is a) if we invent AGI through some other method, we should still treat LLMs nicely because it's a credible commitment we'll treat the AGI well and b) having evidence in the pretraining data and on the internet that we treat LLMs well makes it easier to align new ones when training them.
Anyway, your argument seems to be that it's unfair that he has the opportunity to do something moral in public because it makes him look moral?
His actions seem pretty consistent with a belief that AI will be significant and societally transformative in the future. You can disagree with that belief, but that's different from him being a liar.
The $200m is not the real risk here. They threatened to label Anthropic a supply chain risk, which would be genuinely damaging.
> The DoW is the largest employer in America, and a staggering number of companies have random subsidiaries that do work for it.
> All of those companies would now have faced this compliance nightmare. [to not use Anthropic in any of their business or suppliers]
... which would impact Anthropic's primary customer base (businesses). Even for those not directly affected, it creates uncertainty around the brand.