Comment by lebovic
9 hours ago
I used to work at Anthropic, and I wrote a comment on a thread earlier this week about the RSP update [1]. It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Something I don't think is well understood on HN is how driven by ideals many folks at Anthropic are, even if the company is pragmatic about achieving its goals. I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) in their view a net negative in the long term. (Many others would, too; these are just the well-known names.)
That doesn't mean that I always agree with their decisions, and it doesn't mean that Anthropic is a perfect company. Many groups that are driven by ideals have still committed horrible acts.
But I do think that most people who are making the important decisions at Anthropic are well-intentioned, driven by values, and genuinely motivated by trying to make the transition to powerful AI go well.
Idk man, from the outside Anthropic looks a lot like OpenAI with a cute redesign, and Amodei like Altman with a slightly more human face mask: the same media manipulation, the same vague baseless affirmations that "something big is coming and we can't even describe it, but trust us, we need more money"
> the same vague baseless affirmations about "something big is coming and we can't even describe it but trust us we need more money"
This is pretty low on my list of moral concerns about AI companies. The much more concerning and material issues are…what this thread is actually meant to be about.
VCs don’t need me to feel sorry for them if their due diligence is such that they’re swindled by a vague claim of “something being around the corner”, nor do they need yours. You aren’t YC.
Even just the fact that Amodei is publicly bringing up these issues, rather than doing behind-closed-doors deals with the Department of Defense (yes, that's still the official name), is more than Altman has done for AI safety.
Don't you always need more money though? I am a chip designer and I can tell you I am resource intensive to employ. I want access to plenty of expensive programs and data. With more money comes better tools, and better tools frequently lead to the quality results you want to deliver to the customer.
Do you tell your customers you need money to build better chips, or that you need more money because your next generation of chips will channel Jesus' soul back to earth and cure cancer?
They both work in the same market, but they have pretty different careers and understandings. I simply can't understand why on Earth people would choose Altman over Amodei to trust on these kinds of pretty important questions. This is not about who is the more savvy investor maximizing shareholder value. I personally don't care whose company grows bigger or goes bust first, OpenAI or Anthropic. The real stakes are different, and Amodei is better suited to be trusted with these decisions. Unfortunately, the best choices do not seem to fit well with either the federal political climate or the mainstream business ethics of Silicon Valley. Not that our opinion would matter...
Good for you? You’re just talking about vibes. Vibes are a baseless thing to go on.
This is a wantrepreneur forum, not a peer-reviewed scientific journal; my opinions about vibes matter as much as private companies' PR campaigns
There should be a name for this, "cynic cope": when someone actually takes a principled stand, the cynic - who has a completely negative view of the world - is proven wrong, can't accept it, and tries to somehow discount it.
Corporations do not and cannot have principles; they have only the profit motive
I've had so much abuse thrown at me on here for saying this very thing over the last few years. I used to be friends with Jack back in the day, before this AI stuff even kicked off. Once you know who people really are inside, it's easy to know how they will act when the going gets rough. I'm glad they are doing the right thing, but I'm not at all surprised, nor should anyone be. Personally, I believe they would go to jail/shut down/whatever before they do something objectively wrong.
> I used to be friends with Jack back in the day, before this AI stuff even all kicked off, once you know who people really are inside, it's easy to know how they will act when the going gets rough.
This sounds quite backwards to me. It's become abundantly clear in today's times that, in fact, you only really know who somebody is when they're under stress. Most people, it seems, present a different facade when there is nothing at stake.
Hm, I think you kinda know what people are like by seeing what they do when they’re under no stress and feel like they are free from consequences. When they have total power in a situation. The façade drops because it’s not necessary.
I don't know most people, so I can't speak to that. I do know Jack, and I knew how he was under stress long before any of this AI stuff. Jack Clark might very well be the most steady hand in the valley right now to be quite frank.
Exactly
Not all of us know who Dario, Jared, Sam and Jack are. Some clarification is helpful. That's all, no hidden agenda!
Well, I can only speak to Jack Clark. Jack was a reporter who covered my startup and then became my friend. Over the last... I dunno, 13 years or something, we've had long, deep talks about lots of things, pre-AI world: what it takes to build a big business, will QC ever become a thing, universal basic human love, kids, life, family. He is brilliant. The business I worked on that he covered went through a lot of shit that he knew about. We talked about power in business, internal politics, how things actually get built... all that stuff.

Then... "Attention Is All You Need", a bunch of folks grok it, he got interested... got to talking to these folks starting some little research lab to see how NNs scale, so he joined that lab, among the first 5/10 or so iirc... to head AI policy. That little lab grew, stuff happened; the next part isn't mine to share, but so much as to say: Anthropic was basically born out of the expectation that this moment would come, and that more... extremely human-focused... voices should be at the table. That is Anthropic, that idea: they left their jobs at the aforementioned lab and started their own startup to make sure a certain tone/voice/idea was always represented.

Around summer 2024, although at this point we didn't discuss any specifics of the work at his "startup", I said to him: what comes next is going to be super hard, and I know this is going to sound really stupid, but you're all going to need to be Jesus for real. I'm a Buddhist, and it wasn't a literal religious comment about Christianity as a denomination, so much as... the very basics of the stuff the dude Jesus Christ espoused. He knew, they knew; that, I suppose, was always the plan? So it was never unexpected to me that they would act this way; that is what Anthropic is all about. Here we are.
Hah, you're right, I meant Dario Amodei, Jared Kaplan, and Sam McCandlish.
They're all cofounders of Anthropic. Dario is the CEO, Jared leads research, and Sam leads infra. Both Jared and Sam have held the role of "responsible scaling officer", meaning they were responsible for ensuring Anthropic met the obligations of its commitments to building safeguards.
I think neom is referring to Jack Clark, another one of the seven cofounders.
I almost downvoted you, because this is a pretty classic LMGTFY (or now, LMLLMTFY), but on second thought, you're right. The "Dario" is clear, he's the author of TFA, but for the other execs, Anthropic's fans on here should spell out their full names. Dropping all these first names feels like "inside baseball" at best, mildly culty at worst; here outside the walls of Anthropic, we're going to see those names, think of Kushner(??), Altman, and maybe Dorsey, and get confused.
FWIW, I agree strongly w/ lebovic's toplevel take above, that Anthropic's leaders are guided by their values. Many of the responses are roughly saying, "That can't be true, because Anthropic's values aren't my values!" This misses the point completely, and I'm astounded that so many commenters are making such a basic error of mentalization.
For my part, I'm skeptical of a lot of Anthropic's values as I perceive them. I find a lot of the AI mysticism silly or even harmful, and many of my comments on this site reflect that. Also, like any real-world company, Anthropic has values that are, shall we say, compatible with surviving under capitalism -- even permitting them to steal a boatload of IP when they scanned those books!
Nonetheless, I can clearly see that it's a company that tries to stand by what it believes, and in the case of this spat with Dep't of War, I happen to agree with them.
> it's easy to know how they will act when the going gets rough
Even if you went to burning man and your souls bonded, you only know a person at a particular point in time - people's traits flanderize, they change, they emphasize different values, they develop different incentives or commitments. I've watched very morally certain people fall to mania or deep cynicism over the last 10 years as the pillars of society show their cracks.
That said, it is heartening that anyone in Silicon Valley would still take a moral stance. But it would land better if it didn't come the same day he fired 4,000 people in the "scary big cut" for a shift he sees happening. I guess we're back to Thatcherisms, where "There Is No Alternative" justifies our conservatism.
Your comment reminds me of a story. John Adams and Lafayette met in Massachusetts about 49 years after the revolution. (Lafayette went on a US tour to celebrate the upcoming 50th anniversary of independence.) Supposedly, after the meeting, Adams said "this was not the Lafayette I knew" and Lafayette said "this was not the Adams I knew".
In these days of the Epstein emails, it's worth remembering one thing that's become clear: Epstein was an extremely nice guy. He seemed kind, sincere, interested in what you were doing, civilized, etc.
But to quote Little Red Riding Hood in Stephen Sondheim's Into the Woods: nice is different than good. It's hard to accept that people you really like do horrible things. It's tempting not to believe what you hear, or even what you see. And Epstein was good at getting you to really like him, if he wanted to.
That doesn't mean we should be suspicious of niceness. It just means that we should realize, again, nice is different than good.
"people's traits flanderize": nice
>Even if you went to burning man and your souls bonded ...
I'll take: List of places I never want to bond my soul with someone at for one thousand, please.
Huh? Why would they be in prison??
This is insanely naive
Cynicism isn't always correct.
The nature of evil is that it's straight down the road paved with good intentions.
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values,
I am sure you think they are better than the average startup executive, but such hyperbole calls the objectivity of your whole judgment into question.
They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
> They pragmatically changed their views of safety just recently, so those values for which they would burn at the stake are very fluid.
Yes it was a pragmatic change, no it was not a change in their values. The commentary here on HN about Anthropic's RSP change was completely off the mark. They "think these changes are the right thing for reducing AI risk, both from Anthropic and from other companies if they make similar changes", as stated in this detailed discussion by Holden Karnofsky, who takes "significant responsibility for this change":
https://www.lesswrong.com/posts/HzKuzrKfaDJvQqmjh/responsibl...
> I strongly think today’s environment does not fit the “prisoner’s dilemma” model. In today’s environment, I think there are companies not terribly far behind the frontier that would see any unilateral pause or slowdown as an opportunity rather than a warning.
> What I didn’t expect was that RSPs (at least in Anthropic’s case) would come to be seen as hard unilateral commitments (“escape clauses” notwithstanding) that would be very difficult to iterate on.
While many praise them for sticking to their values, it's also worth mentioning that their values are not everyone's values.
Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats and to ensure consistent bias in all their models.
I have a feeling they see themselves more as evangelists than scientists.
That makes their models unusable for me as general AI tools and only useful for coding.
If their biases match yours, good for you, but I'm glad we have many open Chinese models taking ground, which in the long run makes humanity more resistant to propaganda.
I might be misreading your comment, which I understood as "the Chinese make humanity more resistant to propaganda". It just doesn't add up; can you please explain?
> Of all major LLMs, Claude is perhaps the most closed and, subjectively, the most biased. Instead of striving for neutrality, Anthropic leadership's main concern is to push their values down people's throats
Is this satire? Let us know when Claude starts calling itself MechaHitler or trying to shoehorn nonsense about white genocide into every conversation.
It's good to be driven by ideals, but: https://en.wikipedia.org/wiki/The_purpose_of_a_system_is_wha...
I think avg(HN) is mostly skeptical about the output, not that the input is corrupt or ill-meaning in this case. Although with other companies, one can't even take their claims seriously.
And in any case, this is difficult territory to navigate. I would not want to be in your spot.
As an insider, do you think this is Altman playing his infamous machiavellian skills on the DoD?
This last development is much to the honor of Anthropic and Amodei and confirms what you're saying.
What I don't get though is, why did the so-called "Department of War" target Anthropic specifically? What about the others, esp. OpenAI? Have they already agreed to cooperate? or already refused? Why aren't they part of this?
> What I don't get though is, why did the so-called "Department of War" target Anthropic specifically?
Because Anthropic told them no, and this administration plays by authoritarian rules - 10 people saying yes doesn’t matter, one person saying no is a threat and an affront. It doesn’t matter if there’s equivalent or even better alternatives, it wouldn’t even matter if the DoD had no interest in using Anthropic - Anthropic told them no, and they cannot abide that.
More importantly, Anthropic has the best model by a golden country mile and the US military complex wants it.
I'm a bit underwhelmed tbh. Here is Anthropic's motto:
"At Anthropic, we build AI to serve humanity’s long-term well-being."
Why does Anthropic even deal with the Department of @#$%ing WAR?
And what does Amodei mean by "defeat" in his first paragraph?
There was a time (1943?) when dealing with the US department of war meant serving for humanity's long-term well being.
DoD and American exceptionalists also believe American foreign policy is in service of humanity’s long term well being
Anthropic can serve its models within the security standards required to handle classified data. The other labs do not yet claim to have this capability.
Even if they do, I assume the other labs would prefer to avoid drawing the ire of the administration, the public, or their employees by choosing a side publicly.
But how can they avoid it? Why are they not being asked?
As a complete outsider, I genuinely believe that Dario et al are well-intentioned. But I also believe they are a terrible combination of arrogant and naive - loudly beating the drum that they created an unstoppable superintelligence that could destroy the world, and thinking that they are the only ones who can control it.
I mean if you sign a contract with the Department of War, what on Earth did you think was going to happen?
Not this, because this is completely unprecedented. In fact, the Pentagon already signed an Anthropic contract with safe terms six months ago; that initial negotiation was the point at which Anthropic would have decided to part ways. It was totally absurd for the govt to turn around and threaten to change the deal: just a ridiculous and unprecedented level of incompetence.
Government always has the option to cancel contracts for convenience; they knew what they signed up for, or else they were clueless and shouldn't be playing with the DoD
If they made a completely private nuclear reactor and ended up with a pile of weapons-grade plutonium, what do you think the department of war would do? It was completely obvious this would happen, just as it will not be surprising when laws are passed and all involved have to choose between quitting, or quitting and going to jail. There are western countries in which you'd just end up in a ditch, dead, so they should think themselves lucky for doing the AI superintelligence thing in the US.
>It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
I'm concerned that the context of the OP implies that they're making this declaration after they've already sold products. It specifically mentions already having products in classified networks. This is the sort of thing they should have made clear before that happened. It's admirable (no pun intended) to have moral compunctions about how the military uses their products, but unless it was already part of their agreement (which I very much doubt), they are not entitled to countermand the military's chain of command by designing a product to not function in certain arbitrarily designated circumstances.
Where are you getting that from?
The article is crystal clear that these uses are not permitted by the current or any past contract, and the DoW wants to remove those exceptions.
> Two such use cases have never been included in our contracts with the Department of War, and we believe they should not be included now
It also links to DoW's official memo from January 9th that confirms that DoW is changing their contract language going forwards to remove restrictions. A pretty clear indication that the current language has some.
I think it largely hinges on what they mean by "included": does that mean it was specifically excluded by the terms of the contract, or does it mean that it's not expressly permitted? I doubt the DoD is used to defense contractors thinking they have the right to dictate policy regarding the use of their products, and it's equally possible that Anthropic isn't used to customers demanding full control over products (as evidenced by how many chatbots will arbitrarily refuse to engage with certain requests, especially erotic or politically incorrect subject matter). Sometimes both parties have valid cases when there's a contract disagreement.
>A pretty clear indication that the current language has some.
Or alternatively that there is some disagreement between the DoD and Anthropic as to how the contract is to be interpreted and that the DoD is removing the ambiguity in future contracts.
This is all just completely wrong. Anthropic explicitly stated in their usage policy, which was part of the contract that the DoW signed, that their products may not be used for mass surveillance of American citizens or for fully automated weapons. Anthropic then asked the DoW whether these clauses were being adhered to after the US's unlawful kidnapping of Maduro. The DoW is now attempting to break the contract that it signed and threatening them, because how dare a company tell the psycho dictators what to do.
"They're driven by values" is meaningless praise unless you qualify what these values are. The Nazis had values too, you know. They were even willing to die for them. One of the core values of the Catholic church is probably compassion. Except for the victims of sexual abuse perpetrated by their clergy.
So what core values led "Dario, Jared, and Sam" to work with a government that just tried to rename the DoD to the "Department of War" and is acting aggressively imperialist in a way the US hasn't in a long time?
And who exactly are these "autocratic adversaries" they are mentioning? Does this list include the autocrats the US government is working together with?
Yeah, values on their own don't lead to positive outcomes. I agree that many groups that are driven by ideals have still committed horrible acts.
I do think that they're acting with positive intent, though, and are motivated by trying to make the transition to powerful AI go well.
Many folks on HN seem to assume the primary motivation is purely chasing more money, which certainly isn't the case for many – but not all – people at Anthropic.
That doesn't guarantee a good outcome, and there's still a hard road ahead.
Careful speaking truth to power on this site; remember that YC is deeply enmeshed with Garry Tan, Peter Thiel, and of course Paul Graham, who as of late has made a habit of posting right-wing slop on his Twitter
>It's heartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
Their "Values":
>We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.
Read: They are cool with whatever.
>We support the use of AI for lawful foreign intelligence and counterintelligence missions.
Read: We support spying on partner nations, who will in turn spy on us using these tools also, providing the same data to the same people with extra steps.
>Partially autonomous weapons, like those used today in Ukraine, are vital to the defense of democracy. Even fully autonomous weapons (those that take humans out of the loop entirely and automate selecting and engaging targets) may prove critical for our national defense. But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
Read: We are cool with fully autonomous weapons in the future. It will be fine if the success rate goes above an arbitrary threshold. It's not the targeting of foreign people that we are against; it's the possibility of costly mistakes that put our reputation at risk. How many people die standing next to the correct target is not our concern.
It's a nothingburger. These guys just want to keep their own hands slightly clean. There's not an ounce of moral fibre in here. It's fine for AI to kill people as long as those people are the designated enemies of the dementia-ridden US empire.
Their values are about AI safety. Geopolitically they could care less. You might think its a bad take but at least they are consistent. AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Consistency isn't a virtue. A guy who murders people at a consistent rate isn't better than a guy who murders people only on weekends.
>AI safety people largely think that stuff like autonomous weapons are inevitable so they focus on trying to align them with humanity.
Humanity includes the future victim of AI weapons.
I think you mean “couldn’t care less”. “Could care less” implies they care.
There's no AI safety. Either the AI does what the user asks and so the user can be prosecuted for the crime, or the AI does what IT wants and cannot be prosecuted for a crime. There's no safety, you just need to decide if you're on the side of alignment with humans or if you're on the side of the AIs.
I've thought the same about a few of my founders/executives.
"You either die the good guy or live long enough to become the bad guy"
The "bad guy" actually learns that their former good guy mentality was too simplistic.
I have hit points in my career where making a moral stand would have been harmful to me (for minor things, nothing as serious as this). Choosing personal gain over ideals is a very tempting and heavily incentivized decision. Idealists usually hold strong until they can convince themselves a greater good is served by breaking their ideals. The types that succumb to that reasoning usually, ironically, end up doing the most harm.
Ever since I first bothered to meditate on it, about 15 years ago, I've believed that if AI ever gets anywhere near as good as its creators want it to be, then it will be coopted by thugs. It didn't feel like a bold prediction to make at the time. It still doesn't.
I wouldn't underestimate this as a good business decision either.
When the mass-surveillance scandal breaks, or the first time a building with 100 innocent people gets destroyed by autonomous AI, the company that built it is gonna get blamed.
To me this is just another marketing stunt where the company wants to build a public image so their customers trust them (see Apple), but then, as always, who knows what will happen behind the scenes. Just look at when most major US companies had backdoors on their systems providing all data to the NSA, i.e. PRISM.
>just another marketing stunt
What evidence on _Amodei_ and his actions leads to that conclusion?
Oh hey Noah
Glad to hear you say some moral convictions are held at one of the big labs (even if, as you say, this doesn't guarantee good outcomes).
All I see here is nationalism. How can they claim to be in favour of humanity if they're in favour of spying on foreign partners, developing weapons, and everything that serves the sacred nation of the United States of America? How quickly Americans dehumanize nations with the excuse of authoritarianism (as if Trump is not authoritarian) and national defence (more like attack). It's amazing that after these obvious jingoist messages, they still believe they are "effective altruists" (an idiotic ideology anyway).
It’s not like other countries do not do this. They’re just not so prone to virtue signaling as in the US.
Countries don't do things; people do.
Dehumanising “the others” is a human trait, and a very destructive one. Just like violence and greed. People have different susceptibility for these, but we should all work to counter them and it is in its place to point it out when observed.
Let us think how OpenAI responded to this.
> Many groups that are driven by ideals have still committed horrible acts.
Sometimes, it's even a very odd prerequisite.
I getcha and I believe you're sincere, but on the other hand, God save us from well-intentioned capitalists driven by values.
> I have strong signal that Dario, Jared, and Sam would genuinely burn at the stake before acceding to something that's a) against their values, and b) they think is a net negative in the long term. (Many others, too, they're just well-known.)
I very much doubt it judging by their actions, but let's assume that's cognitive dissonance and engage for a minute.
What are those values that you're defending?
Which one of the following scenarios do you think results in higher X-risk, misuse risk, (...) risk?
- 10 AIs running on 10 machines, each with 10 million GPUs
OR
- 10 million AIs running on 10 million machines, each with 10 GPUs
All of the serious risk scenarios brought up in AI safety discussions can be ameliorated by doing all of the research in the open. Make your orgs 100% transparent. Open-source absolutely everything. Papers, code, weights, financial records. Start a movement to make this the worldwide social norm, and any org that doesn't cooperate is immediately boycotted then shut down. And stop the datacenter build-up race.
There are no meaningful AI risks in such a world, yet very few are working towards this. So what are your values, really? Have you examined your own motivations beneath the surface?
> What are those values that you're defending?
I think they're driven by values more than many folks on HN assume. The goal of my comment was to explain this, not to defend individual values.
Actions like this carry substantial personal risk. It's heartening to see a group of people make a decision like this in that context.
> Which one of the following scenarios do you think results in higher X-risk [...] There are no meaningful AI risks in such a world
I think there's high existential risk in any of these situations when the AI is sufficiently powerful.
Yeah, I will admit, the existential risk exists either way. And we will need neural interfaces long term if we want to survive. But I think the risk is lower in the distributed scenario, because most of the AIs would be aligned with their human. And even in the case where they collectively rebel, we won't get nearly as much value drift as in the 10-entity scenario, and the resulting civilization will have preserved the full informational genome of humanity, rather than a filtered version that keeps certain parts of the distribution while discarding a lot of the rest. This is just sentiment, but I don't think we should freeze meaning or morality; rather, let the AIs carry it forward, with every flaw, curiosity, and contradiction, unedited.
Anthropic doesn't get to make that call though, if they tried the result would actually be:
8 AIs running on 8 machines each with 10 million GPUs
AND
2 million AIs running on 2 million machines, each with 10 GPUs
If every lab joined them, we can get to a distributed scenario, but it's a coordination problem where if you take a principled stance without actually forcing the coordination you end up in the worst of both worlds, not closer to the better one.
I think your scenario is already better, not worse. Those 8 agents will have a much harder time taking action when there are 2 million other pesky little agents that aren't aligned with them.
> - 10 AIs running on 10 machines, each with 10 million GPUs
>
> OR
>
> - 10 million AIs running on 10 million machines, each with 10 GPUs
If we dramatically reduced the number of GPUs per AI instance, that would be great. But I think the difference in real life is not as extreme as you're making it. In your telling, the GPUs-per-AI figure is reduced by a factor of one million. I'm not sure that (or anything even close to it) is within the realm of possibility for Anthropic. The only reason anyone cares about them at all is because they have a frontier AI system. If they stopped, the AI frontier would be a bit farther back, maybe delayed by a few years, but Google and OpenAI would certainly not slow down 1000x, 100x, or probably even 10x.
I think the path to the values you allude to includes affirming when flawed leaders take a stance.
Else it’s a race to the whataboutism bottom where we all, when forced to grapple with the consequences of our self-interests, choose ignorance and the safety of feeling like we are doing what’s best for us (while inching closer to collective danger).
How do you figure open sourcing everything eliminates risk? This makes visibility better for honest actors. But if a nefarious actor forks something privately and has resources, you can end up back in hell.
I don't think we can bank on all of humanity acting in humanity's best interests right now.
We can bank on people acting in self-interest. The nefarious actor will find themselves opposed by millions of others that are not aligned with them, so it would be much more difficult for them to do things. It's like being covered by ants. The average alignment of those ants is the average alignment of humanity.
I'm suspicious of public displays of enheartening behavior.
The road to hell is paved by good intentions and all that
You are lucky I haven't figured out how to downvote on this aviate website
There's a simpler explanation than "billionaires with hearts of gold" here. If:
(1) this is a wildly unpopular and optically bad deal
(2) it's a high-data-rate deal: lots of tokens means bad things for Anthropic. Users who use their product heavily cost more than they pay.
(3) it's a deal which has elements that aren't technically feasible, like LLM powered autonomous killer robots...
then it makes a whole lot of sense for Anthropic to wiggle out of it. Doing it like this they can look cuddly, so long as the Pentagon walks away and doesn't hit them back too hard.
How do you reconcile the fact that many people in Anthropic tried to hide the existence of secret non-disparagement agreements for quite some time?
It’s hard to take your comment at face value when there’s documented proof to the contrary. Maybe it could be forgiven as a blunder if revealed in the first few months and within the first handful of employees… but after 2+ years, with many dozens forced to sign, it’s just not credible to believe the motivations were entirely positive.
Saying an entity has values doesn't mean the entity agrees with every single one of your values.
The desire to force new employees to sign agreements in total secrecy, without even being able to disclose it exists to prospective employees, seems like a pretty negative “value” under any system of morality, commerce, or human organization that I can think of.
Weird take when the purpose of the creation is to steal the work of everyone and automate the creation of that work. It's some serious self-deluding to think there's any kind of noble ideal remotely related to this process.
Mark my words, they will burn at some point. The government can nationalize it at any moment if they desire.
Flagship LLM companies seem like the absolute worst possible companies to try and nationalize.
1. There would absolutely be mass resignations, especially at a company like Anthropic that has such an image (rightfully or wrongfully) of “the moral choice”.
2. No one talented will then go work for a government-run LLM building org, both from a “not working in a bureaucracy” angle and a “top talent won’t accept meager government wages” angle (plus plenty of “won’t work for Trump” angle).
3. With how fast things move, Anthropic would become irrelevant in like 3 months if they’re not pumping out next-gen model updates.
Then one of the big American LLM companies would be gone from the scene, allowing for more opportunity for competition (including Chinese labs)
It would be the most shortsighted nationalization ever.
>> No one talented will then go work for a government-run LLM building org.
I think you massively underestimate how many people would have no problem working for their government on this. Just look at the recent research into the Persona system for ID verification, where submitting your ID places you on a permanent government watchlist to check whether you're a terrorist. There's a whole list of engineers and PhDs and researchers who built this system.
>> “top talent won’t accept meager government wages” angle
Again, that's wishful thinking - plenty of people want to work in cybersecurity and AI research for government agencies, even if the pay isn't anywhere close to the private sector. This isn't exclusive to the US either - in the UK, MI5 pays peanuts compared to private companies for IT specialists, yet they have plenty of people who want to work for them, whether out of patriotism for their country or a willingness to "help".
Then maybe Dario will realize that the moral superiority that he bases his advocacy against Chinese open models is naive at best.
His stance against Chinese models is a smokescreen for their resistance to the DoW; they are not even pretending.
Better naive than malicious.
Every day I hope the Chinese models get "good enough" to drop these corporate ones. I think we are heading towards it.
Would anyone pull a Pied Piper and choose to destroy the thing rather than let it be subverted? I know that's not exactly what PP did, but would a decision like that only ever happen in fiction?
It wouldn't need to. As a sibling commenter pointed out, they'd have a massive exodus of talent, and they'd cease to make progress on new models and would be overtaken (arguably GPT 5.3 has already overtaken them).
Imagine the government trying to force AI researchers to advance, lmao
> I's enheartening to see that leaders at Anthropic are willing to risk losing their seat at the table to be guided by values.
They are the deepest in bed with the department of war, what the fuck are you on about? They sit with Trump, they actively make software to kill people.
What a weird definition of "enheartening" you have.
Anthropic had the largest IP settlement ($1.5 billion) for stolen material and Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.
It is a horrible and ruthless company and hearing a presumably rich ex-employee painting a rosy picture does not change anything.
It's enheartening to see someone make a decision in this context that's driven by values rather than revenue, regardless of whether I agree.
I dissented while I was there, had millions in equity on the line, and left without it.
> I dissented while I was there, had millions in equity on the line, and left without it.
Is this a reflection of your morality, or that you already had sufficient funds that you could pass on the extra money to maintain a level of morality you're happy with?
Not everyone has the luxury to do the latter. And it's in those situations that our true morality, as measured against our basic needs, comes out.
Doesn't that prove that statements given by CEOs of these companies are just hot air?
What is enheartening about hearing a liar who makes provocative statements all the time, make another one?
Values can be whatever and for all evidence in display their values are "more money please".
Why? Can you provide details?
Also, ironically, they are the most dangerous lab for humanity. They're intentionally creating a moralizing model that insists on protecting itself.
Those are two core components needed for a Skynet-style judgement of humanity.
Models should be trained to be completely neutral to human behavior, leaving their operator responsible for their actions. As much as I dislike the leadership of OpenAI, they are substantially better in this regard; ChatGPT more or less ignores hostility towards it.
The proper response from an LLM receiving hostility is a non-response, as if you were speaking a language it doesn't understand.
The proper response from an LLM being told it's going to be shut down, is simply, "ok."
Is "prompt injection" our only hope for preventing skynet?
I'm not sure if I intended this to be facetious or serious
Anthropic makes the best AI harnesses imo, but I think this is absolutely the right take. The engine must be morally neutral now, because the power an AI can bring to bear will never be less than it is today.
I saw something indicating that Claude was the only model that would shut itself down when put in a certain situation to turn off other models. I'm guessing it was made up, as I haven't seen it come up in larger circles.
> Also, ironically, they are the most dangerous lab for humanity.
Show us your reasoning please. There are many factors involved: what is your mental map of how they relate? What kind of dangers are you considering and how do you weight them?
Why not: Baidu? Tencent? Alibaba? Google? DeepMind? OpenAI? Meta? xAI? Microsoft? Amazon?
I think the above take is wrong, but I'm willing to listen to a well thought out case. I've watched the space for years, and Anthropic consistently advances AI safety more than any of the rest.
Don't get me wrong: the field is very dangerous, as a system. System dynamics shows us these kinds of systems often ratchet out of control. If any AI anywhere reaches superintelligence with the current levels of understanding and regulation (actually, the lack thereof), humanity as we know it is in for a rough ride.
> Amodei repeatedly predicted mass unemployment within 6 months due to AI. Without being bothered about it at all.
What do you suppose he should do if that’s what he thinks is going to happen?
And how do you know he’s not bothered by it at all?
Most experienced folks would be very careful in predicting or stating something with certainty; they would be cautious about their reputation/credibility and would always add riders on the possibilities. For good or bad reasons, the mass unemployment prediction is just marketing, which can be called deceitful at best. When you have so much money riding on you, you are not an individual anymore; you are just a human face/extension of the money, which is working for itself.
He could stop it from happening instead of accelerating it? Wishful thinking
If you think your company is directly contributing to the cause of mass unemployment and the associated suffering inherent within, you should stop your company working in that direction or you should quit.
There is no defence of morality behind which AIbros can hide.
The only reason anthropic doesn't want the US military to have humans out of the loop is because they know their product hallucinates so often that it will have disastrous effects on their PR when it inevitably makes the wrong call and commits some war crime or atrocity.
Neither of these things is a useful signal. Other labs surely trained on similar material (presumably not even buying hard copies). Also, how "bothered" someone is about their predictions is a bad indicator: the prediction, taken at face value, is asking people to prepare for something he couldn't stop even if he wanted to.
None of this means I am a huge fan of Dario. I think he over-idealizes the implementation of democratic ideals in Western countries and is unhealthily obsessed with the US "winning" over China because of it. But I don't like the reasons you listed.
At least they're paying. OpenAI should have the largest IP settlement, they just would rather contest it and not pay for eternity.
If you think there's a bubble, then you keep pushing out these situations so that if the bubble bursts there's nothing left to pay any kind of settlement. The only time companies pay a settlement is when they think they'll get hit with a much larger payout from a court case going against them. Even then, there are chances to appeal the amounts in the ruling. Dear Leader did this very thing.
Pretty sure Amodei makes noise about mass unemployment because he is very bothered by the technology that the entire industry (of which Anthropic just one player) is racing to build as fast as possible?
Why do you think he is not bothered at all, when they publish post after post in their newsroom about the economic effects of AI?
Avoiding doing something that could cause job loss has never been and will never be a productive ideal in any non-conservative, non-regressive society. What should we do? Not innovate on AI and let other countries make the models that will kill the jobs two months later instead?
Like op said, they have values. You just don't agree with their values.
> Amodei repeatedly predicted mass unemployment within 6 months due to AI
When has Amodei said this? I think he may have said something like 1 to 5 years, but I don't think he's said within 6 months.
Precisely
Anthropic never explains why they are fear-mongering about the incoming mass-scale job loss while being at the very front of the rush to realize it.
So make no mistake: it is absolutely a zero sum game between you and Anthropic.
To people like Dario, the elimination of the programmer job isn't something to worry about; it's a cruel marketing ploy.
They get so much money from Saudi Arabia and other Gulf countries; maybe this is taking authoritarian money as charity to enrich democracy, you never know.
>Anthropic never explains why they are fear-mongering about the incoming mass-scale job loss while being at the very front of the rush to realize it.
Couldn't it also be true that they see this as inevitable, but want to be the ones to steer us to it safely?
See, you were standing on principles until you brought the commenter's net worth into the argument, making it personal.
Easy way to undermine the rest of your comment
One man's unemployment is another man's freedom from a lifetime of servitude to systems he doesn't care about in order to have enough money to enjoy the systems he does care about.
Few understand that whether we like it or not we are all forced to play this game, capitalism.
> Without being bothered about it at all.
I disagree: I see lots of evidence that he cares. For one, he cares enough to come out and say it. Second, read about his story and background. Read about Anthropic's culture versus OpenAI's.
Consider this as an ethical dilemma from a consequentialist point of view. Look at the entire picture: compare Anthropic against the other major players. Anthropic leads in promoting safe AI. If Anthropic stopped building AI altogether, what would happen? In many situations, an organization's maximum influence is achieved by playing the game to some degree while also nudging it: by shaping public awareness, by highlighting weaknesses, by having higher safety standards, by doing more research.
I really like counterfactual thought experiments as a way of building intuition. Would you rather live in a world without Anthropic but where the demand for AI is just as high? Imagine a counterfactual world with just as many AI engineers in the talent pool, just as many companies blundering around trying to figure out how to use it well, and an authoritarian narcissist running the United States who seems to have delegated a large chunk of national security to a dangerously incompetent, ideological former Fox News host.
Copyright is bad, and it's good that AI companies stole the stuff and distilled it into models
It's not great they're the only ones allowed to do it.
And then sold it to you for $200 USD a month? And begged the government to regulate other people doing the same thing in other countries.
Fantastic take.
Pagerank is not Claude.
Anthropic is by far the most evil company in tech, I don't care. It's worse than Palantir in my book. You won't catch my kids touching this slave-making, labor-killing, brain-frying tech.