Comment by 6Az4Mj4D
7 hours ago
Leaving autonomous weapons aside, how does Anthropic justify signing up with surveillance company Palantir while now raising concerns about the same surveillance with the DoD?
It doesn't match.
This is very easy to explain. Anthropic outlines some limitations in their terms of service. Palantir accepted those terms. The DoD did not.
OpenAI claims their terms of service for the DoD contain the same limitations as Anthropic's proposed service agreement. Anthropic claims that this is untrue.
Now given that (a) the DoD terminated their deal with Anthropic, (b) stated that they terminated because Anthropic refused to modify their terms of service, and (c) then signed a deal with OpenAI, I am inclined to believe that there is in fact a substantial difference between the terms of service offered by Anthropic and OpenAI.
Yeah, it never made sense that Sam immediately said they had the same constraints and the DoW immediately agreed with him.
From what I can see, OpenAI’s terms basically say “need to comply with the law”, which provides them with plenty of wiggle room with executive orders and whatnot.
I think they said they will comply with the law and Pentagon policies.
And:
1. there is no law currently prohibiting autonomous weapons platforms
2. the Pentagon can create policies overnight allowing all kinds of stuff
So yeah, OpenAI is going to make a lot of money from actually doing what the military asks from them.
Are you sure about that? All the information I've seen suggests that the DoD has been using Anthropic's models through Palantir.
My understanding is that Anthropic requested visibility and a say into how their models were being used for classified tasks, while the DoD wanted to expand the scope of those tasks into areas that Anthropic found objectionable. Both of those proposals were unacceptable for the other side.
Wasn’t the trigger for all this what happened with Maduro earlier this year? From what I understood, Anthropic wasn’t very happy with how its systems were being used by the DoW through Palantir, which caused this whole feud.
“We’ve actually held our red lines with integrity rather than colluding with them to produce ‘safety theater’ for the benefit of employees (which, I absolutely swear to you, is what literally everyone at [the Pentagon], Palantir, our political consultants, etc, assumed was the problem we were trying to solve),” Amodei reportedly wrote.
“The real reasons [the Pentagon] and the Trump admin do not like us is that we haven’t donated to Trump (while OpenAI/Greg have donated a lot),” he wrote, referring to Greg Brockman, OpenAI’s president, who gave a Pac supporting Trump $25m in conjunction with his wife.
https://www.theguardian.com/technology/2026/mar/04/sam-altma...
> we haven’t donated to Trump
Another reason is that Sam Altman has been willing to "play ball", e.g. by providing high-profile (though meaningless) announcements Trump likes to tout as successes. For example:
> The Stargate AI data center project worth $500 billion, announced by US President Donald Trump in January 2025, is reportedly running into serious trouble.
> More than a year after the announcement, the joint venture between OpenAI, Oracle, and Softbank hasn't hired any staff and isn't actively developing any data centers, The Information reports, citing three people involved in the "shelved idea."
https://the-decoder.com/stargates-500-billion-ai-infrastruct...
Sam donated $1M to Trump's inaugural fund. Dario did not.
http://magamoney.fyi/executives/samuel-h-altman/
> signed up with surveillance company Palantir
Just to nitpick, Palantir isn't doing surveillance like Flock. They do data integration the way IBM does, under contract for governments. Some data pipelines include law enforcement surveillance data, which gets integrated with other software/databases to help police analyze it. There's no evidence they are collecting it themselves, despite recent headlines. It's a relatively minor but important distinction IMO.
https://www.wired.com/story/palantir-what-the-company-does/
They are providing the software to do surveillance. They are definitely bad actors; you can dance around this all you want, but they are in it.
It is an important distinction.
It’s the same with Facebook selling user data. Neither selling your data, like the carriers do, nor selling the ability to target you with your data, like Facebook does, is very nice. But legally they are separate things that need to be regulated differently. As is the case with Flock and Palantir.
Nice assertion. Please provide citations, substance, or anything other than “you’re wrong definitely.”
Their data integration and sale allows for the government to surveil citizens without probable cause or warrants.
The solution is still no different than a decade ago. Far stricter laws on intelligence, federal and local police surveillance, and a reduction in executive power which oversteps checks and balances.
There will always be another IT company willing to do integrations even if Palantir dies. Software isn’t going away.
Sure, but it's not as if the DoD was planning on using Anthropic to _collect_ the data either? I assume that the hypothetical DoD use case Anthropic shied away from dealt with the processing of surveillance data, just like what Palantir does.
https://www.washingtonpost.com/technology/2026/03/04/anthrop...
> The military’s Maven Smart System, which is built by data mining company Palantir, is generating insights from an astonishing amount of classified data from satellites, surveillance and other intelligence, helping provide real-time targeting and target prioritization to military operations in Iran, according to three people familiar with the system...
> As planning for a potential strike in Iran was underway, Maven, powered by Claude, suggested hundreds of targets, issued precise location coordinates, and prioritized those targets according to importance, said two of the people.
It's funny you'd pick IBM:
https://en.wikipedia.org/wiki/IBM_and_the_Holocaust
Though, I guess IBM did get away with lots of stuff that... Actually, did any supply companies in the WWII German war machine actually get in trouble for war crimes, or did they just go after officers and the people actually working in the camps?
The company selling punchcards that were used for logistics was apparently fine. What about the people making the gas canisters, or supplying plumbing fixtures? The plumbers? Where's the line?
Wondering, since this is increasingly becoming a current events question instead of an academic concern.
There were the so-called Subsequent Nuremberg Trials (12 of them). Among them were the trials of IG Farben (gas chamber supplies, Zyklon B) and Krupp (armament of the German military forces in preparation for an aggressive war).
I'm under no illusion that all the perpetrators of war crimes were held accountable, but it's not a bad model.
I think a company which provides a sensor fusion dragnet for a government-run mass domestic civilian surveillance system is at least as culpable (and odious) as the ones supplying the data.
> They do data integration the way IBM does under contract for the governments
Good thing IBM's data integration was never used for ill!
Oh, wait https://en.wikipedia.org/wiki/IBM_and_World_War_II
Oracle started by building databases for the CIA
Basically it’s glorified Excel.
Take it out on the database purveyors, not Palantir.
Sure, Palantir is just one tool in the chain, and it's a lot more boring than people make it out to be.
On the other hand, a comment like yours does smack a bit of "Once the rockets are up, who cares where they come down."
It might match. The red line was domestic surveillance. You don't know what deal they had. Giving Anthropic the benefit of the doubt, perhaps Palantir said "Deal, we won't use your tool domestically".
Every single time the box is flipped over, whats inside is "more domestic surveillance". Who in their right mind would give the benefit of the doubt?
Well, I think a company that stood their ground knowing full well they'd be designated a SCR deserves the benefit of the doubt.
Whether or not you agree that it truly aligns with their stated values, in their partnership with Palantir (making Claude available within its AI platform) they requested consistent restrictions:
> “[We will] tailor use restrictions to the mission and legal authorities of a government entity” based on factors such as “the extent of the agency’s willingness to engage in ongoing dialogue,” Anthropic says in its terms. The terms, it notes, do not apply to AI systems it considers to “substantially increase the risk of catastrophic misuse,” show “low-level autonomous capabilities,” or that can be used for disinformation campaigns, the design or deployment of weapons, censorship, domestic surveillance, and malicious cyber operations.
Source: https://techcrunch.com/2024/11/07/anthropic-teams-up-with-pa...
Why do you assume the contract with Palantir doesn't have similar terms? Weird assumption.
The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.
> The moral disposition of the Anthropic leaders doesn't matter because they don't own the company. Investors won't idly watch them decimate billions in ROI by alienating the largest institutional customers on the planet.
Anthropic is a Public Benefit Corporation chartered in Delaware, with an expressed commitment to "the responsible development and maintenance of advanced AI for the long-term benefit of humanity."
So in theory (IANAL), investors can't easily bully Anthropic into abandoning their mission statement unless they can convince a court that Anthropic deliberately aimed to prioritize the cause over profit.
Thank you. Anthropic also is culpable in the illegal war against Iran that started with the bombing and murder of an entire girls school.
https://www.cbsnews.com/news/anthropic-claude-ai-iran-war-u-...
If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.
They've done lots wrong and maybe they shouldn't have gotten in bed with the military to begin with, but this illegal war is not theirs. It rests squarely with the President who declared it. (And with the military officers who are going along with it despite the violation of international law.)
> If they're doing it against the terms of service (and publicly so), I can't pin that one on Anthropic.
Anthropic claim that superintelligence is coming, that unaligned AI is an existential threat to humanity, and they are the only ones responsible enough to control it.
If that's your world view, why would you be willing to accept someone's word that they'll only Do Good Things with it? And not just "someone", someone with access to the world's most powerful nuclear arsenal? A contract is meaningless if the world gets obliterated in nuclear war.
I don't think any AI company should get in bed with the military. That being said, if the terms of service have been violated, the account should be canceled.
It's just marketing.
I wish people like you would actually talk to people at Anthropic, maybe interview with the company, actually engage with the real humans there before making blithe comments like this.
Seriously, you're on HN, you can't possibly be that many degrees removed from someone at the company.
In any case it's absolutely not "just marketing", it suffuses their whole culture, and it is genuine.
They are all guilty.
"The law" is the contract. The Pentagon agreed to terms of service. The law is not on the Pentagon's side. The contract did not change; what changed is the Pentagon breaking the contract.
Perhaps you think the law shouldn't allow such a contract; that's a valid position. But that's not what the law currently says.
I'm saying they shouldn't write into their contract that they have some veto power over how their software is used if it's within the law of the land (i.e. laws written by Congress).
Is that clearer?
> if it's within the law.
The current administration has been caught flouting court orders in dozens of cases, to the point that courts are no longer even granting them the assumption that they’re operating in good faith.
I can think of a million good reasons not to give these people the tools to implement automated totalitarianism. Your proposal that they simply refuse service to the government entirely would be ideal.
Yes we obv need large corporations to exert some kind of control over our elected officials.
The government works for the people, not the other way around. For the people, by the people and of the people.
If you don't question people in positions of power they will just do whatever they want. Democracy is sustained by action, not by acquiescence.
And with the lawlessness of this administration, I would make it a point to hold them accountable. I'm not going to let them do mass surveillance when they decide to change the law.
Are you naive, or just ignoring what is going on?
I want people to question people in power. That's kind of the point of democracy. But it's good to remember corporations aren't people :-)
It’s a service. Democracy doesn’t give the government the right to force you to perform a service.
The technology isn’t suitable for the purposes the regime wants.
They can choose to sell to government agencies or not. But selling to them and then trying to retain some veto power is wrong. So it sounds like we're in agreement.
I would like western Democratic powers to have the most advanced technology personally but you may disagree.
That is crazy. You are suggesting that corporations should have no power over their own IP.
Are you really saying that if Anthropic sells a limited version of their product to Palantir at a certain price, the government should be able to demand access to an unlimited version of Anthropic's product for free because they are a customer of Palantir?
That would effectively mean the government gets an unlimited license to all IP of companies that do business with government suppliers... that would be terrible.
Imagine if a gun manufacturer sold weapons to the military but said "don't use them in unjustified wars, as we deem fit". That seems wrong, as we don't want gun manufacturers setting our foreign policy. Choosing not to sell to them, sure, but this isn't "ownership of IP". If the feds were to grab the weights and torrent them out, sure, that would be an IP issue. But this ain't that.
This exchange between Anthropic and OpenAI feels a lot like theater. If I were really trying to stop abuses, I wouldn't go out of my way to talk about it. The "public sees us as the heroes" bullshit feels like a smoke screen. I'd make one statement, keep silent, and let the public do the math without getting involved.