
Comment by Gigachad

2 days ago

That’s the crazy thing. This whole dispute was over Anthropic saying no to fully automated kill bots. All they required was that a human stay in the loop to press the button.

Anthropic didn't even say "no", it was more of a "not yet, let's work on this".

I really wonder what Palantir's role in all this is, because domestic surveillance sounds exactly like Palantir. Whatever happened during the Maduro raid led to Anthropic asking Palantir questions, which the news reports describe as the snowball that escalated into this.

  • Could you expand on that Anthropic-asking-Palantir connection?

    • This is a summary from Gemini of the news reporting:

      Recent news reports from February 2026 indicate that a significant rift developed between Anthropic and the Department of War (Pentagon) following the capture of Venezuelan President Nicolás Maduro in January 2026.

      According to a report by the Wall Street Journal (referenced by TRT World and others on February 14–15, 2026), the controversy originated when an Anthropic employee contacted a counterpart at Palantir Technologies to inquire about how Claude had been used during the raid. Key Details of the Reports:

      * Discovery of Use: Anthropic reportedly became aware that its AI model, Claude, was used in the classified military operation through its existing partnership with Palantir. This was allegedly the first time an Anthropic model was confirmed to be involved in a high-profile, classified kinetic operation.

      * The Inquest: The Wall Street Journal and Semafor reported that an Anthropic staff member reached out to Palantir to ask for specifics on Claude's role. This inquiry reportedly "triggered the current crisis" because it signaled to the Pentagon that Anthropic was attempting to monitor or place "ad hoc" limits on how its technology was being used in active missions.

      * The Confrontation: During a recent meeting between Anthropic CEO Dario Amodei and Defense Secretary Pete Hegseth, the inquiry to Palantir was a point of contention. Hegseth reportedly claimed Anthropic had raised concerns directly to Palantir about the Caracas raid. Amodei has since denied that the company raised objections to specific operations, characterizing the exchange with Palantir as a routine technical follow-up or a "self-serving characterization" by Palantir.

      * Current Status: This friction has escalated into a public showdown. Today, Friday, February 27, 2026, reports indicate that the Trump administration has officially designated Anthropic a "supply chain risk" and ordered federal agencies to cease using Claude after the company refused to remove guardrails related to autonomous weaponry and mass domestic surveillance.

      The primary reporting you are likely recalling comes from The Wall Street Journal (approx. February 14, 2026) and was later expanded upon by Semafor regarding the specific communications between Anthropic and Palantir employees.

They also said no to fully automated AI domestic surveillance. I suppose non-US citizens like me are screwed but that's at least some small comfort for the natives. FVEY will just spy on each other and share but at least someone tried.

There were two red lines, as I understand it -- first, automated kill bots, and second, mass surveillance.

  • Mass domestic surveillance of American citizens (they were OK with surveillance of other countries).

  • Neither of those red lines should be controversial. What American citizen thinks terminators and Big Brother are desirable?

    • The ones who still assume Big Brother will be spying on and killing the people they hate. Trump openly campaigned on getting revenge on his enemies, so I can only assume his supporters want this. The danger, of course, is if/when the leopards eat their faces.

  • I guess the problem for Trump is that if he orders the army to gun down protesters, there’s a good chance they will refuse to do it, while a bot can just be prompted to go ahead.

    • Crazy to think how Deus Ex: Human Revolution might have gotten the timeline right. Except there's no human augmentation, and there won't be citizens fighting four-legged robot police in 2027 Detroit with molotov cocktails: they'll only hear a disconcerting buzz coming at them with ludicrous speed before eternal darkness.

I think it’s far more likely this is about the other sticking point: using it to spy on US citizens.

If we were able to give the Ukrainians fully automated kill bots, and those kill bots enabled Ukraine to swiftly expel the Russians from their territories, would that not be a good thing? Or would you rather the meat grinder continue to destroy Ukraine's young men to satisfy some moral purity threshold?

If we could give Taiwan killbots that would ensure China could never invade, or at least could never occupy Taiwan, would that be good or bad? I have a feeling I know what the Taiwanese would say.

While we're at it, should we also strip out all the machine learning/AI driven targeting systems from weapons? We might feel good about it, but I would bet my life savings that our future adversaries will not do the same.

  • You seem to see everything from a binary perspective. China bad, Taiwan good. Russia bad, Ukraine good.

    The world is more nuanced than that.

    But to answer your question. No we should not give anyone automatic kill bots. Automatic kill bots shouldn’t even be a thing.

    • Yes, I think Russia's invasion of Ukraine is quite clearly a binary Russia=bad, Ukraine=good. Same for the impending Chinese invasion of Taiwan. Perhaps you could explain the nuances under which Russia was the good guy? Better yet, maybe you could explain it to the Ukrainians who have been displaced, or the family members of those who have been killed, or the soldiers who have been permanently maimed?

      Whether you or I like it or not, automatic kill bots will be a thing. It will only be a question of which countries have them and which do not.


  • Rephrasing your "inquiry" to highlight how short-sighted this is:

    If giving the Ukrainians nuclear warheads could help them defeat Russia, then isn't that good? Wouldn't using nuclear warheads to eradicate Russia end the war almost immediately?

    Like, why are we even bothering with automated killing robots? That's stupid. We already have nukes, and they're the ultimate weapon, so just do that.

    Do you not see how this greedy line of logic could easily lead to the destruction of not just the US, but the entire human race?

    This is LITERALLY the plot line of Terminator. Literally. "Hey guys let's build skynet, isn't that a good idea??"

    Like... do you not hear yourself? What is not clicking here?

    • > This is LITERALLY the plot line of Terminator. Literally.

      No, it's not. Skynet was a recursively self improving ASI. You are conflating an autokill bot and, apparently, an ASI that can embody and replicate itself.

      > If giving the Ukrainians nuclear warheads could help them defeat Russia, then isn't that good?

      Surely, you can recognize how an autokill bot and a thermonuclear weapon are different, right? These are categorically different concepts. What's more, Russia is a nuclear-armed opponent with, reportedly, dead man's hand systems that would launch its entire nuclear arsenal even if its command structure were destroyed in a nuclear first strike.

      I'll just repeat the basic point here: autokill bots are coming. Whether any of us like it or not. Just like nuclear weapons. If I could wave a magic wand and eliminate all weapons of mass destruction in the world, I would. But that's not reality. So, walk me through how you think this plays out if we don't develop them, but Russia, China, etc. do?

      I can't think of a more clear cut case of moral, justified deployment of autokill bots than to aid Ukraine in expelling the Russian invaders.


  • The thing about building fully automated kill bots is that then you've built fully automated kill bots.

    • Fully automated kill bots are coming, whether any of us like it or not. The question is, which militaries will have them, and which militaries will be sitting ducks? China is pursuing autonomous weapons at full speed.

      Personally, I think it'd be great to have the Anthropic people at the table in the creation of such horrors, if only to help curb the excesses and incompetencies of other potential offerings.

  • Young Ukrainian man (24 y.o.) here. Living and working in the police 30 kilometres away from the actual frontline.

    No, thanks, we don't need those "fully automated kill bots". There's absolutely no guarantee that they wouldn't kill the operator (I mean, the one who directs them) or a human ally.

    We're pretty much fine with drone technology we have.

    But for me personally, that's not the most important point. What is more important - and what almost no one in the Western countries seems to realise (no offence, but many westerners seem to be kind of binary-minded: it's either 0xFFFFFF or 0x000000, no middle ground at all) - is that on the Russian side, the soldiers are not "fully automated kill bots" either. Sure, there are a lot of... let's say - war criminals. Yes, for sure. But en masse they are the same young men that you can see on the Ukrainian side. Moreover, many people in Ukraine have relatives in Russia, and there have already been cases where two siblings were in different armies, literally fighting each other.

    So in my opinion, "fully automated kill bots" are not an option here. At least unless you deploy them in Moscow and St. Petersburg to neutralize all of the Russian elites, military command, and other decision-makers of the current regime.