Comment by qaid
10 hours ago
I was reading halfway thru and one line struck a nerve with me:
> But today, frontier AI systems are simply not reliable enough to power fully autonomous weapons.
So not today, but the door is open for this after AI systems have gathered enough "training data"?
Then I re-read the previous paragraph and realized it's specifically only criticizing
> AI-driven domestic mass surveillance
And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
A real shame. I thought "Anthropic" was about being concerned about humans, and not "My people" vs. "Your people." But I suppose I should have expected all of this from a public statement about discussions with the Department of War
See also: OpenAI being open, Democratic People's Republic of Korea being democratic and peoples-first[0].
[0] https://tvtropes.org/pmwiki/pmwiki.php/Main/PeoplesRepublicO...
Elon, is that you?
Is GP wrong?
Also, as someone from a country that has been attacked and dragged into war, I would prefer machines fighting (and being destroyed autonomously) to my people dying, or people from any nation that came to help.
That's as Anthropic as it gets if your nerve expands a little bit further than your HOA.
>> I would prefer machines fighting (and being destroyed autonomously) rather than my people dying
What makes you think in any war the machines would stop at just fighting other machines?
The more likely scenario will be "your people" dying in a war against machines that don't tend to disregard illegal orders.
I'm glad I'm not alone in finding the specific emphasis on drawing the line at domestic surveillance a bit odd. Later they also state they are against "provid[ing] a product that puts America’s warfighters and civilians at risk" (emphasis mine). Either way I'm glad they have lines at all, but it doesn't come across as particularly reassuring for people in places the US targets (wedding hosts and guests for example).
See also: the entire history of Silicon Valley
When Google Met Wikileaks is a fun read; billionaire CEOs love to take America's side.
I think it's phrased just fine. It's not up to Dario to try to make absolute statements about the future.
How about the present and his personal beliefs?
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries."
This reads like his objection is not on "autocratic", but on "adversaries". Autocratic friends & family are cool with him. A clear wink to a certain administration with autocratic tendencies.
Some people can’t help themselves to read this like a Ouija board.
That all works right up until the United States becomes autocratic and that process is well underway.
So yes, the second part of your comment is what is going to come back to haunt them. The road to hell is paved with the best intentions.
Western liberal ideals are better than the opposite. It is misanthropic to build autocratic societies.
> It's not up to Dario to try to make absolute statements about the future.
That's insane to say, given that he's literally acting in the public sphere as the Mouth of Sauron for how AI will grow so effective as to destroy almost everyone's jobs, and AGI will take over our society and kill us all.
All I'm trying to say is that nobody can predict the future, and making statements that pretend something will be a certain way forever is just silly. It's OK for him to add this qualifier.
This doesn’t read to me like it was personally written by one person. It’s not Dario we should read this as being written by, it’s Anthropic as an entity.
He does it all the time when it helps selling his products though, strange
He does it all the time.
And yet he’s quite happy to make just that kind of statement when it’s meant to drum up his own product for investors
He’s one of the most influential people when it comes to what future we’ll have. Yes, it’s up to him.
I think he's more pragmatic than that.
What a shame, indeed. Chinese and Russians would never do something like that and hurt either their or your people, too
I think it goes without saying that once the systems are reliable, fully autonomous weapons will be unleashed on the battlefield. But they have to have safeguards to ensure that they don't turn on friendly forces and only kill the enemy. What Anthropic is saying is that right now they can't provide those assurances. When they can, I suspect those restrictions will be relaxed.
The US military can't even offer those assurances itself today. I tried to look up the last incident of friendly fire; turns out it was a couple of hours ago, when the US military shot down a DHS drone in Texas.
Humans malfunction all the time, that is why there is a push to replace them with more reliable hardware.
What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?
> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?
Yes. Absolutely.
And what? Get nationalized? Get labelled as terrorists?
The US system doesn't empower a company to say no. It should though.
Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?
> I absolutely don’t want tech companies to use the money I pay them to harm people.
Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.
I am unaware of any tech company that directly does physical warfare on the battlefield against humans.
Time to stop paying your taxes. :P
Because it's painfully short-sighted, or maliciously ignorant.
I'd prefer companies not help the military develop the most powerful weapons possible given we're in the age of WMDs, have already had two devastating world wars and a nuclear arms race that puts humanity under permanent risk.
There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter half of the 20th century. (Note that WWI by itself wasn’t sufficient to prevent WWII!)
You can take issue with that argument if you want but it’s unconvincing not to address it.
So would you have preferred the Nazis to develop the most powerful weapons and win the world war (which they were trying to do)?
Well, if they hadn't stated that they were that far in line with the administration's ideals, they would likely already be fully blacklisted as enemies of the state. Whether or not they agree with what they're saying, they're walking on eggshells.
Is it seriously called the department of war now? Did they change that from DoD?
illegally, but yes
Fully autonomous weapons are a danger even if we can reliably make it happen with or without AI.
It essentially becomes a computer against a human. And if and when such software is developed, who's going to stop it from reaching the masses? Imagine viruses/malware that can take a life.
I'm shocked that very few people are even bothered by this, and it's really concerning that technology developed for human welfare could be turned totally against humans.
They also posted on Instagram saying autonomous killing would hurt Americans. So non American people don’t matter?
As a practical matter, it makes zero sense for a tech company with perhaps laudable goals and concerns about humanity to have any control whatsoever over the use of a product it sells for war. You don't like what it could potentially be used for, or are having second thoughts about being involved in war making at all, don't sell it, which appears to be Amodei's position now. That's perhaps laudable, from a certain point of view.
On the other hand, your position is at best misguided and at worst hopelessly naive. The probability that adversaries of the United States, potential or not, are having these discussions about AI release authority and HITL kill chains is basically zero, other than doing so at a technical level so they get them right. We're over the event horizon already, and into some very harsh and brutal game theory.
They didn’t sell it no strings attached, they sold it with explicit restrictions in their contract with DoW and the DoW agreed to that contract. Their mistake was assuming they operate in a country where rule of law is respected, clearly not the case anymore given the 1000s of violations in the last year.
Contracts evolve, don't be naive. If you invent the Giga Missile and the government buys it for its war machine, and then you invent the God Missile right after, the government is going to come back again to renegotiate terms.
You gotta keep in mind that the primary goal of this statement is to avert the invocation of the defense production act.
He is trying to win sympathies even (or especially?) among nationalist hawks.
I said exactly this a few days ago elsewhere. It’s disappointing that they (and often other American companies) seem to restrict their “respect” and morals to Americans only. Or maybe it’s just semantics or context, because the topic at hand is about Americans? I don’t know, but it gives “my people are more important than your people”, exactly as you said in your last paragraph
They’re being used today by the military. So, they are never going to be against mass surveillance. They can scope that to be domestic mass surveillance though.
We already have traditional CV algorithms and control systems that can reliably power autonomous weapons systems and they are more deterministic and reliable than "AI" or LLMs.
But then a person can be blamed for the outcome. We can't have that!
> the door is open for this after AI systems have gathered enough "training data"?
Sounds more like the door is open for this once reliability targets are met.
I don't think that's unreasonable. Hardware and regular software also have their own reliability limitations, not to mention the meatsacks behind the joystick.
Unfortunately I think the writing is clearly on the wall. Fully autonomous weapons are coming soon
And that's the end of democracy. One of the safeguards of democracy is a military that is trained not to turn against the citizens. Once a government has fully autonomous weapons, it's game over. They can point those weapons at the populace at the flip of a switch.
The parallel for this is when Rome changed from only recruiting citizens for their army to recruiting anyone who could pass the physical. They had no choice, and the new armies were much better at fighting. But the soldiers also didn’t have the same stake in the republic that voting citizens did.
Citizens were loyal to Rome. Soldiers were loyal to their commanders. If commanders wanted to launch rebellions, the soldiers would likely support them.
A commander who commands the loyalty of legions by convincing a handful of drone operators would be very dangerous for democracy.
The original Terminator movie doesn’t seem so far fetched now (minus the time travel).
Right - for the same reasons a Waymo is safer than a human-driven car, an autonomous fighter drone will ultimately be deadlier than a human-flown fighter jet. I would like to forestall that day as long as possible but saying "no autonomous weapons ever" isn't very realistic right now.
If they had access to them in Ukraine, both sides would already be using them I expect. Right now jamming of drones is a huge obstacle. One way it's dealt with is to run literal wired drones with massive spools of cable strung out behind them. A fully autonomous drone would be a significant advantage in this environment.
I'm not making a values judgment here, just saying that they will absolutely be used in war as soon as it's feasible to do so. The only exception I could see is if the world managed to come together and sign a treaty explicitly banning the use of autonomous weapons, but it's hard for me to see that happening in the near future.
Edit: come to think of it, you could argue a landmine is a fully autonomous weapon already.
Hah, I had the same realization about landmines. Along with the other commenter, really it would be better to add intelligence to these autonomous systems to limit the nastiness of the currently-deployed systems. If a landmine could distinguish between a real target and an innocent civilian 50 years later, it'd be a lot better.
It's only Anthropic with their current models saying no. Fully autonomous weapons have been created, deployed, and have been operational for a long time already. The only holdout I've ever heard of is for the weapons that target humans.
Honestly, even landmines could easily be considered fully autonomous weapons and they don't care if you're human or not.
There are also good reasons for a lot of countries banning mines. https://en.wikipedia.org/wiki/Ottawa_Treaty
Notably USA is not one of those signatories.
The Gandhi of the corporate world is yet to be found
Considering he slept naked with his grandniece (he was in his 70s, she was 17), I'd say there are a lot of them in the corporate world. Though probably more in politics.
I think I'm paraphrasing a Hacker News discussion I saw about this before, but the problem with Gandhi was that he was so focused on idealism that it somehow translated into a utilitarian justification for this, which is of course a very despicable and vile thing for him to do.
There have been quite a lot of discussions about Gandhi himself here on Hacker News as well.
Gandhi became the face of the satyagraha movement, considering he started it, but that movement only had value because many important people joined in.
Here is a quote from Martin Luther King Jr. about satyagraha that I found on Wikipedia:
> Like most people, I had heard of Gandhi, but I had never studied him seriously. As I read I became deeply fascinated by his campaigns of nonviolent resistance. I was particularly moved by his Salt March to the Sea and his numerous fasts. The whole concept of Satyagraha (Satya is truth which equals love, and agraha is force; Satyagraha, therefore, means truth force or love force) was profoundly significant to me. As I delved deeper into the philosophy of Gandhi, my skepticism concerning the power of love gradually diminished, and I came to see for the first time its potency in the area of social reform. ... It was in this Gandhian emphasis on love and nonviolence that I discovered the method for social reform that I had been seeking.[25]
It would be better to wish for more satyagrahis to be named, but I don't think Western media would catch on to it.
Ghaffar Khan, Sarojini Naidu, and Vinoba Bhave are all people who I think led simple lives while coming from different religions, castes, and genders, all while adhering to the philosophy of satyagraha.
That being said, satyagraha might not work in the current context, because Britain was only able to rule India with the help of Indians, which is why the satyagraha movement was so successful. But if the government can get its hands on autonomous drones capable of killing civilians, and on mass surveillance, then satyagraha might not work as well in the near future
(the two things Anthropic is refusing to provide to the DOD, per the article itself)
I don't think Anthropic is a great company, and it certainly has its flaws, but I do think it is very admirable of them to stand firm even when the government is essentially saying to follow it or it will literally kill the business with the 3-4 national security laws it is proposing to invoke against Anthropic.
I do urge people to say satyagraha, or to mention other peaceful protests, because whenever people talk about Gandhi now this discussion is bound to come up, which at times really distracts from the original point. It took the collective efforts and blood of so many Indian leaders for India to gain independence.
Enemies will have AI powered weapons. We need to be at the cutting edge of capability.
I don't know where you get your info from, but Anthropic has only refused to let autonomous AI kill humans without anyone pressing a button / bearing some liability, and to enable mass surveillance.
I don't think that your point makes sense especially when you can have enemies within your own administration/country who can use the same weapons to hunt you.
I don't think the people operating the drones are a bottleneck for a war between your country and your enemies; rather, they're a bottleneck for a war between your country and its own people. The bottleneck is one of morality: you would find fewer people willing to commit the same atrocities against their own community, but terminator-style AI is an orphan with no community, i.e. it has no problem following any orders from the government. And THIS is the core of the argument, because Anthropic has safeguards to reject such orders, and the DOD is threatening to essentially kill the company by invoking many laws to force it to comply.
> And neither denounces partially autonomous mass surveillance nor closes the door on AI-driven foreign mass surveillance
You have to be deliberately naive in a world where five eyes exists to somehow believe that "foreign" mass surveillance won't be used domestically.
The sentence prior explicitly says this. There’s no dishonesty here.
“Even fully autonomous weapons (…) may prove critical for our national defense”
FWIW there’s simply no way around this in the end. If your enemy even attempts to create such weapons, the only possible defensive counter is weapons of a similar nature.
To stop a bullet flying at you you need a shield not another bullet.
So AI systems are not reliable enough to power fully autonomous weapons but they are reliable enough to end all white-collar work in the next 12 months?
Odd.
do you really need to be told there is a difference in 'magnitude of importance' between the decision to send out an office memo and the decision to strike a building with ordnance?
a lot of white collar jobs see no decision more important than a few hours of revenue. that's the difference: you can afford to fuck up in that environment.
I know what point you are trying to make, but these decisions are functionally equivalent.
Striking a building with ordnance (indirect fires, dropped from fixed wing, doesn't really matter) involves some discernment about utility, secondary effects, probability of accomplishing a given goal, and so on. Writing an office memo (a good one at least) involves the same kind of analysis. I know your point is that "people will die" when you blow up a building, but the parameters are really quite similar.
They’re not saying “AI can replace some menial white collar tasks”, they’re saying AI can replace all white-collar work.
Yes, if you fuck up some white collar work, people will die. It’s irresponsible.
Shh! there's a lot of money riding on this bet, ahem.
Anthropic doesn't forbid the DoW from using the models for foreign surveillance. It's not about harming others, it's about doing what is best for humanity in the long run, all things considered. I personally do not believe that foreign surveillance is automatically harmful, and I'm fine with our military doing it.
If we are talking about what's best for humanity in the long run.. thinking about human values in general, what makes American citizens uniquely deserving of privacy rights, in ways that citizens of other countries are not?
Snowden revealed that every single call in the Bahamas was being monitored by the NSA [1]. That was in 2013. How would this be any worse if it were US citizens instead?
(Note, I myself am not an US citizen)
Anyway, regardless of that, the established practice is for the five eyes countries to spy on each other and share their results. This means that the UK can spy on US citizens, the US can spy on UK citizens, and through intelligence sharing they effectively spy on their own citizens. That's what supporting "foreign surveillance" will buy you. That was also revealed in 2013 by Snowden [2]
[1] https://theintercept.com/2014/05/19/data-pirates-caribbean-n...
[2] https://www.theguardian.com/world/2013/dec/02/nsa-files-spyi...
This isn't about privacy rights, it's about war
I'm not suggesting that Anthropics models should be used by foreign governments for domestic surveillance
I'm not worried about foreign governments spying on Americans, as long as the US government is aligned. I'm worried about my own government becoming misaligned
If the United States is ever, in the future, at war with an adversary using truly autonomous and functional killing machines, you may find yourself praying that we have our own rather than praying human nature changes. Of course, we must strive for this to never happen, but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
> but carrying a huge stick seems to be the most effective way to reduce human death and suffering from armed conflict.
Citation needed. I believe there's at least some research showing the opposite: military buildup leads to a higher risk of military conflict