Comment by TaupeRanger

9 hours ago

What else would you expect? The military is obviously going to develop the most powerful systems they can. Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”? What if Anthropic ends up developing the safest, most cost effective systems for that purpose?

> Do you want a tech company to say “the military can never use our stuff for autonomous systems forever, the end”?

Yes. Absolutely.

  • And what? Get nationalized? Get labelled as terrorists?

    The US system doesn't empower a company to say no. It should, though.

    • You, I, or a company don't need the system's permission to say "no", though. Just say it. I would certainly choose being called a "terrorist" in front of the class over helping to deploy weapons, let alone autonomous ones.

      You own nothing but your opinion. (No offense to personal property aficionados)


Yes, I absolutely don’t want tech companies to use the money I pay them to harm people. How is that remotely controversial?

  • > I absolutely don’t want tech companies to use the money I pay them to harm people.

    Just one example of many, but the companies that make the CPUs you and all of us use every day also supply militaries.

    I am unaware of any tech company that directly does physical warfare on the battlefield against humans.

    • Another example: the companies that make drinkable water also supply militaries. But there might be a difference between supplying drinking water and making AI killing machines.


I'd prefer companies not help the military develop the most powerful weapons possible, given that we're in the age of WMDs, have already had two devastating world wars, and live with a nuclear arms race that puts humanity under permanent threat.

  • There is an extremely straightforward argument that WMDs are precisely what prevented the outbreak of direct warfare between major powers in the latter half of the 20th century. (Note that WWI by itself wasn’t sufficient to prevent WWII!)

    You can take issue with that argument if you want, but dismissing it without addressing it is unconvincing.

    • There’s also an extremely straightforward argument that if the current crop of authoritarian, dictatorial players now in power had been in power then, the outcome of the latter half of the 20th century would have been much different.


    • Great, now go ahead and prove that AI also reaches strategic equilibrium. With nuclear weapons this was pretty much self-evident, so if it were true of AI it should probably be self-evident too.

    • That's a little like saying the bullet in the gun prevented someone from getting shot while playing Russian roulette. We pulled back that hammer several times, and it's purely happenstance that it didn't go off. MAD has that acronym for a reason.


  • So would you have preferred that the Nazis develop the most powerful weapons and win the world war (which they were trying to do)?

    • With the benefit of hindsight we know the Nazis in fact were not racing to develop The Bomb. It was a reasonable assumption to orient around at the time, though.


    • If Anthropic does give the DoD what it wants, does that magically stop China, Iran, Russia, etc. from advancing in AI arms development?

      If Anthropic doesn't give the DoD what it wants, does that mean that China, Iran, Russia, etc. magically leapfrog not only Anthropic but the entire US defense industry, and take over the planet?
