Comment by hintymad

5 hours ago

Honest question: why do people automatically equate "fully autonomous weapons" with something like a killer robot? My immediate reaction is that even the best-in-class rapid-fire gun has a hard time identifying and tracking drones. So we'd need AI to do better tracking, which leads to a fully autonomous weapon. And I really don't get why that's a bad thing.

Of course, a company should have the freedom to choose not to do business with the government. I just think that automatically assuming the worst intentions of the government is less productive than setting up a good-enough legal framework to limit the government's power.

What you are describing would be "partially autonomous." Per Dario Amodei's original statement (https://www.anthropic.com/news/statement-department-of-war), he had no issue with that. "Fully autonomous" specifically means that the AI chooses a target and engages without any human intervention at all. If the human selects or approves a target, and the weapon then automates tracking and engagement, that's still only partially autonomous.

I’m not sure that “killer robot” is the actual concern outside of media hyperbole. I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

In a world where LLMs produce very convincing but subtly wrong output, this makes me uncomfortable. I get that warfare without AI is a thing of the past now, but war, rules of engagement, AI output, etc. all seem fuzzy enough that this is not yet a good call, even if you agree with the end goals.

  • > I’m imagining a loitering munition-type drone that has some kind of targeting package loaded into it with different parameters describing what it should seek and destroy. Instead of waiting for intelligence and using human command to put the munition on target, it hangs out and then engages when it’s certain enough that it’s found something valid.

    I'm sorry, you've just literally described a "killer robot" in more words.

    • The only saving grace is that the killbots had a pre-set kill limit which I exceeded by throwing wave after wave of my own men at them until they simply shut down.

    • Yeah, I guess my point is that “killer robot” evokes a Terminator-like image for a lot of people. Something that marches around and kills of its own accord. I don’t like either one, but I don’t think they’re the same thing.

  • Dario himself said that he was against using Claude to build a fully automated weapon because the technology was far from perfect, so he didn't want to hurt our soldiers or innocent people. I think his description matched a killer robot, and I don't agree with his reasoning, because military researchers have the agency to find out for themselves what works and what doesn't.

    • On the other hand military researchers once considered training pigeons to act as torpedo guidance systems by pecking on levers.

We have traditional autonomous weapons (and counter-defenses). They operate on millisecond or faster timescales with existing RF sensors. They are not and will not be using LLMs or other transformers. Maybe ChatGPT will update some realtime Ada code; some of that stuff is formally verified, so maybe that won't be terrifyingly dangerous.

Where transformer-based autonomous munitions will be used is basically "here is a photo of a face, find and kill this human": loitering munitions will take their time analyzing video and then decide to identify and attack a target on their own.

EDIT: Or worse: "identify suspicious humans and kill them"

We all do business with the government. We pay the military to protect our gold. It is fundamentally a protection racket that we voted for. And one could argue that the military, as the protector of your gold, has the final decision as to what it can and can't do with your technology.

Oh, you think the current administration only wants robots that kill other robots! Sweet Summer Child!

It's not fully autonomous ice cream machines, it's fully autonomous _weapons_. Are you stupid or are you dumb? I don't think you're asking an honest question.

Please describe what kind of fully autonomous weapons system the Pentagon would build that wouldn't be designed to kill people.

For that matter, explain why the Pentagon would balk at the idea of not spying on every American.