Comment by mrandish

4 days ago

Obviously, domestic surveillance of U.S. citizens is bad, but before even getting to that, the thing that doesn't make sense is that it's illegal for the DoD to do that (unless the citizens are military or DoD employees).

And does anyone seriously think developing autonomous kill-bots without a human in the loop in the next three years is something the DoD should be doing unilaterally, without congressional review? Personally, I think autonomous kill-bots are categorically a terrible idea, even with a human in the loop, with congressional review, and even 10 years from now.

However, I can imagine some reasonable people quibbling over saying "never" by citing things like "sufficient safeguards", "congressional oversight", and a future time when AIs don't hallucinate constantly. But none of that is in contention here. The DoD is publicly proclaiming its need to do things right now which are either A) illegal or B) things no serious person thinks are sane.

Personally, I think autonomous kill-bots are categorically a terrible idea, even with a human in the loop, with congressional review, and even 10 years from now.

Pretty sure these exist today...

  • https://en.wikipedia.org/wiki/MIM-104_Patriot

      Patriot was one of the first tactical systems in the U.S. Department of Defense (DoD) to employ lethal autonomy in combat.

    • I was making a distinction between AI-based autonomy, which is less deterministic and currently subject to unpredictable hallucinations, and 'automatic' systems based on if/then heuristics and thresholds, which can range from something as simple as a Claymore mine with a proximity trigger to the MIM-104 you linked.

      I'm not an expert, but my understanding is that the MIM-104 is more akin to complex automatic systems like a modern airliner's autopilot, and both are materially different from transformer-based LLMs.
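
      The distinction being drawn above can be sketched in a few lines. This is a toy illustration with hypothetical names, not a model of any real weapons system: the first function is a deterministic if/then threshold (same inputs always give the same decision, so its behavior is auditable), while the second stands in for a probabilistic ML-style model whose internal sampling means identical inputs can yield different decisions.

      ```python
      import random

      def rule_based_trigger(range_m: float, closing_speed_ms: float) -> bool:
          """Deterministic if/then heuristic: identical inputs always
          produce identical output, so behavior can be fully audited."""
          return range_m < 50.0 and closing_speed_ms > 10.0

      def ml_style_classifier(features: list[float], seed: int | None = None) -> bool:
          """Stand-in for a probabilistic model: the decision depends on
          sampled noise, so the same inputs may not give the same answer."""
          rng = random.Random(seed)
          score = sum(features) / len(features) + rng.gauss(0, 0.1)
          return score > 0.5
      ```

      The point of the contrast: you can exhaustively test the first function against its thresholds, whereas the second can only be characterized statistically.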