Comment by jacquesm

9 days ago

> i hope we never assign a piece of code, AI or not, to be the decision maker.

That ship has already sailed, just not at Airbus.

> just not at Airbus

Airbus has publicized that it is working on a Project Maven style project with France's DGA [0][1].

Thales also publicly launched and demonstrated SkyDefender a couple days ago [2].

Mistral AI also announced in January 2026 that it is working with the DGA to productionize its models for military applications [3], ironically similar to how the DoD was using Claude but is now using Gemini and GPT.

No country is going to leave networked, autonomous offensive and defensive capabilities on the table.

[0] - https://www.reuters.com/business/aerospace-defense/airbus-wi...

[1] - https://www.janes.com/osint-insights/defence-news/defence/ai...

[2] - https://www.janes.com/osint-insights/defence-news/air/thales...

[3] - https://www.linkedin.com/posts/marjorietoucas_were-happy-to-...

Yes, we are ~4-5 years into AI kill chains now, though maybe only 1-3 with full autonomy.

  • You haven't been paying attention. We are at least 47 years into AI kill chains.

    https://www.vp4association.com/aircraft-information-2/32-2/m...

    • That's not really what they meant. They meant a weapon guided by software that decides which targets to pick, and that makes that decision autonomously, without a human in the loop. The device seeks you out instead of you going to it.

      A landmine has no friend-or-foe-or-noncombatant decision engine; it will kill or maim you just as it will kill or maim the guy who laid it or any other passer-by.

    • Those don't self launch.

      I'm talking about systems that classify thousands of targets at once and can self-launch. Computerized kill chain.
