No Target, No Trigger, No Mercy

In 1943, a German pilot spared a shattered Allied bomber, choosing mercy where orders demanded execution. AI in warfare would not have paused. It would have scanned, confirmed, and fired, not from hatred but from code. Humans still draw fragile lines in war: a flag, a hand, a refusal. Machines do not see lines, only patterns, and once flagged as enemy, context collapses.

AI in Warfare and the Ethical Line Between Conduct and Execution

In 1943, high above Germany, an Allied bomber was falling apart. The crew was injured. The aircraft barely held shape. It had no chance of making it home.

A German fighter approached. At its controls was Franz Stigler, a decorated pilot close to earning another medal. One more kill would do it.

He flew beside the bomber. He saw blood on the fuselage. He saw men slumped in their seats.

No one fired.

Stigler held position. He stayed with them until the danger passed. Then he turned away. He made a decision his uniform didn’t permit. He spared them, not as a gesture, but as a refusal.1

The system told him to finish the job. He chose not to. That moment cannot be programmed.

The Logic of Instruction

An AI system would not have paused. It would not have seen a cockpit. It would not have recognised pain. It would have scanned the object, confirmed the parameters, and fired. Not out of malice. Out of instruction.

That is what makes it dangerous.

There is no room for hesitation. No space for mercy. No instinct that might override the logic of the mission. A system like that does not fight war. It executes it.2

The Human Line in AI Warfare

Warfare has always been brutal. But within it, there have been lines. Thin ones. Sometimes only visible because a human chose not to cross them. A white flag. An open hand. A decision to wait.

These are not relics. They are the last proof that someone was still capable of seeing the person behind the uniform.

Machines do not recognise persons. Only patterns. And once a pattern is flagged as enemy, every layer of context collapses.3

One Day in the Trenches

At Christmas 1914, on the Western Front, soldiers climbed out of their trenches and crossed into no man’s land. They shook hands. They sang. Some exchanged food. Others buried the dead. It lasted a day. No command ordered it. No treaty allowed it. It happened because the people holding the rifles chose to put them down.

That moment was not a weakness. It was a boundary. A shared understanding that even in war, not everything should be permitted. Not everything must be done.

A machine will never call that off.4

What We’re Really Testing

What is being tested now is not just how AI fights, but whether we are willing to fight without a conscience.

Because once a weapon cannot disobey, every decision becomes irreversible. Once the trigger is code, there is no one to blame. And once we stop insisting on the right to stop, to see, to choose differently, we have not built an ally.

We have built a war crime that operates without pause.5

References


  1. Makos, A., & Alexander, L. (2012). A higher call: An incredible true story of combat and chivalry in the war-torn skies of World War II. Dutton Caliber.
  2. United Nations Institute for Disarmament Research. (2014). The weaponization of increasingly autonomous technologies: Considering how meaningful human control might move the discussion forward. UNIDIR. https://unidir.org/publication/weaponization-increasingly-autonomous-technologies
  3. International Committee of the Red Cross & Stockholm International Peace Research Institute. (2020). Limits on autonomy in weapon systems: Identifying practical elements of human control. ICRC & SIPRI. https://www.icrc.org/en/document/limits-autonomous-weapons
  4. Imperial War Museums. (n.d.). The real story of the Christmas truce. https://www.iwm.org.uk/history/the-real-story-of-the-christmas-truce
  5. International Committee of the Red Cross. (2019, June 6). Artificial intelligence and machine learning in armed conflict: A human-centred approach. International Committee of the Red Cross.

Diogo Mendes

Diogo Vicente Mendes writes on Medium and for Digital Peace as an AI Structural Ethicist, focusing on the architectures of risk, refusal, and accountability in emerging technologies. His perspective is rooted in lived practice and a diverse background observing and caring for systems under strain, from healthcare to digital platforms. He uses writing to map where systems break, and where human choice still matters.
