AI in Warfare and the Ethical Line Between Conduct and Execution
In 1943, high above Germany, an Allied bomber was falling apart. The crew was injured. The aircraft was barely holding together. It looked as if it had no chance of making it home.
A German fighter approached. The pilot was Franz Stigler, decorated and close to earning another medal. One more kill would do it.
He flew beside the bomber. He saw blood on the fuselage. He saw men slumped in their seats.
No one fired.
Stigler held position. He stayed with them until the danger passed. Then he turned away. He made a decision his uniform didn’t permit. He spared them, not as a gesture, but as a refusal.1
The system told him to finish the job. He chose not to. That moment cannot be programmed.
The Logic of Instruction
An AI system would not have paused. It would not have seen a cockpit. It would not have recognised pain. It would have scanned the object, confirmed the parameters, and fired. Not out of malice. Out of instruction.
That is what makes it dangerous.
There is no room for hesitation. No space for mercy. No instinct that might override the logic of the mission. A system like that does not fight war. It executes it.2
The Human Line in AI Warfare
Warfare has always been brutal. But within it, there have been lines. Thin ones. Sometimes only visible because a human chose not to cross them. A white flag. An open hand. A decision to wait.
These are not relics. They are the last proof that someone was still capable of seeing the person behind the uniform.
Machines do not recognise persons. Only patterns. And once a pattern is flagged as enemy, every layer of context collapses.3
One Day in the Trenches
At Christmas 1914, on the Western Front, soldiers climbed out of their trenches and crossed into no man's land.
They shook hands. They sang. Some exchanged food. Others buried the dead. It lasted a day. No command ordered it. No treaty allowed it. It happened because the people holding the rifles chose to put them down.
That moment was not a weakness. It was a boundary. A shared understanding that even in war, not everything should be permitted. Not everything must be done.
A machine will never call such a truce.4
What We’re Really Testing
What is being tested now is not just how AI fights, but whether we are willing to fight without a conscience.
Because once a weapon cannot disobey, every decision becomes irreversible. Once the trigger is code, there is no one to blame. And once we stop insisting on the right to stop, to see, to choose differently, we have not built an ally.
We have built a war crime that operates without pause.5
References
Header Image generated with ChatGPT
1. Makos, A., & Alexander, L. (2012). A higher call: An incredible true story of combat and chivalry in the war-torn skies of World War II. Dutton Caliber.
2. United Nations Institute for Disarmament Research. (2014). The weaponization of increasingly autonomous technologies: Considering how meaningful human control might move the discussion forward. UNIDIR. https://unidir.org/publication/weaponization-increasingly-autonomous-technologies
3. International Committee of the Red Cross & Stockholm International Peace Research Institute. (2020). Limits on autonomy in weapon systems: Identifying practical elements of human control. ICRC & SIPRI. https://www.icrc.org/en/document/limits-autonomous-weapons
4. Imperial War Museums. (n.d.). The real story of the Christmas truce. https://www.iwm.org.uk/history/the-real-story-of-the-christmas-truce
5. International Committee of the Red Cross. (2019, June 6). Artificial intelligence and machine learning in armed conflict: A human-centred approach.