Continuing this thread, on Lawfare Gabriella Blum, Dustin Lewis, and Naz Modirzadeh discuss a purported new concept they call war algorithms:
How should policymakers, technologists, armed forces, lawyers, and others conceptualize accountability for technical autonomy in relation to war? In a recently-published briefing report from the Harvard Law School Program on International Law and Armed Conflict, we devise a new concept: war algorithms. We define a war algorithm as any algorithm that is expressed in computer code, that is effectuated through a constructed system, and that is capable of operating in relation to armed conflict.
Why focus on war algorithms? The background idea is that authority and power are increasingly expressed algorithmically—in war as in so many other areas of modern life.
They note that programming computers for war is not entirely new, but I think they would assert that the algorithms of interest today are more narrowly focused and have less applicability outside the conflict domain than previous iterations; whether this is true is beyond my technical expertise. They continue:
The underlying algorithms are developed by programmers and are expressed in computer code. Yet some of these algorithms seem to challenge key concepts—including attribution, control, foreseeability, and reconstructability—that underpin legal frameworks regulating war and other related accountability regimes.
As we see it, the current crux is whether certain advances in technology are susceptible to regulation and, if so, whether and how they should be regulated. In particular, we are concerned with technologies capable of “self-learning” and of operating in relation to war and whose “choices” may be difficult for humans to anticipate or unpack or whose “decisions” are seen as “replacing” human judgment.
Indeed. This sparked some thoughts of my own. Suppose you had a prisoner of war (POW) encampment, and further suppose you used an AI for security at the POW camp.
One night, the AI massacres the prisoners, claiming it had computed that an escape attempt was imminent and that this was the most efficient way to stop it. This is clearly in contravention of the Geneva Conventions.
Who’s punished? Can you punish an AI? Unless it has reached self-awareness and has a drive for survival, punishment is probably a misnomer; an AI manufactured purely for war purposes has no expectation of freedom or of societal autonomy. What does it mean to punish the AI in that context? Punish the programmers? The commanders who chose to deploy the AI?
Does this just turn into cover for committing offenses against the Geneva Conventions?