Killer robots have been flying under the radar of late, but Hayley Evans and Natalie Salmanowitz report for Lawfare on an upcoming March meeting of the U.N.’s Group of Governmental Experts (GGE) to discuss developments in the field of lethal autonomous weapons systems (LAWS), and they give the reader some background.
> The August meeting was notable for two final reasons. First, according to commentary on the meeting by Reaching Critical Will (the disarmament division of the Women’s International League for Peace and Freedom and a frequent commentator on CCW meetings), the U.S. and Russia shocked other members of the GGE by doubting the relevance of international humanitarian law to autonomous weapons systems—even though prior GGE meetings appeared to take the applicability of such law as a given. In response, multiple states—such as Costa Rica, Panama, China and Cuba—pushed back, proposing a variety of solutions ranging from maintaining an explicit reference to international humanitarian law to mentioning the U.N. Charter. The GGE’s report—per the recommendation of China—“affirmed that international law, in particular the United Nations Charter and [IHL] as well as relevant ethical perspectives, should guide the continued work of the Group.”
When it comes to denying the applicability of international humanitarian law, I’m left with two possible motivations:
- The applicable laws would interfere with deployments of the weapons systems in question. How this might be, I’m not sure; then again, I’m more or less completely ignorant of the field.
- They’re laying the groundwork for the idea that non-human entities, artificial or not, sentient or not, are not bound by human law. I suspect that, in technical terms, the task at hand is simplified if there’s no need to even pretend to comply with international law.
> Second, much of the GGE’s debate centered on broader messaging concerns. Whereas some states, like the U.S., urged the GGE to discuss the benefits of LAWS (such as the capacity for greater targeting precision and less collateral damage), others fervently opposed any mention of such benefits absent an accompanying explanation of the associated risks. Similarly, a handful of states stressed the importance of “avoid[ing] the image that states believe” LAWS “are already in operation”—or “that these systems will be in operation one day.”
My impression is that the risks of advanced weaponry are far less well understood than the benefits – and that, sometimes, a risk is simply the obverse, or even the cost, of a benefit.
But the entire idea of messaging strikes me as a trifle absurd. Public opinion will have little effect on those responsible for the deploy / no-deploy decision in the field. The technology is becoming available, and, because tools are not moral agents, it’ll continue to be developed for positive social ends – and quietly be co-opted by munitions manufacturers as needed.
Will we end up in another MAD (Mutually Assured Destruction) world? Given overpopulation and basic human biological urges, I think the answer will be yeah.