This thread has been quiet, but here’s Michael C. Horowitz and Julia M. Macdonald on Lawfare comparing the campaign to stop autonomous weapons (one such effort is known as The Campaign To Stop Killer Robots) to the successful anti-landmine campaign. They worry that we haven’t piled up enough bodies yet:
Second, while there was clear evidence of the human costs of the use of landmines around the world, the lack of casualties from the use of lethal autonomous weapon systems muddies the ability of the movement to build public support. The pictures of ordinary people injured and maimed by mines, combined with the casualty statistics, played a key role in shocking and shaming governments to take action. It also bolstered the legal argument that mines violated the proportionality and distinction principles of international humanitarian law. Persuading the international community to ban a technology preemptively without observing these human costs will be difficult. Moreover, autonomous weapon systems are a much broader category than blinding lasers, the only previous technology to be subject to a preemptive ban.
Not only is the international community less emotionally affected, but there remains uncertainty as to whether these weapons would, in fact, inherently transgress international law. It is also possible that autonomous weapon systems might reduce civilian casualties in some cases if they have high levels of accuracy, lack human emotions (e.g. revenge), and do not suffer from the same physical limitations as humans, such as fatigue.
They may have a point. Then again, if enough bodies are piled up, that may also demonstrate the lethality of the system, a desirable trait, no?
Their conclusion:
Therefore, while the Campaign to Stop Killer Robots appears to be following a similar playbook as the ICBL, we should be cautious in drawing too many conclusions about the likelihood of a ban on LAWS [Lethal Autonomous Weapon System]. The differences between the two issue areas—in particular, the lack of consensus around the definition of lethal autonomous weapons, uncertainty as to their military effectiveness, and the current lack of human casualties from the use of these weapons—will make attaining a preemptive prohibition on their use harder to achieve. That being said, continued dialogue and discussion about what LAWS are is essential to determining something very important: agreement on the proper role of humans in decisions about the use of force, and how to best achieve that aim.
I cannot help but notice that they never address the issue that using a potentially sentient artificial entity as a military recruit may raise ethical objections as well.