Dustin Lewis, Naz Modirzadeh, and Gabriella Blum report on Lawfare that the Pentagon is moving into the killer robots field, which they call algorithmic warfare:
In April 2017, the Pentagon created an “Algorithmic Warfare Cross-Functional Team,” pending a transfer of $70 million from Congress. The premise of this initiative is that maintaining a qualitative edge in war will increasingly require harnessing algorithmic systems that underpin artificial intelligence (AI) and machine learning (ML). This realization is not unique to the United States: while the Pentagon’s algorithmic-warfare team gets up and running, other countries are also seeking to integrate AI and ML into various military functions. As armed forces race to secure technological innovations in these fields, it is imperative to match those developments with sound regulatory responses.
The broad remit of this new Department of Defense (DoD) team—to consolidate “existing algorithm-based technology initiatives related to mission areas of the Defense Intelligence Enterprise”—underscores that it is not just weapons that are of interest; far from it. Think logistics, communications, situational awareness, and intelligence collection management, among many other possibilities. And a May 2017 report from the Hague Centre for Strategic Studies explains that other countries—including China and Russia, as well as several traditional U.S. partner forces—are also pursuing an edge through diverse algorithmically-derived functions related to war.
They note that some nations have preceded us down this path, most notably China. The new team's first task is monitoring and labeling video feeds from current battlefields. They worry:
Without context, a mere “data-labeling effort” might sound benign. But the setting for this Pentagon Team’s first assignment is reportedly U.S. operations directed against ISIS (and others) in Iraq and Syria. “Labeling” such data may implicate an array of IHL/LOAC concerns, such as the status of the individual under scrutiny: Does he or she qualify as a combatant, as a civilian, as a member of an organized armed group, as a civilian directly participating in hostilities, as religious personnel, as medical personnel, or as something else? The stakes are extremely high as, under IHL/LOAC, status is a key determinant for whether an individual may be subject to targeting in direct attack. In some cases, the determination of status is relatively straightforward. In many others, however, it can be very difficult.
That is true for human operators as well, of course. I'm not sure I'd tag this as something to be especially worried about. The computer as a generic force multiplier worries me more.
In related news, the Campaign to Stop Killer Robots is disappointed that a scheduled conference on the subject has been canceled:
The Campaign to Stop Killer Robots is deeply disappointed that the Convention on Conventional Weapons (CCW) has cancelled a crucial week of formal discussions on fully autonomous weapons in August. This step was taken because of the failure of several states, most notably Brazil, to pay their assessed dues for the convention’s meetings. …
… on 30 May, the CCW’s president-designate Ambassador Matthew Rowland of the UK announced that the Group of Governmental Experts meeting scheduled for 21-25 August has been cancelled due to a lack of funds. Rowland issued several warnings that the lack of payment of assessed financial contributions would mean the likely cancellation of CCW meetings planned for 2017.
Several countries have financial arrears from previous years, but according to the UN’s official summary, Brazil accounts for 86 percent of the outstanding contributions due to four core humanitarian disarmament treaties, including the CCW. Brazil last paid its assessed CCW contributions in 2010. The Campaign to Stop Killer Robots has appealed to Brazil to pay its outstanding contributions without delay and it challenges CCW states to achieve cost saving measures in other ways that do not require the cancellation of key meetings.