When it comes to killer robots, researchers are up in arms – sorry about that. Jeremy Hsu reports in Discover's Lovesick Cyborg blog that researchers have forced an institution to clarify the purpose of a research project:
It all began in late February when a Korea Times article reported on a leading South Korean defense company teaming up with a public research university to develop military AI weapons capable of operating without human supervision. By March, a group of more than 50 AI researchers from 30 countries had signed an open letter addressed to KAIST, the South Korean university involved in the AI weapons project, that declared the signatories would boycott any research collaborations with the university. …
KAIST quickly responded to the open letter from the international research community. By early April, KAIST President Sung-chul Shin had put out a statement to allay researchers’ concerns: “KAIST does not have any intention to engage in the development of lethal autonomous weapons system and killer robots.” In acknowledgement, the researchers called off the proposed boycott.
The apparent success of the proposed boycott could inspire future campaigns to follow in its footsteps. Even without such campaigns, many universities would likely think twice about pursuing research projects that could lead to lethal autonomous weapons. [Stuart] Russell noted that the AI research community would have to remain watchful.
Unfortunately, the piece does not address the inherent instability of a regime in which the weapons under development are not easily detectable and require little more than computer science research and some minor weapons engineering. Nuclear weapons, by contrast, are detectable at a distance and demand scarce natural resources and large-scale industrial infrastructure. Because of these differences, I hesitate to draw analogies with our more or less successful management of nuclear weapons.