Preventing Keith Laumer’s Bolo, Ctd

Stanford ethicist Jerry Kaplan wonders aloud in The New York Times why robot weapons cause concern:

The authors of the letter liken A.I.-based weapons to chemical and biological munitions, space-based nuclear missiles and blinding lasers. But this comparison doesn’t stand up under scrutiny. However high-tech those systems are in design, in their application they are “dumb” — and, particularly in the case of chemical and biological weapons, impossible to control once deployed.

A.I.-based weapons, in contrast, offer the possibility of selectively sparing the lives of noncombatants, limiting their use to precise geographical boundaries or times, or ceasing operation upon command (or the lack of a command to continue).

Consider the lowly land mine. Those horrific and indiscriminate weapons detonate when stepped on, causing injury, death or damage to anyone or anything that happens upon them. They make a simple-minded “decision” whether to detonate by sensing their environment — and often continue to do so, long after the fighting has stopped.

A few problems here:

1. The unstated assumption is that A.I. weapons will remain under the control of their creators.  A fully capable AI, however, is implicitly its own agent, because the capabilities required in the field imply the ability to both recognize and choose.  The first of these, recognize, may require only a very limited AI (more of a Big Data application), but the second, choose, depends on both the first and an assessment of the situation; deferring to signals from superiors implies both an uncompromised communications channel and an uncompromised loyalty to those superiors.  (I sketch this recognize/choose split at the end of this point.)

In general, he could argue that only a very limited AI is required, but the more limited your AI, the less capable it’ll be.  And the less capable your weapon, most conventional war leaders will believe, the less chance they’ll have of winning – or even of discouraging the next war.  (The unconventional leaders, a la the Resistance, will, as always, depend on wit more than firepower, if only out of necessity.)  The arms race will inevitably ramp up the AI … and, at some point, it’ll become self-directing – and it may choose not to worship its makers.  He may believe:

Then there’s the question of whether a machine — say, an A.I.-enabled helicopter drone — might be more effective than a human at making targeting decisions. In the heat of battle, a soldier may be tempted to return fire indiscriminately, in part to save his or her own life. By contrast, a machine won’t grow impatient or scared, be swayed by prejudice or hate, willfully ignore orders or be motivated by an instinct for self-preservation.

I disagree; see this post, where I discuss the possible reactions of an A.I. robot facing combat.
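To make the recognize/choose split concrete, here’s a minimal Python sketch – all names hypothetical, nothing drawn from Kaplan’s piece or any real system.  Recognition can be a narrow classifier, but choice only stays “under control” so long as the command channel is reachable and trusted; take that channel away, and whatever policy remains is the machine acting as its own agent.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Optional


class Target(Enum):
    NONCOMBATANT = auto()
    COMBATANT = auto()
    UNKNOWN = auto()


@dataclass
class Command:
    authenticated: bool  # did the order verify against the command channel?
    engage: bool         # did superiors actually order an engagement?


def recognize(sensor_frame: bytes) -> Target:
    """'Recognize': a narrow classifier over sensor data.

    No general intelligence needed -- a trained model or even a rule
    set over camera/IR signatures could fill this slot.
    """
    return Target.UNKNOWN  # placeholder for a real vision model


def choose(target: Target, command: Optional[Command]) -> bool:
    """'Choose': decide whether to engage.

    Depends on the recognition result, an assessment of the situation,
    and a command channel that is both reachable and trusted.  When the
    channel fails, whatever logic sits here is the weapon deciding for
    itself -- that is, acting as its own agent.
    """
    if target is not Target.COMBATANT:
        return False  # never engage anything not recognized as a combatant
    if command is None or not command.authenticated:
        return False  # comms lost or compromised: the machine is on its own
    return command.engage  # otherwise, defer to superiors
```

The interesting (and dangerous) design question is what replaces that second `return False` once an arms race demands the weapon keep fighting when cut off.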

2. It’s not clear he understands AI.  His next paragraph reads:

Now imagine such a weapon enhanced by an A.I. technology less sophisticated than what is found in most smartphones. An inexpensive camera, in conjunction with other sensors, could discriminate among adults, children and animals; observe whether a person in its vicinity is wearing a uniform or carrying a weapon; or target only military vehicles, instead of civilian cars.

He describes technology that I would not classify as AI; worse yet (for him, at least), this technology could in many cases be retrofitted onto current weapons systems and realize safety gains comparable to those of his A.I.-based systems – a rough sketch of the idea follows below.  So why develop a potentially dangerous (to its creators) A.I. system?
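Here’s a rough sketch of what I mean – hypothetical names, toy Python, nothing resembling a real system.  The sensor-based discrimination Kaplan describes is an ordinary filter sitting in front of an existing trigger: classification plus a fixed rule, not an agent.

```python
from dataclasses import dataclass


@dataclass
class SensorReading:
    is_adult: bool
    in_uniform: bool
    carrying_weapon: bool
    military_vehicle: bool


def legacy_trigger(pressure_detected: bool) -> bool:
    """The 'dumb' land mine from his example: fires on any pressure."""
    return pressure_detected


def filtered_trigger(pressure_detected: bool, reading: SensorReading) -> bool:
    """The same trigger, gated by a simple discrimination rule.

    This is pattern recognition plus a fixed policy -- the sort of
    thing that could be bolted onto an existing system -- not an AI
    that assesses situations and chooses for itself.
    """
    plausibly_military = reading.military_vehicle or (
        reading.is_adult and (reading.in_uniform or reading.carrying_weapon)
    )
    return pressure_detected and plausibly_military
```

If the safety gain comes from the filter, it’s the filter that earns the moral credit; the self-directing part is optional.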

3. He is stumbling into morally ambiguous territory:

Neither human nor machine is perfect, but as the philosopher B. J. Strawser has recently argued, leaders who send soldiers into war “have a duty to protect an agent engaged in a justified act from harm to the greatest extent possible, so long as that protection does not interfere with the agent’s ability to act justly.” In other words, if an A.I. weapons system can get a dangerous job done in the place of a human, we have a moral obligation to use it.

If our A.I. weapons system is self-aware and thus a moral agent, do we have an obligation not to place it in a situation where it can be destroyed?  Do we have an obligation not to place it in a situation where it could destroy a human, or another self-aware A.I. weapons system?
