Preventing Keith Laumer’s Bolo, Ctd

On this thread, it may not quite be Laumer’s lethal Bolo machine, but we’re creeping closer. The US Army wants its cannon shells to be guided by “AI,” as noted by NewScientist (24 August 2019):

Artificial intelligence may soon be deciding who lives or dies. The US Army wants to build smart, cannon-fired missiles that will use AI to select their targets, out of reach of human oversight. The project has raised concerns that the missiles will be a form of lethal autonomous weapon – a technology many people are campaigning to ban.

The US Army’s project is called Cannon-Delivered Area Effects Munition (C-DAEM). Companies will bid for the contract to build the weapon, with the requirements stating it should be able to hit “moving and imprecisely located armoured targets” whose “exact position has high uncertainty”. Unlike laser-guided weapons, which hit a target highlighted by a human operator, C-DAEM will find targets for itself.

A parallel project will aim to develop algorithms for the weapons. These will be similar to face-recognition algorithms, but will use infrared cameras instead of traditional ones to identify targets, such as tanks. Each missile will contain a chip like those found in smartphones for running the algorithms.
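
To make that concrete, here’s a toy sketch of the kind of processing involved, with simple hot-spot thresholding plus template correlation standing in for the trained classifier the Army actually wants. Every name, number, and threshold here is hypothetical; a real seeker would run a neural network on dedicated silicon, not this.

```python
import numpy as np

# Toy sketch only: hot-spot thresholding plus template correlation as a
# crude stand-in for on-board infrared target recognition. All values
# here are hypothetical and chosen purely for illustration.

def hot_spots(frame, threshold):
    """Coordinates of pixels whose apparent temperature exceeds threshold."""
    ys, xs = np.where(frame > threshold)
    return list(zip(ys.tolist(), xs.tolist()))

def correlation(patch, template):
    """Pearson correlation between a candidate patch and a heat template."""
    p = patch - patch.mean()
    t = template - template.mean()
    denom = np.sqrt((p * p).sum() * (t * t).sum()) + 1e-12
    return float((p * t).sum() / denom)

# A hypothetical 8x8 heat signature: a Gaussian blob (say, a warm engine deck).
yy, xx = np.mgrid[0:8, 0:8]
template = np.exp(-((yy - 3.5) ** 2 + (xx - 3.5) ** 2) / 8.0)

# Synthetic IR frame: cool background clutter plus one warm blob at (30, 30).
rng = np.random.default_rng(0)
frame = rng.normal(20.0, 2.0, size=(64, 64))
frame[30:38, 30:38] += 15.0 * template

for y, x in hot_spots(frame, threshold=30.0):
    patch = frame[y - 4:y + 4, x - 4:x + 4]      # window centered on the spot
    if patch.shape == (8, 8) and correlation(patch, template) > 0.5:
        print(f"candidate target near ({y}, {x})")
        break
```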

The specifications include the ability to slow down and search for targets: a sort of missile that isn’t self-propelled, if you will.

The Army presents this as an improvement on cluster shells, which have a dud rate above 1%, and that may be true. But even if the dud rate on these shells is below 1%, will it be 0%? If it isn’t, every fire mission becomes a delivery system for handing advanced technology to your enemy. We’ve already seen commercial drones co-opted into weapon delivery systems; now imagine a purpose-built weapons technology co-opted by your enemies. That has happened many times before, but it’s one of those outcomes you don’t want to design into a new system.
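
Some back-of-the-envelope arithmetic makes the worry concrete. The fire-mission size and dud rates below are hypothetical, but the point holds at any realistic scale:

```python
# Back-of-the-envelope: even a small dud rate leaves hardware on the field.
# The shell count and dud rates are hypothetical, for illustration only.
shells_fired = 10_000                       # a sustained campaign's worth
for dud_rate in (0.01, 0.005, 0.001):       # 1%, 0.5%, 0.1%
    expected_duds = shells_fired * dud_rate
    print(f"{dud_rate:.1%} dud rate -> ~{expected_duds:.0f} intact "
          f"guidance packages left for an adversary to recover")
```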

And I suspect duds aren’t the only issue. What if a shell is fired and there’s no identifiable target to hit? How does that shell self-destruct without potential harm to civilians?
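
To make the question concrete, here’s a hypothetical sketch of the endgame decision such a shell would have to make. None of this reflects the actual C-DAEM design, which isn’t public; note that the one option that never risks a detonation near civilians is exactly the one that produces a dud.

```python
# Hypothetical fail-safe logic, sketched only to frame the dilemma.
# States, names, and the confidence threshold are all invented here.
from enum import Enum, auto

class Outcome(Enum):
    ENGAGE = auto()
    DIVERT_TO_SAFE_AREA = auto()
    INERT_IMPACT = auto()

def endgame(target_found: bool, safe_area_reachable: bool,
            confidence: float, threshold: float = 0.9) -> Outcome:
    """Decide what the shell does when its search window closes."""
    if target_found and confidence >= threshold:
        return Outcome.ENGAGE
    if safe_area_reachable:
        return Outcome.DIVERT_TO_SAFE_AREA  # detonate away from people
    return Outcome.INERT_IMPACT             # don't detonate at all -- which
                                            # is how duds end up on the field

print(endgame(target_found=False, safe_area_reachable=False, confidence=0.0))
```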

It leaves me wondering whether the cost to friendly forces of using these shells may be greater than is acceptable.

And I’ll just lightly touch on the topic of these shells being truly artificially intelligent, i.e., having self-agency. In that case, you have either a suicide weapon of a deeply unsettling kind, or a highly dangerous weapon bent on vengeance against its creators. I doubt there’s any need for these shells to gain self-agency, but since we don’t have any technology that even approaches it, it’s hard to say whether self-agency is very difficult to achieve, or a slippery slope down which we, or the machine, slide.
