The Point Of Ethics Controls Its Content

Too often, systems of morality and/or ethics (which I’ll shorten to ethical systems to save the fingers) are taken to be semi-arbitrary masses of rules, obeyed or not without a great deal of thought as to the reasons behind the strictures – and whether those reasons are truly timeless, or actually context-dependent. This is an important, and perhaps underappreciated, aspect of Artificial Intelligence development. I was recently struck by this in an article on the Trolley Problem in New Scientist (27 October 2018, paywall). The Trolley Problem is a thought experiment in which someone must choose who, based on category, is to be killed by a runaway trolley in order to save others.

This has become interesting for AI investigators as the somewhat silly development of driverless cars careers along, and someone decided to do a world-wide survey:

Overall, people preferred to spare humans over animals and younger over older people, and tried to save the most lives. The characters that people opted to save least were dogs, followed by criminals and then cats (Nature, DOI: 10.1038/s41586-018-0637-6).

Edmond Awad at the Massachusetts Institute of Technology and his colleagues think these findings can inform policy-makers and the experts they may rely on as they devise regulations for driverless cars. “This is one way to deliver what the public wants,” he says.

The team found that people in regional clusters made similar decisions. In an Eastern cluster, which included Islamic countries and eastern Asian nations that belong to the Confucianist cultural group, there was less of a preference to spare the young over the old, or to spare those with high status. Decisions to save humans ahead of cats and dogs were less pronounced in a Southern cluster, which included Central and South America, and countries with French influence. The preference there was to spare women and fit people.

Many technology researchers and ethicists told New Scientist they thought the results shouldn’t be used to set policy or design autonomous vehicles because that would simply perpetuate cultural biases that may not reflect moral decisions.

As if there were one universal moral system (and, irrelevantly, one that could be applied at highway speeds in crisis conditions). And whose would it be?

Look, ethical systems don’t exist for giggles, but to facilitate societal survival. Generally, we see this as a set of rules for inter-personal interactions – how we treat each other. However, some rules are oriented not on that basis, but on how to value the individual in a crisis situation.

Think of it this way: the potential, skills, and talents of an individual are the principal parts of the value that individual brings to the society. Those first three parts are obviously variable, and while, no doubt, many folks who think simple existence is miraculous are squealing at me now, the Universe has rarely, if ever, put much value on simply existing. And the point of the Trolley Problem exercise is to understand how a society values its citizens (among other, more interesting, questions).

But there’s a fourth variable in my observation, and that’s society. Yes, societies do differ, and are forced to differ, in so many ways: geography, natural resources, the skill set of the average inhabitant, the fertility of the inhabitants. Most of these will have an impact, mostly subtle, on a society’s ethical system. Let’s pull out a coarse example.

Suppose Inhabitant A knows how to make bronze, an important part of the armaments necessary to defend this society from the predations of the barbarians on the other side of the mountains. Now let’s put him in the Trolley Problem. He’s gotten his foot stuck in the track, here comes the trolley, and on the other fork of the track is … a bunch of children in similar straits!

Do you sacrifice A or the kids?

Well, I left out some key information: how many other citizens know how to make bronze? Many? Then saving the kids might be the right answer. But what if only he and maybe his hermit half-brother know how to make bronze, and we’re not sure about the hermit?

Maybe those kids shouldn’t have been playing on the tracks, eh? “A” may be critical to this society’s survival.

If you obsessively attempt to apply your native society’s moral system to that situation and kill the guy with the knowledge of how to make bronze, you may have just doomed that society.
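
To make that concrete, here’s a minimal sketch – purely hypothetical, all names and numbers invented for illustration – of a value function in which the worth of sparing someone depends on how scarce their skills are in the particular society doing the valuing:

```python
# Hypothetical sketch: the weight a society places on sparing someone
# depends partly on how scarce that person's skills are *in that society*.
# All constants here are invented for illustration.

BASELINE = 1.0                # value of simply being a member of the society
CRITICAL_SKILL_WEIGHT = 10.0  # value of a skill the society depends on

def societal_value(skills, skill_holders):
    """Baseline worth plus a scarcity-weighted bonus per skill."""
    value = BASELINE
    for skill in skills:
        holders = skill_holders.get(skill, 0)
        if holders:
            # The fewer people who hold the skill, the larger the bonus.
            value += CRITICAL_SKILL_WEIGHT / holders
    return value

def trolley_choice(a_skills, kids, skill_holders):
    """Send the trolley down the fork with the lower total value."""
    value_a = societal_value(a_skills, skill_holders)
    value_kids = sum(societal_value(k, skill_holders) for k in kids)
    return "spare A" if value_a > value_kids else "spare the kids"

kids = [[] for _ in range(5)]  # five children, no rare skills yet

# Society 1: forty bronze-makers -- A is replaceable, save the kids.
print(trolley_choice(["bronze_making"], kids, {"bronze_making": 40}))
# Society 2: A alone (well, maybe the hermit too) knows bronze.
print(trolley_choice(["bronze_making"], kids, {"bronze_making": 1}))
```

Run it and the same crisis resolves differently: when forty citizens can make bronze, the kids win; when A is the only confirmed bronze-maker, A does.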

Ethical systems exist to help societies survive, and the contexts societies exist in differ. So when I see these ethicists solemnly proclaim that you can’t use that survey to construct the moral system of your AI, it tells me these ‘experts’ have persistent blinders on. I’m not sure they really even have a clue.
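
For what it’s worth, if one did want to fold the survey into a machine’s decision weighting – a hypothetical sketch of my own, not anything Awad’s team or the ethicists propose, with invented categories and numbers – the regional clusters suggest per-society preference tables rather than one universal rule:

```python
# Hypothetical sketch: per-society preference weights in the spirit of
# the survey's regional clusters. Categories and numbers are invented
# for illustration, not taken from the paper.

PREFERENCE_WEIGHTS = {
    # Western-style cluster: stronger preference for sparing the young.
    "western":  {"young": 1.5, "old": 0.7, "fit": 1.0},
    # Eastern cluster: weaker preference for the young over the old.
    "eastern":  {"young": 1.1, "old": 1.0, "fit": 1.0},
    # Southern cluster: stronger preference for sparing fit people.
    "southern": {"young": 1.3, "old": 0.8, "fit": 1.4},
}

def group_value(members, region):
    """Score a group of pedestrians under a region's weights."""
    weights = PREFERENCE_WEIGHTS[region]
    return sum(weights.get(category, 1.0) for category in members)

def choose_fork(left, right, region):
    """Spare the higher-valued group; the other fork gets the trolley."""
    return "left" if group_value(left, region) >= group_value(right, region) else "right"

# The same crisis, valued differently by different societies:
left, right = ["old", "old", "fit"], ["young", "young"]
for region in PREFERENCE_WEIGHTS:
    print(region, "-> spare", choose_fork(left, right, region))
```

Same two groups on the tracks, and the answer flips depending on which society’s weights are loaded – which is exactly the point: the ‘right’ answer is a property of the society, not of the universe.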

Maybe professional philosophers would be a better choice, although no doubt the ethicists think they are professional philosophers. But from this angle, I don’t see it.

And I shan’t even guess at how to implement this moral system for the driverless car so it works acceptably well in various societies. Not even a fucking hand-flap.
