I’ve been somewhat foggy about why the whole self-driving car notion has bothered me. I knew my unease went beyond the technical challenges inherent in the concept, but past that I wasn’t sure. Then I came across an interview New Scientist (20 April 2019) did with novelist Ian McEwan, and just one Q&A cleared up my thinking:
Do you think we’re in trouble because we have become so reliant on these technologies?
With AI, we’re going to have that in spades. Already we’re having to think ethically about autonomous vehicles, and what kind of choices they’re going to make. Do they run over the granny, the dog, the child, or allow the “driver” just to kill themselves in a head-on crash?
We’re suddenly having to devolve these choices to someone else, to something else. The extent to which we devolve moral decisions to machines is going to be a very awkward and interesting ride. I’m sorry to be 70 and not see more of the story. The area where our interaction with machines enters the moral domain is going to be a field day for novelists.
See, we’re moral agents. I’ve never had formal training in this area, but, per usual, I’ll make some stuff up. Moral agency rests on the knowledge that there’s right and wrong: because we’re capable of understanding the concept of moral right and wrong, we’re responsible for offenses against the moral code we live under. Drinking and driving is a rather ironic example, as penalties for moral offenses carried out while drunk are often lighter than those for the same offenses committed without the “mitigating” factor of intoxication. Indeed, it’s intoxication’s effect on our reasoning capabilities that has led society to mistakenly “lift” responsibility for moral offenses, on the grounds that alcohol impairs our ability to recognize moral questions. On the other hand, simply being drunk while driving is itself an offense, a recognition of the danger to members of society when a drunk gets behind the wheel, so in that law there’s at least an attempt to balance out the first mistake.
So let’s apply this to driving. As with most or all things humans do, driving is an activity in which we face moral decisions every time we undertake it. If we hadn’t driven to the store, been distracted by our phone, and hit that little old lady in the crosswalk, then we wouldn’t be guilty of her homicide, and we wouldn’t be full of regret.
But how about that self-driving car? We hop into it, tell it to take us to the store, and while we’re mucking about on our phone, it hits the little old lady and kills her. Again.
Do we feel guilty?
Some might argue ‘no,’ because we weren’t in control of the vehicle; driving it was the task, even the responsibility, of some other entity.
But does that entity recognize the concept of moral right and wrong? Moral agency? No! At least, not yet.
The key is that without a moral agent actually controlling the car, responsibility must fall to whoever motivated the original activity. You decided to go to the store, you decided to take a car rather than ride a bike, and your car hit the little old lady. You weren’t controlling it, but without some other moral agent to blame, you, rightfully, get it pinned on you.
Incidentally, the lack of a moral agent in actual control of the car is the meaningful separator between this example and, say, the assignment of responsibility for an accident caused by a bus driver, or by a military officer obeying an unlawful order from a superior.
So, if I’m going to take the blame for these accidents, I want to be the one in control. I am a competent, if not outstanding, driver, and I’m self-aware enough to know that I can’t daydream or talk on the phone while driving. Even talking to my wife while driving is sometimes a chancy business. And because there is no other moral agent involved in this scenario, I have to be the one driving: that way I can avoid hitting that little old lady with the annoying Pekingese, and not feel guilt for the rest of my life. I must be the active agent.
Society may differ with me on this issue. Perhaps someday the safety levels and efficiencies allegedly achievable with self-driving cars will lead legislatures to mandate that such cars actually use that capability. I can see that happening. The devolvement of a moral question into a technical question is an interesting development, one I hope will get a thoroughgoing debate in public.
But that’s why I doubt I’ll let a self-driving car put me in danger.
And, thanks, Mr. McEwan. I’ve never read your novels, but that was a fab answer to a good question.