Tesla’s new AutoPilot feature has been involved in its first fatality. First, what is AutoPilot? From the Tesla blog:
It is important to note that Tesla disables Autopilot by default and requires explicit acknowledgement that the system is new technology and still in a public beta phase before it can be enabled. When drivers activate Autopilot, the acknowledgment box explains, among other things, that Autopilot “is an assist feature that requires you to keep your hands on the steering wheel at all times,” and that “you need to maintain control and responsibility for your vehicle” while using it. Additionally, every time that Autopilot is engaged, the car reminds the driver to “Always keep your hands on the wheel. Be prepared to take over at any time.” The system also makes frequent checks to ensure that the driver’s hands remain on the wheel and provides visual and audible alerts if hands-on is not detected. It then gradually slows down the car until hands-on is detected again.
Which is to say it’s not yet foolproof; it may never be. But then, neither are human drivers. Given the supervisory requirement on the human occupant, I wonder if they would have been better advised to name this CoPilot.
So on to their reaction to the incident:
We learned yesterday evening that NHTSA is opening a preliminary evaluation into the performance of Autopilot during a recent fatal crash that occurred in a Model S. This is the first known fatality in just over 130 million miles where Autopilot was activated. Among all vehicles in the US, there is a fatality every 94 million miles. Worldwide, there is a fatality approximately every 60 million miles. It is important to emphasize that the NHTSA action is simply a preliminary evaluation to determine whether the system worked according to expectations.
Following our standard practice, Tesla informed NHTSA about the incident immediately after it occurred. What we know is that the vehicle was on a divided highway with Autopilot engaged when a tractor trailer drove across the highway perpendicular to the Model S. Neither Autopilot nor the driver noticed the white side of the tractor trailer against a brightly lit sky, so the brake was not applied. The high ride height of the trailer combined with its positioning across the road and the extremely rare circumstances of the impact caused the Model S to pass under the trailer, with the bottom of the trailer impacting the windshield of the Model S. Had the Model S impacted the front or rear of the trailer, even at high speed, its advanced crash safety system would likely have prevented serious injury as it has in numerous other similar incidents.
There are a couple of takeaways here. The first is Tesla’s claim of a first fatality at 130 million miles, which is significantly better than the American and worldwide averages. Of course, this is a very small number (i.e., one fatality) and hardly constitutes an average. But let’s make an unwarranted assumption and suppose these numbers will hold up, even if I do feel it’s a little irresponsible to be quoting them so early in the game.
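To make the small-numbers point concrete, here’s a quick back-of-the-envelope sketch (mine, not Tesla’s): with only one observed fatality, a standard Poisson confidence interval around that “1 per 130 million miles” figure is enormous, and both the US and worldwide averages fall comfortably inside it. The only inputs are the mileage figures quoted above.

```python
# A rough sketch (not from Tesla) of why one fatality in ~130 million miles
# is too little data to call an average. Baselines: 94M miles/fatality (US)
# and 60M miles/fatality (worldwide), per Tesla's statement quoted above.
from scipy.stats import chi2

observed_fatalities = 1
autopilot_miles = 130e6  # miles driven with Autopilot engaged (per Tesla)

# Exact (Garwood) 95% confidence interval for a Poisson count of 1 event
k = observed_fatalities
lo = chi2.ppf(0.025, 2 * k) / 2          # ~0.025 expected fatalities
hi = chi2.ppf(0.975, 2 * (k + 1)) / 2    # ~5.57 expected fatalities

# Convert to a miles-per-fatality interval (note the inversion: the
# high count bound gives the low mileage bound)
miles_per_fatality_low = autopilot_miles / hi    # ~23 million miles
miles_per_fatality_high = autopilot_miles / lo   # ~5.1 billion miles

print(f"95% CI: one fatality per {miles_per_fatality_low/1e6:.0f}M "
      f"to {miles_per_fatality_high/1e6:.0f}M miles")
```

The interval runs from roughly 23 million to over 5 billion miles per fatality, so the single data point can’t yet distinguish AutoPilot from ordinary driving one way or the other.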
The second is the claim that neither AutoPilot nor the driver noticed the truck.
Here’s what engineers and scientists are going to see and hear: aggregate accident and fatality numbers, and how they compare to those of traditionally driven cars. As Tesla’s statement suggests, accident and fatality rates per mile driven are what will matter to engineers who want a clear view of how the system performs, along with questions of efficiency (fuel) and throughput (i.e., easing traffic jams through coordinated driving).
But what about the rest of the population? In fact, it may be unfair to exclude the engineers and scientists, because I’m talking about emotions here. Let’s frame the question this way: your best friend is in a fatal car accident, and you’re sitting at their wake, thinking about their last moments and trying to work out how you feel about it. If they died behind the wheel, perhaps killed through the carelessness of some other driver, it’s going to be a terrible feeling.
But what if they were using AutoPilot, a fully implemented AutoPilot that allowed them to turn their attention elsewhere? I don’t know about you, but to me the idea that some automatic driving system may have been responsible for my death sends a chill down my spine. That’s because I’m no longer the one in control of the car, nor is the thing in control an entity I feel any link with, any sense that I understand how its mind works well enough to trust that it’s well trained and doing its best to drive well.
This is an entity built on silicon and Big Data algorithms, and who knows how it works? Often even the designers don’t understand the details of a particular decision; what chance does that mother with her infant have?
This is the concern that makes me wonder if we’ll ever really embrace self-driving cars. Science geeks will look at the numbers and see that accidents and fatalities are down, various efficiencies are up, and it’s all working out. But the real fight will be over control, and over trust in the entity doing the controlling. If it’s not understandable, will the average driver be willing to entrust their life to it?
I’m not sure.
With regard to the second point, I wonder if we’re going to expect technology to be better than we are. I think the potential is there, but is it right for Tesla to stress that neither AutoPilot nor the driver noticed the truck? Are they certain the driver was paying attention at the critical moment? And if we do come to expect technology to outclass us as drivers, will that lead to risky, unjustified behavior?