You’ve probably heard about this, but I’ll mention it anyway, from NewScientist:
The algorithms that ride-hailing companies, such as Uber and Lyft, use to determine fares appear to create a racial bias.
By analysing transport and census data in Chicago, Aylin Caliskan and Akshat Pandey at The George Washington University in Washington DC have found that ride-hailing companies charge a higher price per mile for a trip if the pick-up point or destination is a neighbourhood with a higher proportion of ethnic minority residents than for those with predominantly white residents.
“Basically, if you’re going to a neighbourhood where there’s a large African-American population, you’re going to pay a higher fare price for your ride,” says Caliskan.
Uber & Lyft are not happy:
“We recognise that systemic biases are deeply rooted in society, and appreciate studies like this that look to understand where technology can unintentionally discriminate,” said a Lyft spokesperson. “There are many factors that go into pricing – time of day, trip purposes, and more – and it doesn’t appear that this study takes these into account. We are eager to review the full results when they are published to help us continue to prioritise equity in our technology.”
“Uber does not condone discrimination on our platform in any form, whether through algorithms or decisions made by our users,” said an Uber spokesperson. “We commend studies that try to better understand the impact of dynamic pricing so as to better serve communities more equitably. It’s important not to equate correlation for causation and there may be a number of relevant factors that weren’t taken into account for this analysis, such as correlations with land-use/neighborhood patterns, trip purposes, time of day, and other effects.”
I wonder if we’ll be seeing their proprietary algorithms and databases (the latter may matter more than the algorithms themselves) stripped of their protected status. Or perhaps the courts will appoint “special masters” to study the systems and determine why they’re discriminatory.
And then see the companies ordered to “make them right.”
And then see the companies cheat on the test, much like Volkswagen did a few years back on emissions tests.
Or am I too cynical?