These Are Not New Problems

Writing in the pages of WaPo, Clare Garvie of Georgetown Law’s Center on Privacy & Technology remarks on the police employment of facial recognition software:

Imagine attending a public gathering — a political rally, an immigration-policy protest or a pro-life march — and police officers walk through the crowds demanding each attendee show identification. You would be justified both in your outrage at this intrusion and in refusing to comply. In this country, a police officer needs to suspect you of committing a crime before stopping you on the street and requiring an answer to the question: “Who are you?”

Face-scanning surveillance does away with this. The technology enables a world where every man, woman and child passing by a camera is scanned, despite no prior suspicion of wrongdoing. But their faces are nonetheless compared against the profiles of criminals and other people wanted by the police. It enables a world where people can be identified and tracked from camera to camera throughout a city — simply because they chose to get a driver’s license.

But, speaking as a software engineer, I see an implicit assumption here: that each face in the crowd is being recognized. That is not necessarily how such a system must work. Imagine – and this is easy for me to imagine – a system programmed only with the faces of those wanted for their alleged association with criminal activities, flagging anyone who matches the parameters set for the scan, while those who do not match are simply ignored.
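For concreteness, here is a minimal sketch of what such a watchlist-only design might look like. The threshold, the watchlist entries, and the assumption that faces arrive as precomputed embedding vectors are all hypothetical choices of mine, not anything Garvie or any vendor has described:

```python
import numpy as np

SIMILARITY_THRESHOLD = 0.8  # hypothetical cutoff for declaring a match


def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))


def scan_frame(face_embeddings, watchlist):
    """Compare each detected face (already reduced to an embedding
    vector by some upstream model) against the watchlist only.

    Faces that match no watchlist entry are discarded on the spot;
    nothing about them is stored, logged, or tracked.
    """
    alerts = []
    for face_vec in face_embeddings:
        for wanted_id, wanted_vec in watchlist.items():
            if cosine(face_vec, wanted_vec) >= SIMILARITY_THRESHOLD:
                alerts.append(wanted_id)
                break  # flag this face and move on to the next one
        # no else branch: non-matching faces simply fall out of scope
    return alerts


# toy usage: two faces in the frame, one of them on the watchlist
watchlist = {"suspect-017": np.array([0.9, 0.1, 0.4])}
frame = [np.array([0.9, 0.1, 0.4]), np.array([0.1, 0.8, 0.2])]
print(scan_frame(frame, watchlist))  # -> ['suspect-017']
```

The design choice that matters is in the loop: the non-matching face is never given an identity at all, so there is nothing to track from camera to camera.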

It’s very easy to imagine a court hearing testimony from a forensic software engineer that a system is, or is not, designed in the above manner, and then admitting or excluding, respectively, evidence derived from the facial recognition system. Why?

Because that’s how “it” works today, where “it” consists of police officers searching for suspects on foot. Those they don’t recognize as the suspects or witnesses of interest are ignored, and probably not recognized at all, unless they happen to be acquainted with the officer. If we’re going to be very technical about it, the software is somewhat better than the police officer at ignoring bystanders; but since the officer is unlikely to use that incidental recognition to track the movements of honest citizens, the point is exceedingly fine and can be ignored.

But, as she mentions, and as my long-time readers know, China is using this sort of software to track its entire population, and for advocates of freedom, it’s a nightmare situation. One wonders how the Chinese feel about it.

Garvie then indulges in an incomplete observation:

And what happens if a system like this gets it wrong? A mistake by a video-based surveillance system may mean an innocent person is followed, investigated, and maybe even arrested and charged for a crime he or she didn’t commit. A mistake by a face-scanning surveillance system on a body camera could be lethal. An officer, alerted to a potential threat to public safety or to himself, must, in an instant, decide whether to draw his weapon. A false alert places an innocent person in those crosshairs.

Facial-recognition technology advances by the day, but problems with accuracy and misidentifications persist, especially when the systems must contend with poor-quality images — such as from surveillance cameras.

Sure. And how does this differ from a human eyewitness? It doesn’t. Garvie should acknowledge the point and offer an analysis that compares the consequences of mistakes by humans and by software, along with their respective rates of false identification.
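To sketch the kind of comparison I mean, here is a back-of-the-envelope calculation. The crowd size, hit rate, and false-positive rate are invented numbers for illustration, not measurements of any real system or of human witnesses:

```python
# Back-of-the-envelope accounting of alerts from one pass over a crowd,
# using invented rates purely for illustration.
def expected_alerts(faces_scanned, wanted_in_crowd, hit_rate, false_positive_rate):
    """Return (expected true alerts, expected false alerts)."""
    true_alerts = wanted_in_crowd * hit_rate
    false_alerts = (faces_scanned - wanted_in_crowd) * false_positive_rate
    return true_alerts, false_alerts


# hypothetical numbers: a 10,000-person crowd containing 2 wanted persons
print(expected_alerts(10_000, 2, hit_rate=0.95, false_positive_rate=0.001))
# -> (1.9, 9.998): even a 0.1% false-positive rate produces several times
#    more false alerts than true ones in a large crowd.
```

The same accounting could, in principle, be applied to a human witness or officer; without it, pointing at software misidentifications alone tells us nothing about which is worse.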

As facial recognition systems come closer and closer to being autonomous artificial intelligence systems, important questions will be raised about the transition from being a hammer to being an intelligent agent. But I think the points made here are tangential to those important questions, which are not fully developed in my mind.
