Scoooooooooooooooor- No, The Red Herring Blocks The Shot!

Joshua Howego of New Scientist (12 January 2019, paywall) reports on what may be a case of picking the wrong metric for measuring progress toward success, a problem I expect to become common as “AI”, or production-rule systems, are deployed more and more:

A police force in the UK is using an algorithm to help decide which crimes are solvable and should be investigated by officers. As a result, the force trialling it now investigates roughly half as many reported assaults and public order offences.

This saves time and money, but some have raised concerns that the algorithm could bake in human biases and lead to some solvable cases being ignored. The tool is currently only used for assessing assaults and public order offences, but may be extended to other crimes in the future.

When a crime is reported to police, an officer is normally sent to the scene to find out basic facts. An arrest can be made straight away, but in the majority of cases police officers use their experience to decide whether a case is investigated further. However, due to changes in the way crimes are recorded over the past few years, police are dealing with significantly more cases.

It’s a bit of a relief that they’ve severely limited the scope.

Setting the goals of these algorithms is perhaps the most important part of the development and implementation process, isn’t it? Take the above example: is our goal simply to increase our percentage of solved crimes by discarding those crimes that are hard to solve?

What if those hard crimes were all the murders in the city?

Residents aren’t simply counting crimes solved, because not all crimes weigh the same to them. Jaywalking, for example, has real consequences for traffic flow, but people don’t care about it as a crime, unless they’re an environmentalist who believes cars have become the illegitimate dominant life form of American cities.

If a series of high-profile murders occurs, residents are frightened. The fact that those cases are hard to solve should not militate against working them.

In the end, this is a scarce-resource allocation problem, isn’t it? First we have to understand the goal of the system, which might be best stated as a calm populace. Then we have to understand what alarms residents versus what they can put up with, and recognize that this may change over time. Only then can an effective resource allocation system be developed. The system described above sounds a bit half-assed, doesn’t it? Or at least not grounded in reality.
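To make the difference between the two metrics concrete, here’s a minimal sketch in Python. Everything in it is a hypothetical illustration of my argument, not anything from the actual tool: the case types, the solvability and public-alarm scores, and the alarm weight are all invented for the example.

```python
from dataclasses import dataclass

@dataclass
class Case:
    kind: str
    solvability: float   # estimated chance of solving, 0..1 (hypothetical)
    public_alarm: float  # how much the crime frightens residents, 0..1 (hypothetical)

# Hypothetical reported cases, purely for illustration.
cases = [
    Case("jaywalking", solvability=0.95, public_alarm=0.05),
    Case("public order offence", solvability=0.60, public_alarm=0.30),
    Case("assault", solvability=0.40, public_alarm=0.70),
    Case("high-profile murder", solvability=0.15, public_alarm=1.00),
]

# Metric 1: maximize the percentage of crimes solved.
# Ranking purely by solvability pushes the murder to the bottom of the queue.
by_solvability = sorted(cases, key=lambda c: c.solvability, reverse=True)

# Metric 2: aim for a calm populace. Weight each case by how much
# ignoring it would alarm residents, and let that term dominate.
# The weight itself would have to be revisited as public concern shifts.
def priority(c: Case, alarm_weight: float = 3.0) -> float:
    return alarm_weight * c.public_alarm + c.solvability

by_priority = sorted(cases, key=priority, reverse=True)

print([c.kind for c in by_solvability])  # murder ranked last
print([c.kind for c in by_priority])     # murder ranked first
```

Both rankings are “optimal” for their metric; only the second one matches what residents actually want from their police force. That’s the whole point: the hard part isn’t building the ranking, it’s picking the objective.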
