Volokh’s Chief Justice Robot

Professor Eugene Volokh at The Volokh Conspiracy has a short post out summarizing an article he’s written for the Duke Law Journal.

Introduction

How might artificial intelligence change judging? IBM’s Watson can beat the top Jeopardy players in answering English-language factual questions. The Watson Debater project is aimed at creating a program that can construct short persuasive arguments. What would happen if an AI program could write legal briefs and judicial opinions?

To be sure, AI legal analysis is in its infancy; prognoses for it must be highly uncertain. Maybe there will never be an AI program that can write a persuasive legal argument of any complexity.

But it may still be interesting to conduct thought experiments, in the tradition of Alan Turing’s famous speculation about artificial intelligence, about what might happen if such a program could be written. Say a program passes a Turing test, meaning that it can converse in a way indistinguishable from a human. Perhaps it can then converse—or even present an extended persuasive argument—in a way indistinguishable from the sort of human we call a “lawyer,” and then perhaps in a way indistinguishable from a judge.

It’s an interesting proposition. I’ve been thinking about Volokh’s summary (I haven’t read the full paper), and I think my criticisms center on his first point of evaluation:

[1.] Evaluate the Result, Not the Process. When we’re asking whether something is intelligent enough to do a certain task, the question shouldn’t be whether we recognize its reasoning processes as intelligent in some inherent sense. Rather, it should be whether the outcome of those processes provides what we need.

At first glance, this seems fairly reasonable. However, I think there are some cracks in this position, and they center not on technical problems (on which I’m not qualified to comment, having taken just one AI course 35 years ago and been an interested reader since), but on civil society.

I think one of the charming aspects of democracy for most folk is that we’re not judged by divine or divinely-anointed creatures (that is, those idiot monarchs and their self-interested minions), but by everyday citizens serving as judge and jury. People is the operative word, because that’s what we are. This comes from the idea, fallacious as it may be[1], that we share a similar theory of mind. That is, we think we understand how our fellow people reason, how they evaluate evidence, their general moral instincts, perhaps even their ethical and moral theories, and the general importance of justice in our culture.

Crucially for Volokh, the odds of his hypothetical project’s complete acceptance by the general public may correlate directly with society’s (or perhaps societies’) willingness to include artificial intelligence entities as part of the human social landscape, versus considering them entities alien to our understanding – that is, entities which do not share our theory of mind.

Volokh’s third point somewhat addresses the issue, even as it’s at odds with his first point, above:

[3.] Use Persuasion as the Criterion for Comparison—for AI Judges as Well as for AI Brief-Writers. Of course, if there is a competition, we need to establish the criteria on which the competitors will be measured. Would we look at which judges’ decisions are most rational? Wisest? Most compassionate?

I want to suggest a simple but encompassing criterion, at least for AI judges’ judgment about law and about the application of law to fact: persuasion. This criterion is particularly apt when evaluating AI brief-writer lawyers. After all, when we hire a lawyer to write a brief, we want the lawyer to persuade—reasonableness, perceived wisdom, and appeals to compassion are effective only insofar as they persuade. But persuasion is also an apt criterion, I will argue, for those lawyers whom we call judges. (The test for evaluation of facts, though, whether by AI judges, AI judicial staff attorneys, or AI jurors, would be different; I discuss that in Part IV.)

Persuasion is, to a great extent, the sharing of reasoning: the chain of logic that proceeds from assumptions and facts to a final conclusion. That chain is, in turn, a reflection, warped in some ways, of how the mind generating the argument actually works. Thus, this third point seems to at least partially contradict his first.

Let me speculate on why Volokh wrote point 1, above, in which he asks that only results be judged, not methods. It is strongly reminiscent of a familiar facet of Machine Learning (ML): the decisions made by ML-based algorithms are often shrouded in mystery. That is, if an ML-based system that decides whether a given applicant should be sold a mortgage were asked how it reached its decision on some particular applicant, yea or nay, it would be unable to explicate that decision. This is a common problem, and not necessarily impossible to fix, but Volokh may be aware of just how hard fixing it is likely to be.
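To make the point concrete, here’s a minimal sketch of that black-box behavior. It assumes scikit-learn, and the applicant features and training data are entirely made up for illustration:

```python
# A minimal sketch of the ML "black box" problem, using scikit-learn.
# The features (income, debt-to-income ratio, years employed) and the
# training data are hypothetical.
from sklearn.ensemble import RandomForestClassifier

# Each row is an applicant: [income, debt-to-income ratio, years employed].
X_train = [
    [85_000, 0.20, 10],
    [42_000, 0.55, 1],
    [63_000, 0.35, 4],
    [30_000, 0.60, 2],
]
y_train = [1, 0, 1, 0]  # 1 = mortgage approved, 0 = denied

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# The model renders a verdict for a new applicant...
applicant = [[55_000, 0.40, 3]]
print(model.predict(applicant))  # yea or nay, with no reasons attached

# ...and the closest it comes to "explaining" itself is an aggregate
# feature-importance score: which inputs mattered on average across all
# of its decisions, not why *this* applicant was approved or denied.
print(model.feature_importances_)
```

Post-hoc explanation tools do exist (per-decision attribution libraries such as SHAP, for instance), but they construct a plausible story about the model from the outside rather than report its actual internal reasoning – which may be exactly the difficulty Volokh is gesturing at.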

But that persuasive element is a key part of analyzing how another entity’s mind works, and of deciding whether it’s compatible with our own.

The lure of the objective and untiring “mind” is real, but the question is whether it’s something we can accept, or whether it’s the notorious Siren song. To the extent that we can accept artificial minds, I have to wonder how many of our own flaws those minds will have to share.

And if those minds do cross the Rubicon from mere machine-learning algorithms to full-fledged Artificial Intelligences, will they remain trustworthy? I’ve discussed the semantic sloppiness of using the term Artificial Intelligence when the algorithm, no matter how sophisticated, exhibits no signs of consciousness or, more importantly, self-interest (for want of a better term).

I suspect that, as assistants to human judges, AI-based programs will be easily accepted, because they’ll remain tools in the hands of humans. But judge and jury are positions of authority and responsibility. Will automating such positions be acceptable to a citizenry accustomed to policing itself?

Time will tell.



1 I suppose President Trump is the outstanding example of our age: someone who does not have a mind congruent with the general theory of mind, as I’ve noted elsewhere. How many of us are well-acquainted with a creature that lies, boasts, and aggrandizes himself at every opportunity? Not many. Still, we like to think that we understand somewhat how our fellow citizens think.
