
Justice Peeks

Aug 2016

DALL-E cover image 'Justitia wearing blindfold holding balance scales and a sword, sculpture, digital art' to add some colour to an otherwise quite text-heavy post

Justice should be administered objectively, without fear or favour, irrespective of wealth, status, or identity: completely impartial, blind justice. These tenets are enshrined in the law of the land.

This is an inspirational ideal, but can we expect every human being operating within the legal system to be so impartial? We are immersed in our individual experiences, emotions, and inevitable biases. We create the law to celebrate our best characteristics and protect against our worst, but being truly impartial is extremely difficult despite our best intentions.

Legal textbooks are built around narratives and stories, and lawyers learn to use casuistic reasoning to deliver opinions in the most compelling way, managing and manipulating witnesses, the court, and their clients alike. In my view, the law should not be influenced by the biases of anyone involved in a case, whether those biases stem from race, gender stereotypes, accent, status, fluency of speech, or anything else.

Purpose of Judicial Processes in Law

Are Judges to look solely to the language of legislation, then logically deduce its application in a simple syllogistic fashion, as legal formalists/positivists suggest? Or are Judges instead to make decisions concerning seemingly incomputable judicial notions of public good and common sense, as legal pragmatists/realists prefer?

First, let us consider the concept of formalism, which has been described in many ways, but with the greatest clarity by Roberto Unger in his critique of it. His description of the position he opposes is insightful, albeit disparaging.

Unger initially describes Formalism as “striving for a law that is general, autonomous, public and positive”. Some “reformed formalists”, such as Leiter (1999), accept this premise of formalism but condemn the syllogistic form of “Vulgar Formalism”, which Unger goes on to describe as follows: “A system of rules is formal so far as it allows its interpreters to justify themselves by reference to the rules themselves, without regard to any other arguments of fairness or utility”.

Leiter condemns such a stance because “legal reasoning is not mechanical, [as] it demands the identification of valid sources of law, the interpretation of those sources, the distinguishing of sources that are relevant and irrelevant”. I would like to argue that legal reasoning is mechanical, precisely because it demands those very same things.

Legal rationale is an immanent operation of law: the implications of the law can be described from a viewpoint composed entirely of the law itself. This definition fits perfectly with the application of deductive systems, and such reasoning is therefore perfectly attainable by purely mechanical systems. So if this is the model of justice we are aiming for, then an autonomous agent would make the perfect Judge.

This presents Judges simply as “living oracles” of the law, whose function is to understand and apply the maximum relevant amount of legislation. It is the role of government to make law, not the role of a Judge. If a Judge did contrive a new rule of law, then he would be guilty of presuming too much of his position.

Currently, the decisions a Judge makes act as law - in particular, “common law” - because they provide precedents with which the defence and prosecution build future cases, which in turn gives further weight to the precedent, even if the original decision was not explicit within the legislation itself. Should such “common law” creation be officially condoned? That debate is beyond the scope of this post. Given the use of “common law”, case-based reasoning must form an integral part of the programming of artificial agents. However, a more formal logical reasoning based on the legislation, rather than on previous opinions, should represent the core of the agent’s decision-making.

Critiquing Formalism

Let us return to Unger’s critique of formalism: if an artificial agent is to replace the judiciary, it needs a sense of the public good that is independent of the law. Given the currently limited capability of AI to understand abstract concepts such as the public good, we may be compelled to say that human Judges are a necessary resource. This forms the basis of Unger’s most damning condemnation of formalism, which he calls the concept of equity: “the intuitive sense of justice in a particular case”.

The necessity for such reasoning is based upon the idea that in any case where a convoluted legal question arises, any argument supporting one answer from a legislative standpoint can be matched with an opposing doctrine. Therefore since anything can be ‘proved’, nothing can.

The problem we face here is that human beings seem to have as clear an idea of equity as machines do - which is to say, no clear idea at all. It is not unusual for human Judges to reach different conclusions when given the same evidence and legislation; indeed, it should be expected (Council of Europe, 1993; Hutton, Paterson, Tata, & Wilson, 1996).

Human Inconsistencies

If this notion of “equity” is what separates human agents from artificial agents, could its importance be outweighed by the implicit biases that Judges exhibit, which lead them to reach different decisions on the same case? Judges typically say that every case is different, but it must also be true that every Judge is different too - so different that there is a need for sentencing guidelines to ensure “greater consistency in sentencing” (The Sentencing Council for England and Wales, 2016).

In addition, there is further evidence of judicial inconsistency in sentencing in the need for some cases to be referred to higher courts of appeal. Consider the percentage of tried cases filed for appeal (40.9%) compared to untried cases (19.0%) for supposedly definitive judgments (Eisenberg, 2004).

Many psychological factors can influence a human Judge’s opinions. One experiment (Danziger, 2011) indicates that something as simple as the time since the last meal break can have a huge impact on decision-making, as shown in Figure 3: there was a sudden jump in parole acceptance rates just after a short meal break.

According to this research, decision fatigue could be overcome by restricting a Judge’s shift to half a day, interspersed with frequent breaks. This would result in a better and more consistent form of justice, but the downside is that there would be less of it. Even then, such restrictions would not eliminate the numerous other psychological factors (Dietrich, 2010) that influence our decision-making processes, most of which we are not even aware of. These factors include experience (Juliusson, Karlsson, & Gärling, 2005), cognitive biases (Stanovich & West, 2008), age and individual differences (Bruin, Parker, & Fischhoff, 2007), belief in personal relevance (Acevedo & Krueger, 2004), and escalation of commitment.

Design Objectives

… Skipped …

Legislation Comprehension

… Skipped …

First-order Logic

… Skipped …

There are No Booleans in Life

The most damning problem with first-order logic as applied to legislation is that it is innately binary. There are only absolutes (1 or 0, true or false) in first-order logic: either something is true, because it can be proved by inference rules from the axioms, or it is not true, because it cannot.

The British Nationality Act 1981, which we have been using as a foil for first-order logic, is an excellent example of where such reasoning is plausible. You either are a British national or you are not, and the facts of the case - the birthdate of the individual and other such inputs - are reasonable items to take as infallible facts.

This is also the kind of automation that the Government Digital Service is currently using to transcode plain English into computational rules, which are used to provide automated approval of various licences and benefits. Public services such as registering to vote, renewing patents, and student finance are all examples where the inputs are sensibly taken as true and there is only a true or false outcome.
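
To make the contrast concrete, here is a minimal sketch, in Python, of the kind of strictly Boolean rule this style of automation relies on. The predicate names and the simplified condition are hypothetical illustrations rather than the actual wording of the British Nationality Act 1981; the point is only that every input is taken as an infallible fact and the outcome can only be true or false.

# A hypothetical, heavily simplified Horn-clause-style rule: the conclusion holds
# exactly when every condition in the body holds.
def acquires_citizenship_by_birth(born_in_uk: bool,
                                  born_on_or_after_commencement: bool,
                                  parent_is_citizen_or_settled: bool) -> bool:
    return (born_in_uk
            and born_on_or_after_commencement
            and parent_is_citizen_or_settled)

print(acquires_citizenship_by_birth(True, True, False))  # False - no shades of grey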

Computers are digital.

The real world is analog.

All computational reasoning is produced by combinations of 1s and 0s flowing through a series of logic gates, and this Boolean logic is also how first-order logic works. Someone is either British or not British; there are no dual-citizen or second-generation distinctions in this approach. It is all black and white, with no shades of grey.

This is perfect for some service-based points of law, where we can trust the input received to be true, trust our inferences to be infallible, and have no doubts about the outcome.

Some points of law, however, cannot be settled in absolute Boolean terms. Instead there are degrees of truth: some things are more likely to be true than others, and there are open-textured terms that are vague in their meaning. For example, what would happen if we asked a computer to evaluate imprecise propositions such as “T was a little after 4 pm” or “Y demonstrates X’s partial responsibility”?
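
By contrast, a fuzzy membership function assigns a degree of truth rather than a verdict. Below is a small Python sketch of how the open-textured phrase “a little after 4 pm” might be graded; the breakpoints (fully true up to 4:30 pm, fading out by 5 pm) are assumptions chosen purely for illustration, not values taken from any case or domain expert.

def a_little_after_4pm(hour: float) -> float:
    """Degree of truth in [0, 1], taking the time as a decimal hour (16.5 = 4:30 pm)."""
    if hour <= 16.0:       # before 4 pm: not "after 4 pm" at all
        return 0.0
    if hour <= 16.5:       # up to 4:30 pm: fully "a little after"
        return 1.0
    if hour <= 17.0:       # fades out linearly towards 5 pm
        return (17.0 - hour) / 0.5
    return 0.0

print(a_little_after_4pm(16.25))  # 1.0
print(a_little_after_4pm(16.75))  # 0.5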

… Skipped …

Fuzzy Logic

… Skipped …

Practical Application

We have thus far shown how an artificial agent can:

  1. Comprehend the rules in legislation using predicate logic, having attempted to use Horn clauses for their unique computational properties.
  2. Weight the value of evidence and interpret open-textured terms using fuzzy sets and membership functions based on domain-expert advice.
  3. Combine evidence using legislation and common-sense rules, applying fuzzy logic to produce a non-binary truth-valued result (sketched after this list).
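
The third step can be illustrated with the standard fuzzy connectives: minimum for conjunction, maximum for disjunction, and complement for negation. The sketch below is a toy Python example; the predicate names and the degrees of truth assigned to each piece of evidence are invented purely for illustration, not drawn from any real case.

# Standard (Zadeh) fuzzy connectives applied to hypothetical degrees of truth.
def f_and(*degrees: float) -> float:
    return min(degrees)       # a conjunction is only as true as its weakest part

def f_or(*degrees: float) -> float:
    return max(degrees)       # a disjunction is as true as its strongest part

def f_not(degree: float) -> float:
    return 1.0 - degree       # fuzzy negation

# Invented evidence degrees, for illustration only.
present_at_scene   = 0.9
acted_deliberately = 0.6
acted_in_defence   = 0.7

print(f_and(present_at_scene, acted_deliberately, f_not(acted_in_defence)))  # 0.3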

We implement this naively with fuzzy Horn-clause logic, based upon data extracted from prior cases using the Natural Language Toolkit.

% Probabilistic facts: independent prior probabilities for each possible cause.
0.2::burglary.
0.01::earthquake.
0.20::fault.

% Noise parameters for the observation: Mary may mishear, or miss a real alarm.
0.01::p_mistaken.
0.50::p_heard_given_alarm.

% The alarm sounds if any one of the causes occurs.
alarm :- earthquake.
alarm :- fault.
alarm :- burglary.

% Mary hears the alarm if it sounds and she catches it, or if she is simply mistaken.
hears_alarm(mary) :- alarm, p_heard_given_alarm.
hears_alarm(mary) :- p_mistaken.

% Condition on the observed evidence, then query the posterior probability of burglary.
evidence(hears_alarm(mary),true).

query(burglary).
% 0.53
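
The posterior reported above can be sanity-checked without any logic-programming machinery. The Python sketch below treats each probabilistic fact as an independent coin flip, enumerates every possible world, and conditions on the evidence; it reproduces the 0.53 figure. It is only a check on the arithmetic, not part of the implementation itself.

from itertools import product

# Independent probabilistic facts and their prior probabilities.
facts = {"burglary": 0.2, "earthquake": 0.01, "fault": 0.20,
         "p_mistaken": 0.01, "p_heard_given_alarm": 0.50}

p_evidence = 0.0   # P(hears_alarm(mary))
p_joint    = 0.0   # P(hears_alarm(mary), burglary)

for values in product([True, False], repeat=len(facts)):
    world = dict(zip(facts, values))
    weight = 1.0
    for name, prob in facts.items():
        weight *= prob if world[name] else 1.0 - prob

    alarm = world["earthquake"] or world["fault"] or world["burglary"]
    hears = (alarm and world["p_heard_given_alarm"]) or world["p_mistaken"]

    if hears:                       # evidence(hears_alarm(mary), true)
        p_evidence += weight
        if world["burglary"]:
            p_joint += weight

print(round(p_joint / p_evidence, 2))   # 0.53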

There are two main capabilities that an artificial agent is still missing before it can improve the justice system:

  1. Remove the necessity for domain-expert advice when defining membership functions.
  2. Increase the human legibility of the system by providing a completely analyzable account of its reasoning.

Extracting set membership likelihood by data mining

… Skipped …

Understanding Machines

… Skipped …

The Future

Yet despite all this, there is no doubt a voice in your head that calls for a human Judge, for that human element.

I certainly hear it. What, then, is this human element that has so far evaded our reasoning? We understand logically that an automated agent would avoid some of our human biases and inconsistencies, yet there is an emotional resistance that makes us anxious about such a world.

Part of this, I believe, comes from a misunderstanding of the purpose of Judges. They are there to give an impartial decision, not to be swayed by anything other than the facts presented in the case before them. Consider Helen Titchener’s abuse storyline (The Archers, 2016). The listener has spent the best part of two years listening to her husband Rob’s behaviour escalate from emotional control to emotional abuse to sexual assault. The listener has a fly-on-the-wall perspective on all the circumstances leading up to the stabbing of her husband, and has the greatest sympathy for her actions because they know everything that led up to them.

A Judge - or an artificial agent - however, cannot take that into account unless there are facts in evidence to prove it (short of a literal bug on the wall). The court will try her for stabbing a man based on the evidence. This doesn’t seem “fair” or “just”, but that is precisely what it is. The last thing we want is for our Judges to be making decisions devoid of evidence, based solely on emotion.

Another part of this desire for human influence is an “us vs them” mentality. Artificial agents currently reside in what I would call an “uncannily intellectual valley”: they can make useful autonomous decisions based on pure reason, yet we don’t trust them to do so, possibly in part because the media we consume paints a picture of malevolent artificial consciousness rather than of an impartial artificial agent.

Imagine we are the defendant on trial. In this case, we don’t want a cold logical argument that condemns us, however equally and fairly, to our - possibly - just rewards. Instead, we want the warmth of an understanding - and biased - Judge who can see the remorse on our face, one who identifies with our situation and will therefore be kinder towards us.

Unbiased justice and human judges are not compatible.