Quantification in Criminal Courts

Codified Justice or Algorithmic Unfairness?

Md. Abdul Malek
Towards Data Science

--

The intrusion of predictive analytics and risk assessment tools into judicial settings is exemplified by the implementation of ‘COMPAS’ in the USA (in Wisconsin, back in 2012, after years of development since the 1990s). Such algorithmic software uses machine-learning techniques to find patterns or correlations in vast quantities of data. Judges use the resulting scores to assess an offender’s likelihood of recidivism when making parole, probation, bail, and sentencing decisions: who is likely to re-offend at some point in the future, or to fail to appear at a court hearing.
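
To make the mechanics concrete, below is a minimal, purely illustrative sketch of how such a recidivism risk score might be produced with off-the-shelf machine learning; the features, data, and model are hypothetical and do not reflect COMPAS’s proprietary methodology.

```python
# Illustrative only: a toy recidivism risk model, NOT the COMPAS methodology.
# Features, data, and labels below are entirely hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Hypothetical historical records: [age, prior_arrests, age_at_first_arrest]
X_train = np.array([
    [22, 4, 17],
    [45, 0, 40],
    [31, 2, 25],
    [19, 6, 15],
    [52, 1, 30],
    [27, 3, 21],
])
# 1 = re-offended within two years, 0 = did not (hypothetical labels)
y_train = np.array([1, 0, 1, 1, 0, 0])

model = LogisticRegression().fit(X_train, y_train)

# Score a new defendant and bin the probability into a 1-10 "decile" score,
# mimicking the kind of summary a judge might see on a pre-sentence report.
defendant = np.array([[24, 3, 18]])
probability = model.predict_proba(defendant)[0, 1]
decile_score = int(np.clip(np.ceil(probability * 10), 1, 10))
print(f"Estimated re-offence probability: {probability:.2f}")
print(f"Risk score (1-10): {decile_score}")
```

Real tools are trained on far larger historical datasets, but the output a judge sees is typically of this general form: a probability collapsed into a decile or a low/medium/high category.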

Since the justice system already has long-standing problems that require solutions, recidivism risk scales, as the output of powerful high-tech tools, are being embraced ‘as a potent, pervasive, unstoppable force’. In fact, such risk assessment algorithms are repurposed (Završnik, 2019)[1] and intended to transform traditional bail and sentencing systems and to minimize ‘human biases that lead to unequal application of laws’ (see Samuel Greengard, 2020). No doubt, such a quantified risk-scoring method, meant to enhance the efficiency and efficacy of courts’ decision-making, marks a vivid shift in the criminal justice paradigm in the name of predictive justice.

But the existing literature and studies suggest that these tools instead raise robust concerns, creating a system of codified justice built on “biased data produced through histories of exclusion and discrimination”[2] (see, e.g., ProPublica, 2016; Dressel & Farid, 2018; Deeks, 2019). Accordingly, such quantification (Angèle Christin, 2015) in the justice sector is alleged to be both highly impactful and “constitutionally, technically, and morally troubling” (Starr, 2014), which would eventually promote unfair practices in the decision-making process. Now, let us see how and why such a method raises concerns over the effects of quantification and unfair practices in courts.

Codified Justice and Algorithmic Unfairness:

The new trend of ‘codified justice’ (see, e.g., Richard M. Re et al., 2019) favors uniformity and standardization over discretion. Proponents of criminal justice algorithms argue that they are preferable because they can increase courts’ efficiency, accessibility, and consistency by technological means, and also reduce judges’ bias, discretion, and arbitrariness. Quantification and standardization in justice settings thus persuade judges to promote the cause of codified justice, even though they have the potential to limit judicial discretion. In delineating the benefits of limiting discretion, it may be argued that AI algorithms are presented as an easy solution for making judicial decisions more consistent and efficient, one that would ‘help to determine judges’ and prosecutors’ accountability for their decisions’.

Photo: Niu/Unsplash

But a good deal of cutting-edge research has already shown that the lack of algorithmic transparency and explainability is seriously consequential in the justice context, and undesirably undermines the principles of due process and fairness in court practice. In any given justice system, there is no doubt that the principles of equitable and individualized justice, and of discretionary moral judgment, are of paramount consideration.

Accordingly, the underlying values of the judicial system would be undermined, since algorithmic adjudication “tends to strengthen codified justice at the expense of equitable justice” (Richard M. Re et al., 2019). That is why, while such judicial use of the power of quantification and smart computation arguably shifts judges’ attitudes and courts’ practices, the ideal of equitable justice teaches us to minimize those harms.

However, exponents of predictive analytics also posit that there is still room to sufficiently individualize a criminal case when dealing with bail, sentencing, and probation. This is because the recidivism risk score need not be the sole basis for a decision; courts should still have the discretion and information necessary to disagree with the assessment when appropriate.[3] But opponents find the problem elsewhere: ‘such cautions may not work when they favor the quantity of information provided to sentencing courts over the quality of that information’, which could eventually result in a ‘more severe sentence’ (Christin, Rosenblat & Boyd, 2015) based on an ‘unspoken clinical prediction’ (Hyatt, Chanenson, & Bergstrom, 2011).

Furthermore, these scores rely on group data, so they cannot identify specific high-risk individuals; they only express a probability derived from how similar people behaved in the past. Additionally, because such tools lack transparency and explainability, little is known about the efficacy of these interventions in judicial forums.
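
As a rough illustration of this group-versus-individual point (with invented numbers), the score attached to a defendant is really a statement about a reference group rather than about that person:

```python
# Hypothetical numbers, for illustration only.
# Suppose 100 past defendants shared the same risk profile as this defendant,
# and 60 of them re-offended within two years.
group_size = 100
group_reoffended = 60

risk_score = group_reoffended / group_size  # 0.6

# The score says "people like this defendant re-offended 60% of the time";
# it cannot say whether THIS defendant will be one of the 60 or one of the 40.
print(f"Group-based risk score: {risk_score:.0%}")
```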

However, in the context of fairness and equal opportunity in justice systems, it can also be argued that algorithms intelligible only to experts cannot offer a level playing field to all their users and subjects in clear terms, which justice institutions, particularly courts, are pledged to provide. Likewise, it may be observed that in issuing these cautions (of unfairness and illegitimacy), ‘the Loomis court made clear its desire to instill both general skepticism about the tool’s accuracy and skepticism about the tool’s assessment of risks posed by minority offenders’.

In another sense, it is aptly argued that machine-learning algorithms cannot be reconciled with communicative theories of punishment.[4] But such a narrative is not wholly applicable to pre-trial bail or to predictive analytics in policing, which occur before guilt is determined (Chiao, 2019). It is also aptly argued that risk assessment does not per se impair the communicative potential of punishment, and would not necessarily be incompatible with Morris’s theory of ‘limiting retributivism’ (Garrett and Monahan, 2019).

Then again, it is undeniable that the lofty considerations of retribution, deterrence, and rehabilitation are not embedded in the current versions of these algorithms (Angèle Christin et al., 2015). Besides, risk assessment does not sufficiently incorporate causation; instead it ‘emphasizes one major justification to the detriment of the others: incapacitation’ (Harcourt, 2005). Moreover, “there is no persuasive evidence that recidivism assessment tools outperform judges’ informal predictions” (individual clinical judgment to assess risk) “or are less discriminatory alternative instruments” (Starr, 2014). Hence, it remains the case that this kind of unfairness is especially ignoble in the justice context.

Things to consider:

In conclusion, it is germane to refer to the Electronic Frontier Foundation’s (EFF) proposal to impose restrictions on the use of scoring systems. It vividly underscores that an actuarial tool should never be the deciding factor in a decision to detain an individual, for such tools can replicate the same sort of outcomes as existing systems that rely on human judgment, and even make new, unexpected errors (Jamie Williams, 2018). Since justice systems are especially sensitive and must be fair and trustworthy, any AI methods are expected to ‘do more than eliminate bias; they also explain their results, interpret them for users, and provide transparency in how results are arrived at’ (Colin Johnson, 2018).

In other words, any defendant should have an opportunity to see and question the ‘data’ used to train the algorithm and the ‘weight’ assigned to each input factor, not only the source code (see Chander, 2017). In summation, fairness would also be advanced and promoted in court proceedings if there were transparency, explainability, and interpretability in the computational methods in use. However, whether algorithms should be used to arbitrate fairness in court decisions is still a complicated question. Thus, there remains the greater question of whether they reduce existing inequities or make them worse.[5]
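
As a minimal sketch of what such disclosure could look like, assuming, purely for illustration, a simple linear risk model with hypothetical weights, the per-factor weights and their contributions to one defendant’s score can be laid out explicitly:

```python
# Illustrative only: hypothetical weights for a simple linear risk model,
# laid out so a defendant could see how each factor moved their score.
import numpy as np

feature_names = ["age", "prior_arrests", "age_at_first_arrest"]
weights = np.array([-0.03, 0.45, -0.08])   # hypothetical learned weights
intercept = -0.20                          # hypothetical baseline log-odds

defendant = np.array([24, 3, 18])          # hypothetical input factors
contributions = weights * defendant

print(f"Baseline (intercept): {intercept:+.3f}")
for name, weight, contrib in zip(feature_names, weights, contributions):
    print(f"{name:>20}: weight {weight:+.3f}, contribution {contrib:+.3f}")

# The final log-odds is the intercept plus the per-factor contributions;
# the risk probability follows from the logistic function.
log_odds = intercept + contributions.sum()
probability = 1 / (1 + np.exp(-log_odds))
print(f"Total log-odds: {log_odds:+.3f}  ->  risk probability {probability:.2f}")
```

A disclosure of this kind lets a defendant challenge not only which factors were used, but how heavily each one counted against them.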

Notes & References:

[1] See also Aleš Završnik, Algorithmic justice: Algorithms and big data in criminal justice settings, European Journal of Criminology 1–20 (2019). [This paper notes the claim that “AI tools will achieve more with less, and vaporize biases and heuristics inherent in human judgment and reasoning”, which will, in turn, increase the legitimacy of criminal justice agencies and confine the infliction of punishment to ‘pure’ scientific method and ‘reason’.]

[2] See Ruha Benjamin, Race After Technology: Abolitionist Tools for the New Jim Code (2019) [especially on bias and default discrimination].

[3] State v. Loomis, 881 N.W.2d 749, 764–65 (Wis. 2016), petition for cert. filed, No. 16-6387 (U.S. Oct. 5, 2016).

[4] See Antony Duff, The Realm of Criminal Law (Oxford University Press 2018).

[5] Karen Hao and Jonathan Stray, Can you make AI fairer than a judge? Play our courtroom algorithm game, MIT Technology Review (2019). [It is vividly argued that since the notion of fairness means different things in different contexts, the same holds in mathematical terms too. Hao and Stray aptly instantiate two definitions of fairness: keeping the tool’s error rates comparable between groups, and treating people with the same risk scores in the same way. They then argue that ‘both of these definitions are totally defensible! But satisfying both at the same time is impossible’.]
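
A small hypothetical simulation (invented base rates, a calibrated score, and a single detention threshold) can illustrate the tension Hao and Stray describe: when the underlying re-offence rates of two groups differ, treating equal scores equally still yields unequal error rates.

```python
# Hypothetical illustration of the fairness trade-off described in [5].
# Two groups with different (invented) base rates of re-offence; the score
# tracks true risk, and everyone with the same score is treated alike.
import numpy as np

rng = np.random.default_rng(0)

def group_false_positive_rate(base_rate, n=100_000):
    # Draw individual risks around the group's base rate, then draw outcomes.
    risk = np.clip(rng.normal(base_rate, 0.15, n), 0.01, 0.99)
    reoffend = rng.random(n) < risk          # actual outcomes
    detain = risk >= 0.5                     # same threshold for everyone
    # Share of people detained among those who would not have re-offended.
    return np.mean(detain & ~reoffend) / np.mean(~reoffend)

fpr_a = group_false_positive_rate(base_rate=0.30)   # hypothetical group A
fpr_b = group_false_positive_rate(base_rate=0.50)   # hypothetical group B

print(f"False positive rate, group A: {fpr_a:.2f}")
print(f"False positive rate, group B: {fpr_b:.2f}")
# Despite identical treatment of identical scores, the group with the higher
# base rate ends up with the higher false positive rate.
```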
