Legal Certainty and the Possibility of Computer Decision Making in the Courtroom

Written by Viviane Lindenbergh (2018), as a Law Bachelor’s thesis at VU University Amsterdam.

Towards Data Science

--

Will computers take over our jobs? During the industrial revolution, many feared that automation would result in mass unemployment in industries that relied on manual labour. The development of more sophisticated robotics and artificial intelligence brings about a similar discussion, only now intellectual jobs are at risk as well. This includes many professions in the legal industry, like lawyers and judges (Vermeulen & Fenwick, 2017). However, the Dutch judicial system seems confident that judges will not be replaced by computers anytime soon. In 2017, the Council for the Judiciary conducted an experiment together with LexIQ: two cases were assessed by a judge and by a computer program, and in both cases, the computer came to a less satisfactory conclusion than the judge (De Rechtspraak, 2017b). Still, it seems inevitable that AI will become more and more important in the future. The Dutch Public Prosecution Service uses the software BOS-Schade to help assess the amount of damages in less serious criminal cases, a basic form of computer assistance that is already in use (Openbaar Ministerie, accessed 11 June 2018). On the other hand, the fear remains that the increased use of automated systems can violate people’s rights. See for example art. 22 of the General Data Protection Regulation, which recently entered into force: a person has the right “not to be subject to a decision based solely on automated processing” (General Data Protection Regulation, art. 22).

An important underlying value in the Dutch legal system is legal certainty. It holds that the actions of the government should be predictable. However, this does not mean that judges always have to adhere to the exact letter of the law. Sometimes, lawmakers intentionally leave room for interpretation or even allow judges to disregard the letter of the law in order to ensure a reasonable and fair outcome.

An important advantage of computers is that they seem to be able to make neutral decisions. This means that from the point of view of legal certainty, decisions by computers would be preferable to decisions by human judges. The main question of this thesis is: “From the perspective of legal certainty, to what extent should computers make legal decisions in the Dutch legal system?”. To answer this, some key questions will be: do computers truly make judicial decisions more certain for citizens? And is legal certainty always desirable?

This thesis is structured as follows. I will first look at some of the ways that computers are already being used in the judicial system. Next, I will look at an important feature of legal certainty: consistent decision making. Then, I will look at a more practical aspect: biases can undermine legal certainty, even if these biases are applied consistently. After that, I will look at a more subjective aspect of legal certainty: transparency of decision making in the eyes of citizens. For each topic, I will try to determine the influence that computers could have on that aspect of legal certainty. Finally, I will give a conclusion and refer to possible future research.

1. Current Status

1.1 Decision making and computer systems

The goal of a judge depends on the type of case they are dealing with. In criminal law, the goal is finding the truth, and then finding an appropriate response to it. In civil law, on the other hand, the goal is to settle a dispute. This will often involve finding out what really happened, but it is also possible for parties to construct their own legal reality: the judge’s task is to resolve a conflict based on the facts brought forward by both parties, not to find out whether undisputed claims are factual. The ‘truth’ is not something that can be determined with certainty. In general, a judge will assume certain facts to be true for the sake of the case and base a decision on those facts.

There are many different ways in which computers could help in making legal decisions. The most important distinction concerns the extent to which the computer takes over the decision making. Noortwijk & De Mulder describe three gradations of computer systems in the legal system. Legal tech 1.0 comprises systems intended merely to help humans make decisions; an example is a linked data system used for legal research. Legal tech 2.0 takes over a large part of the decision making process, while humans still have the final say; an example would be the automated drafting of contracts. Finally, legal tech 3.0 takes over the process completely, as systems make decisions autonomously, without human assistance (Noortwijk & De Mulder, 2018, para. 5).

This distinction does not indicate exactly what method is used to achieve the goal. An obvious candidate would be big data analysis, such as data mining. By using data mining algorithms, it is possible to look for patterns in vast amounts of data (Calders & Custers, 2013, p. 28, 32).
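To make the idea of pattern mining on case data concrete, here is a minimal, hypothetical sketch: counting outcome frequencies per feature value to surface regularities in a set of case records. The records, feature names and values are invented for illustration only and do not reflect any real dataset or system.

```python
from collections import defaultdict

# Invented toy case records; in reality these would come from digitized case files.
cases = [
    {"offence": "theft", "prior": True,  "convicted": True},
    {"offence": "theft", "prior": False, "convicted": False},
    {"offence": "fraud", "prior": True,  "convicted": True},
    {"offence": "fraud", "prior": True,  "convicted": True},
    {"offence": "theft", "prior": True,  "convicted": True},
]

def conviction_rate_by(cases, feature):
    """Fraction of convictions for each value of `feature`."""
    totals, hits = defaultdict(int), defaultdict(int)
    for c in cases:
        totals[c[feature]] += 1
        hits[c[feature]] += c["convicted"]
    return {v: hits[v] / totals[v] for v in totals}

# A 'pattern' here is simply a regularity in outcomes per feature value.
rates = conviction_rate_by(cases, "prior")
```

Real data mining systems use far more sophisticated algorithms, but the principle is the same: regularities are extracted from the data rather than programmed in by hand.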

1.2 Examples

The Dutch judicial system is already experimenting with all kinds of digital technology. Litigating digitally is gradually being made mandatory, and there was even an experiment done recently that compared the judicial decision of a computer to the decision of a human judge (Heemskerk & Teuben, 2017, chapter 10; De Rechtspraak 2017b). In a report, the Council for the Judiciary stated the importance of experimenting with ‘smart technology’, albeit carefully and on a small scale (De Rechtspraak, 2017a, p.44–45).

Outside of the Netherlands, legal tech is used as well. For example, in the US the risk-assessment tool COMPAS is used, among other things, as an aid for sentencing in criminal cases. Variables such as substance abuse, criminal attitude and prior criminal involvement are used to determine whether a person is high or low risk. A judge can then use this score in sentencing (Equivant, 2017, chapter 4). The usage of this tool in sentencing is not entirely uncontroversial, as it might be biased against black defendants (Angwin, Larson, Mattu & Kirchner, 2016).

In the Netherlands, big data is not yet being used for decision making by judges, but more and more data is being collected in digital systems, which lays the foundation for big data analysis in the future (De Rechtspraak, 2017a, p. 44–45). Apart from the actual implementation of digital systems, research is being done into predicting legal decisions. Researchers from the UK have tried to predict decisions of the ECtHR using natural language processing and machine learning. The predictions achieved 79 percent accuracy in determining whether or not there was a human rights violation. The method of analyzing text to make predictions seems effective, though the researchers did not intend to design a system that could fully take over the judge’s job (Aletras, Tsarapatsanis, Preoţiuc-Pietro & Lampos, 2016).
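The core idea of text-based outcome prediction can be illustrated with a toy sketch. This is emphatically not the researchers’ actual model (they used n-gram features and a support vector machine); it is a deliberately naive word-counting classifier over invented training snippets, shown only to convey the principle of learning outcome labels from case text.

```python
from collections import Counter

# Invented one-line 'case texts' with their outcomes, for illustration only.
training = [
    ("detained without trial for months", "violation"),
    ("prolonged detention no access to lawyer", "violation"),
    ("complaint dismissed procedure followed correctly", "no violation"),
    ("domestic remedies available and effective", "no violation"),
]

# Count how often each word appears under each outcome label.
word_counts = {"violation": Counter(), "no violation": Counter()}
for text, label in training:
    word_counts[label].update(text.split())

def predict(text: str) -> str:
    """Pick the label whose training vocabulary overlaps most with the text."""
    scores = {label: sum(c[w] for w in text.split())
              for label, c in word_counts.items()}
    return max(scores, key=scores.get)

label = predict("applicant held in detention without a lawyer")
```

Even this crude approach shows why such predictions work: the language of case descriptions correlates with outcomes. The real system’s 79 percent accuracy came from much richer features and a proper learning algorithm.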

Aside from technology in the judicial system, many law firms are exploring the possibilities that computers can offer as well. Law firms are developing software to engage more with their clients, and to make routine actions easier for lawyers. Using computer systems can improve efficiency and is often valued by clients (Rietbroek, 2017).

2. Consistent Decision Making

2.1 Defining legal certainty

An early description of legal certainty dates back to around 350 BC, in texts by Aristotle. He argued that being ruled by laws is preferable to having decisions made case by case, as “laws are made after long consideration, whereas decisions in the courts are given at short notice, which makes it hard for those who try the case to satisfy the claims of justice” (Stanford Encyclopedia of Philosophy, 2016, para. 3.1). Locke describes, in the context of his social contract theory, that whoever is in power “is bound to govern by established standing laws, promulgated and known to the people, and not by extemporary decrees” (Locke, 1689, para. 131). Radbruch describes three core values of law: justice, expediency and legal certainty. The most important value in the application of the law by a judge is legal certainty, as it is more important that there is a legal order at all than that there is justice and expediency within that legal order. However, all three values are important and there should be a balance between them (Radbruch, (1932) 1950, p. 108–111, 118–119).

The principle of legal certainty encompasses many different requirements, such as: the government should only act within the boundaries of the law, judges should be independent, and the law should be publicized. Furthermore, judges should apply the law, and should not judge on the basis of their personal opinions (Stanford Encyclopedia of Philosophy, 2016, para. 5.1–5.2; Van Ommeren, 2003, p. 7–22; Scheltema, 1989, para. 2.1). However, there can be important reasons not to follow a law. Protecting fundamental rights sometimes means bypassing national laws. This could be seen as a clear infringement of the principle of legal certainty, since the law is not followed. On the other hand, citizens will most likely expect the law to be interpreted in such a way that their basic rights are guaranteed. However the law is applied, it is always important that this happens in a somewhat consistent manner (Soeteman, 2010, para. 7.6). Dworkin refers to this stability in the law as ‘integrity’. Applying the law in a consistent way makes a community cohesive as it creates continuity throughout the community, and also through time (Dworkin, 1986, pp. 225–275; Marmor, 2012, chapter 9).

All in all, legal certainty is a value that is aimed at citizens of the state: it provides a protection against the power of the government and it induces trust in the government, which makes the system more stable. The objective side of this value is that the system of the law should always be followed; decisions should not be made at random. There is a subjective component as well: citizens should have the sense that their government is trustworthy (Stanford Encyclopedia of Philosophy, 2016, para. 5.1).

2.2 Achieving legal certainty

Even though the Netherlands does not have a system of legally binding precedent, the law will generally be interpreted in a uniform way because of the system of appeal and cassation. When the Supreme Court interprets a rule in a certain way, lower courts will generally follow (Soeteman, 2010, p. 46–47). This means that in general, the law will be applied in the same way by every judge, in every case. However, it is not customary in every area of the law to appeal a case, and even at higher levels of the judicial system, judges might apply the law slightly differently in similar cases.

In some cases, the law is entirely unambiguous, and judges will most likely reach the same conclusion every time. There is a high degree of certainty in these cases: the outcome is clear if you simply follow the law to the letter. For these types of cases it should not matter whether the decision is made by a human or by a computer, though the latter would be more efficient (if widely used). Conversely, there are cases where the law is vague and yet the outcome is very clear. Take for example the standard of evidence in criminal cases. According to article 338 of the Dutch Code of Criminal Procedure, a suspect can only be found guilty if the judge is convinced of their guilt on the basis of the available evidence. Apart from this, the threshold for finding a suspect guilty is quite low: in general, two pieces of evidence are sufficient. Although the standard of art. 338 is vague, in individual cases the outcome can be very clear. For example: when there are many witnesses, the suspect is caught red-handed holding a bloody knife and confesses to the crime, everyone will likely come to the same verdict, namely that the suspect committed the murder.

It becomes more complicated when the evidence is not as clear. Then similar cases can very well be treated differently, as both options (guilty or not guilty) are equally defensible. A possible compromise when opinions are divided on an issue would be to randomly decide whether someone is guilty or not, and in practice, the outcome in a situation like this will indeed be more or less random. Dworkin speaks of ‘checkerboard’ solutions: laws that treat similar cases differently because opinions are divided. In most circumstances people will instinctively deem this type of solution unfair (Dworkin, 1986, pp. 179–183). The same could be said for judgements. From the point of view of legal certainty, it would be preferable that every decision be the same, regardless of whether that decision is marginally better or worse than the other option.

Where a judge could come to two different conclusions in identical cases, a computer will always give the same verdict when the input is identical. And where every human judge makes decisions in a unique way, computer systems can easily be duplicated. This means that by using the latter, the same type of decision making can be employed throughout the country, and could remain stable through time.

2.3 Limitations of legal certainty

However, is legal certainty always desirable? Some philosophers have spoken out against following rules strictly. Plato, for example, compared this practice to “a stubborn, stupid person who refuses to allow the slightest deviation from or questioning of his own rules, even if the situation has in fact changed and it turns out to be better for someone to contravene these rules” (Stanford Encyclopedia of Philosophy, 2016, para. 7). Even Radbruch argued that, although in principle the positive law has to be followed, a law that is extremely unjust cannot be seen as valid (Bix, 2013, para. 5.2). In individual cases, applying the law to the letter might indeed result in unjust outcomes. Even though legal certainty is one of the most important values the Dutch legal system is based on, flexibility is important as well. As mentioned in the introduction, lawmakers sometimes deliberately leave room for interpretation in the laws, and fundamental rights protection can influence the application of a law in an individual case, even if the law is very specific. The latter could be seen as a form of legal certainty, namely that particular general principles are always protected, no matter what. On the other hand, it could be seen as uncertain, since the specific laws for that situation are not applied. Fundamental rights are in principle largely unchanging, so from a point of view of consistency, the former interpretation makes sense. In reality, however, even fundamental rights are subject to changes in society, and their application is not always without controversy. Furthermore, whereas these rights are usually vaguely formulated, lower legislation is often a lot more specific, and thus gives citizens more certainty about how the law will be applied in their specific situation.

Even though deviating from a specific law would not be beneficial for legal certainty, it could lead to a more just outcome. An example from criminal law that illustrates the importance of deviating from the law in individual cases is the Huizense Veearts-judgement. In this case, a veterinarian brought healthy cows into contact with cows that were infected with foot-and-mouth disease, which was a criminal offence according to the then applicable law. However, the veterinarian applied this (scientifically accepted) method intentionally in order to protect the animals from getting seriously ill in the future. This possibility was apparently not anticipated when the law was made. The Supreme Court eventually judged that it would be unjust to punish the veterinarian, as his actions were in line with the underlying goal of the law, even though the law explicitly prohibited these actions (Supreme Court, 20 Feb 1933).

Computers would most likely ‘blindly’ apply the law, without looking at the principles the law is based on. Usually, this would lead to the right outcome. However, in examples like this, it would lead to unacceptable outcomes. This is because it is difficult for lawmakers to predict what situations are going to occur in the future. Moreover, computer systems that base their outcome on statistical analysis might not be able to cope with outliers: cases that are atypical (Custers, 2016, p. 15–16). An important function of a judge is applying law in different individual cases, and to a certain extent even creating new law (Bovend’Eert, 2011, p. 32–40). Heavy reliance on computers could mean that outliers will unjustly be treated as the average case and legal development will come to a halt.

3. Unwanted Biases

3.1 Removing unwanted biases

Merely applying the law consistently does not mean that discrimination is not possible. A system where a form of discrimination is embedded in the law would still be certain, as long as it is not applied randomly (Stanford Encyclopedia of Philosophy, 2016, para. 5.3). In the Netherlands, discrimination is prohibited by law, according to art. 1 of the Constitution, especially when it concerns factors like gender, race and religion. Accordingly, these factors should not play a role in legal decision-making, unless there is a specific law that allows it.

Computer automation could certainly lead to more consistent judgements, even when they contain unwanted biases. However, from the point of view of legal certainty, differences in judgements should be based on judicially relevant differences between the cases. Differentiating between groups without a good reason is arbitrary, and thus uncertain (Stanford Encyclopedia of Philosophy, 2015, para. 4.1). One important reason to use computers instead of human judges is to eliminate unwanted biases. For example, according to multiple studies in the US, people of certain races get higher sentences in criminal cases and are more likely to get the death penalty (O’Neil, 2016, p. 29). In the Netherlands, the existence of a racial bias has been disputed: research has shown that differences in sentencing between racial groups can be largely attributed to judicially relevant circumstances (De Rechtspraak, 2016). Still, an imminent danger in any justice system is that judges let factors that should be irrelevant play a role in their decision-making process, albeit unconsciously. Where humans are susceptible to all kinds of biases, computers are in principle neutral: they make predictable decisions on the basis of pre-programmed algorithms. Accordingly, if a variable such as race should not play a role in the decision-making process, it can simply not be fed to the system, and the system will be blind to this factor. This assumes that there are no variables that indicate the person’s race in an indirect manner, such as a differentiation between certain neighbourhoods or family backgrounds; Harcourt notes that in the US, although using race as a factor in risk analysis fell out of favour from around the 1970s, the system can still be discriminatory (Harcourt, 2010, p. 4). When using big data, there will be predetermined factors that the system uses to make calculations. As long as it is clear what factors the system is using, this would be an effective way of safeguarding the justice system against unwanted biases.
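The idea of keeping a system blind to a protected attribute can be sketched in a few lines. The feature names, the protected-attribute list and the additive risk function below are all hypothetical, invented purely to illustrate the mechanism; no real system works exactly like this.

```python
# Hypothetical list of legally protected attributes (cf. art. 1 of the Constitution).
PROTECTED = {"race", "gender", "religion"}

def strip_protected(case: dict) -> dict:
    """Remove protected attributes so the model never sees them."""
    return {k: v for k, v in case.items() if k not in PROTECTED}

def toy_risk_score(features: dict) -> int:
    """A made-up additive score over whatever features remain (illustration only)."""
    weights = {"prior_convictions": 2, "age_under_25": 1, "violent_offence": 3}
    return sum(weights.get(k, 0) * int(v) for k, v in features.items())

case = {"prior_convictions": 2, "age_under_25": True, "violent_offence": False,
        "race": "X", "gender": "F"}
# The score is computed only over the filtered features.
score = toy_risk_score(strip_protected(case))
```

The filtering step guarantees blindness only to the listed attributes themselves; as the Harcourt example in the text shows, indirect indicators can slip through unnoticed.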

3.2 Hidden biases

Even though computers might be a solution to combat unwanted biases, it depends heavily on how the system is designed. The computer system will be designed by humans and uses existing data, which could mean that it just copies existing biases while the system seems to be neutral.

Even if the algorithms themselves are not biased, the data fed into them could lead to unwanted outcomes: if judges in the status quo discriminate, the data that the system uses will be biased. O’Neil describes how, in the US, risk models are used throughout the country to (among other things) improve consistency and counter the biases of individual judges. The models are used to determine the expected recidivism rate of a convict, using information relating to the crime that was committed as well as characteristics of the perpetrator. Using characteristics of a person can be a discriminatory practice: even though the system seems unbiased because it makes consistent decisions based on earlier information, it is actually discriminatory (O’Neil, 2016, p. 29–30).

Calders & Žliobaitė point out that merely removing a specific discriminatory variable does not necessarily solve the problem, as other variables might correlate with it, producing the same result. There are, however, ways to eliminate discrimination by correcting the data. Problems may also arise from using incorrect or incomplete data: if a selective part of the data is missing, this will lead to undesirable outcomes (Calders & Žliobaitė, 2013, para. 3.4.1–3.4.2).
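The correlation problem can be made tangible with a small sketch: even after ‘race’ is removed, a remaining variable can largely reveal it. The records, the ‘neighbourhood’ proxy and the strength measure below are invented for illustration; real audits would use proper statistical dependence measures.

```python
from collections import Counter

# Invented records: 'neighbourhood' correlates strongly with the removed attribute.
records = [
    {"neighbourhood": "A", "race": "X"},
    {"neighbourhood": "A", "race": "X"},
    {"neighbourhood": "B", "race": "Y"},
    {"neighbourhood": "B", "race": "Y"},
    {"neighbourhood": "B", "race": "X"},
]

def proxy_strength(records, proxy, sensitive):
    """Share of records where the majority sensitive value per proxy value holds.
    A value of 1.0 means the proxy fully reveals the sensitive attribute."""
    by_value = {}
    for r in records:
        by_value.setdefault(r[proxy], Counter())[r[sensitive]] += 1
    correct = sum(c.most_common(1)[0][1] for c in by_value.values())
    return correct / len(records)

strength = proxy_strength(records, "neighbourhood", "race")
```

A high strength signals that dropping the sensitive column alone is not enough, and that the data itself needs correcting, as Calders & Žliobaitė argue.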

3.3 Detecting biases

Even though computerized decisions will not automatically be less biased, big data can be used to make biases visible that would otherwise remain hidden (Pedreschi, Ruggieri & Turini, 2013, p. 100–101). Still, it is extremely difficult to detect biases by examining existing case law: since no two cases are exactly the same, much can always be attributed to the specific facts of the case. Even if a computer program is just as biased as a regular judge, the computer seems easier to test than a human being. A possible option is to have the system judge a fictitious case, designed specifically to check whether it is unjustifiably biased against a certain group of people. For example, two identical criminal cases could be judged, except that in one case the suspect is a woman and in the other a man. If the system is biased towards a certain gender, one of the suspects will be more likely to be found guilty, or will get a higher sentence than the other. It would be much harder to find this out about human judges. First of all, humans will most likely know that they are being tested, and will (consciously or unconsciously) adjust their behaviour accordingly. Secondly, as mentioned in the previous paragraph, every judge could make a slightly different decision, so a large number of participants is needed to draw conclusions about the whole population. Unlike a judge, a computer system can be directly adjusted if the outcomes are not desirable: the model that is used can simply be changed.
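The paired-case audit described above can be sketched directly. The decision function below is a stand-in for whatever system is being audited, with invented thresholds and features; it is constructed to ignore gender, so the audit passes, but the same harness would expose a system that does not.

```python
def decision_system(case: dict) -> str:
    """Stand-in for the system under test (hypothetical evidence-counting rule)."""
    evidence = case["witnesses"] + (2 if case["confession"] else 0)
    return "guilty" if evidence >= 3 else "not guilty"

# Two identical fictitious cases that differ only in the suspect's gender.
base = {"witnesses": 2, "confession": True}
verdict_f = decision_system({**base, "gender": "F"})
verdict_m = decision_system({**base, "gender": "M"})

# A gender bias would show up as differing verdicts for otherwise identical cases.
unbiased = (verdict_f == verdict_m)
```

Because the system is deterministic, a single paired run suffices, whereas testing human judges would require a large sample, exactly the asymmetry noted above.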

4. Transparency

4.1 Pre-programmed transparency

An important factor in legal certainty is the knowledge that citizens have of the law (and thus the most likely outcome of a court case). Although judicial decisions do contain an explanation as to why a certain outcome was chosen, the explanations can be somewhat vague. This is apparent when different interests are weighed. Although there will be an overview of the different interests, it is in most cases unclear what role they play exactly in the final decision.

Legal Realists theorized that in difficult cases judges will first determine the outcome, and subsequently a reasoning will be constructed that fits. Realist Jerome Frank, among others, theorized that the outcome of a case depends largely on the judge’s personality and its interaction with the characteristics of the litigants and facts of the case (Marmor, 2012, chapter 9). If this is indeed the case, the reasoning of a judge is to some extent irrelevant. If a computer were to be used for decision making, it would be very clear that a ‘reasoning’ would precede the final decision. The outcome would always be based on certain objective circumstances of the case.

Depending on what type of computer program is used, this could be a very transparent way of making decisions. In theory, all the input and how the system processes it could be made public. Thus, the decision making process would be a lot more transparent than in the status quo.

4.2 Secrecy surrounding algorithms

Even though using computer systems is in theory a transparent way of making decisions, in practice it might not be transparent at all. A computer system might be just as opaque as a human judge, the difference being that currently an individual judge is responsible for the decision, whereas with an advanced computer (support) system it might be unclear who is responsible when a mistake is made. Furthermore, people might not trust the system if they do not know how it comes to a decision. A solution would be to inform people about how the algorithm works. The Council of State held in a 2017 judgement that (in administrative law) the government has the obligation to publicise the method of decision making and the data and assumptions it is based on, in order to protect the legal position of citizens (Council of State, 17 May 2017, para. 14.4).

The algorithms that are used are generally created by third parties. The exact workings of the algorithms are often a corporate secret, and are not available to the people being judged by them. Keeping them secret can have the advantage that people will not be able to ‘game the system’ (O’Neil, 2016, p. 17, 29, 32). This matters because systems sometimes rely on data that the subjects themselves generate, like questionnaires (see for example: [link]). Another issue surrounding transparency is that computer systems can be very complicated, and might only be understandable to the people directly involved in creating them (O’Neil, 2016, p. 14).

An example that shows that a lack of transparency can be problematic involves e-Court, a company which offers a private alternative to judicial procedure for settling disputes between consumers and certain large companies. All procedures are in principle conducted online, and the service appears to use algorithms to make decisions (Aanhangsel handelingen II 2017/18, 1803, answer 9, on: https://zoek.officielebekendmakingen.nl/ah-tk-20172018-1803.html). The service has been under fire for its lack of transparency: it was unclear who the ‘judges’ were, what previous decisions had been made and how the decisions were being made (in response to the criticism, e-Court recently started publishing decisions and the names of its ‘judges’). Criticism also concerned the lack of information provided to consumers about their rights and the procedure. Because of this, there are concerns about the legal position of consumers in this type of dispute settlement (Kuijpers, 2018).

4.3 The importance of motivation

An important part, if not the most important part, of a judgement is the motivation that builds up to the final decision. Motivating a decision is required by art. 121 of the Constitution. This can range from merely citing the applicable law the verdict is based on, to an extensive reasoning that further explains how a rule should be applied. This will depend on the specific case. If a rule is clear and is supposed to be implemented without any exception, this would not need much motivation. A computer could be used to make that decision without any human interference, as it merely has to reference the applicable law. If a law is more vague, it will most likely need a more extensive motivation, which would be difficult for a computer program. Even if a human judge still has the final say, motivation could be an issue. The computer might judge on a specific aspect of the case, after which the judge weighs this with all the other factors. If the judge (partly) follows the decision of the computer, it would be difficult to explain why exactly, except that the computer said that this was the right response based on the algorithm used. Decisions, although they may be very certain, will appear to be random if people do not know how the decision is being made in their individual situation.

5. Conclusion

The main question of this thesis is: “From the perspective of legal certainty, to what extent should computers make legal decisions in the Dutch legal system?”.

Legal certainty is an important value of the Dutch legal system. There is always a degree of uncertainty, which can be a good thing in individual cases. However, it is generally preferable that judges do not diverge from the law when making their decisions. A first aspect of legal certainty is consistency. Even though systems of appeal and cassation do make for more unity in the application of the law, there can still be differences in lower courts as not all cases go into appeal, and the Supreme Court might judge two similar cases differently. Using computers to assist decision making would certainly make the application of the law more consistent. On the other hand, it also might slow down legal development and might lead to unjust outcomes in specific cases.

The existence of biases is a more practical issue that can harm legal certainty. Depending on the type of computer system that is used, it could counter discrimination, since biases of individual judges will be less of an issue. Though it is necessary to be careful when designing the system: if the data is biased, it can lead to unwanted outcomes. Analyzing existing data could be helpful in detecting biases, and thus in countering them.

Another important aspect of legal certainty is transparency. Citizens have to know the law and how it is applied. When a human judge makes a decision, there is more than one way to come to that decision. Although the judgement will contain a motivation, it is quite possible that other factors outside of the motivation also played a role. In this regard, the human decision making process is very opaque. A well-designed computer system could provide a high degree of transparency, since the decision making process is laid down in algorithms that can in theory be made available to the public. A difficulty for computers is motivating the judgements. Even if human judges stay involved, it will most likely be difficult for them to explain (or even understand) how the algorithm came to a decision, as these algorithms can be very complicated.

There are definitely some promising possibilities for using computers to make legal decisions from the perspective of legal certainty. However, some caution is needed, as the data can be biased, and the importance of the motivation of a judgement should not be underestimated. Furthermore, some areas of the law are more difficult for a computer to grasp than others. For basic laws that do not need much explanation, using a computer could be a solution. Open norms that have yet to be interpreted, and laws that need to adapt quickly over time, will require a more advanced computer program. Currently, human involvement is essential in these complicated areas of the law, though computers can be helpful assistants in any type of case, even if they are merely used to find the applicable case law.

6. Discussion

It is important to note that — even though from a point of view of legal certainty computer involvement could definitely be beneficial — there are many other aspects to the issue. For example, there can be financial reasons to use computers in the decision making process. However, it is essential that the most important values of the legal system are not compromised just for the sake of efficiency.

In order to implement technology in the judicial system, there needs to be a strong collaboration between the legal profession and developers of the algorithms. When the judicial system starts using computer programs as an aid for sentencing, it is essential that judges know what the outcome means and thus how they can use it in their judgement. Also, it is important that the systems are designed in a way that meets the requirements of the legal system, for example it should not be unjustifiably discriminatory (even if this might be efficient).

Furthermore, it would be interesting to research to what extent the lawmaking process can adapt to technological advancements. If laws could be made in a way that is easier to implement in a computer system, it would also be less complicated to make decisions on the basis of those laws with the help of computers.

Even though it is highly unlikely that computers will completely take over the judicial system anytime soon, it is worth exploring the opportunities of advanced technologies in law, even if they are only used as an aid. In order to implement this type of decision support effectively, it is necessary not only to explore how judges can best use the technology in their cases, but also to prepare future legal professionals for these technical advancements.

Sources

Aletras, Tsarapatsanis, Preoţiuc-Pietro & Lampos 2016 — N. Aletras, D. Tsarapatsanis, D. Preoţiuc-Pietro & V. Lampos, ‘Predicting judicial decisions of the European Court of Human Rights: a Natural Language Processing perspective’, PeerJ Computer Science 2:e93, 2016.

Angwin, Larson, Mattu & Kirchner 2016 — J. Angwin, J. Larson, S. Mattu & L. Kirchner, ‘Machine Bias: There’s software used across the country to predict future criminals. And it’s biased against blacks.’, ProPublica May 2016, https://www.propublica.org/article/machine-bias-risk-assessments-in-criminal-sentencing

Bix 2013 — B.H. Bix, ‘Radbruch’s Formula, Conceptual Analysis, and the Rule of Law’ in: Flores I., Himma K. (eds) Law, Liberty, and the Rule of Law, Ius Gentium: Comparative Perspectives on Law and Justice, vol. 18, Dordrecht: Springer 2013.

Bovend’Eert 2011 — P. Bovend’Eert, ‘Dient de rechtsvormende taak van de Hoge Raad versterkt te worden bij de herziening van het cassatiestelsel?’, in: A. Nieuwenhuis e.a. (red.), Rechterlijk activisme. Opstellen aangeboden aan prof. mr. J.A. Peters, Nijmegen: Ars Aequi Libri 2011, p. 31–41.

Calders & Custers 2013 — T. Calders & B. Custers, ‘What Is Data Mining and How Does It Work?’ in: B. Custers, T. Calders, B. Schermer, T. Zarsky (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases, chapter 2, Berlin: Springer 2013.

Calders & Žliobaitė 2013 — T. Calders & I. Žliobaitė, ‘Why Unbiased Computational Processes Can Lead to Discriminative Decision Procedures’ in: B. Custers, T. Calders, B. Schermer, T. Zarsky (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases, chapter 2, Berlin: Springer 2013.

Custers 2016 — B.H.M. Custers, ‘Big Data in wetenschappelijk onderzoek’, JV 2016/01, p. 8–21.

Dworkin 1986 — R. Dworkin, Law’s Empire, Cambridge, Mass.: Harvard University Press 1986.

Equivant 2017 — Equivant, ‘Practitioner’s Guide to COMPAS Core’, December 2017.

Harcourt 2010 — B. E. Harcourt, ‘Risk as a Proxy for Race’, Law & Economics Working Papers no. 535, 2010.

Heemskerk & Teuben 2017 — W. Heemskerk & K. Teuben, Kort begrip van KEI, Dordrecht: Convoy 2017.

Kuijpers 2018 — K. Kuijpers, ‘E-Court zwicht voor kritiek’, Investico 2018, https://www.platform-investico.nl/artikel/e-court-zwicht-voor-kritiek/

Locke 1689 — J. Locke, 1689, Two Treatises of Government, P. Laslett (ed.), Cambridge: Cambridge University Press, 1988.

Marmor 2012 — A. Marmor, The Routledge Companion to Philosophy of Law, chapter 9, Abingdon: Routledge Handbooks Online 2012.

Noortwijk & De Mulder 2018 — K. Noortwijk & R. De Mulder, ‘Computerrecht, Het rijpingsproces van juridische technologie’, Computerrecht 2018/50.

Van Ommeren 2003 — F.J. van Ommeren, ‘De rechtsstaat als toetsingskader’, in: F.J. van Ommeren & S.E. Zijlstra (red.), De rechtsstaat als toetsingskader, Den Haag: Boom Juridische uitgevers 2003, p. 7–22.

Openbaar Ministerie, accessed 11 June 2018 — Openbaar Ministerie, ‘BOS-Schade’, https://www.om.nl/onderwerpen/slachtoffers/bos-schade/bos-schade-0/

O’Neil 2016 — C. O’Neil, Weapons of Math Destruction: How Big Data Increases Inequality and Threatens Democracy, New York: Crown Publishing Group 2016.

Pedreschi, Ruggieri & Turini 2013 — D. Pedreschi, S. Ruggieri & F. Turini, ‘The Discovery of Discrimination’ in: B. Custers, T. Calders, B. Schermer, T. Zarsky (eds), Discrimination and Privacy in the Information Society: Data Mining and Profiling in Large Databases, chapter 2, Berlin: Springer 2013.

Radbruch (1932) 1950 — G. Radbruch, ‘Legal Philosophy’, in: The Legal Philosophies of Lask, Radbruch, and Dabin, p. 43–224, Cambridge: Harvard University Press 1950.

De Rechtspraak 2016 — De Rechtspraak, ‘Anders, maar even zwaar gestraft’, 2016, https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Raad-voor-de-rechtspraak/Nieuws/Paginas/Anders-maar-even-zwaar-gestraft.aspx

De Rechtspraak 2017a — De Rechtspraak, ‘Verder bouwen aan het huis van de rechtsstaat: scenarioplanning rechtspraak 2030’, Raad voor de rechtspraak 2017.

De Rechtspraak 2017b — De Rechtspraak, ‘Dag van de Rechtspraak: computer vs rechter’, 2017, https://www.rechtspraak.nl/Organisatie-en-contact/Organisatie/Raad-voor-de-rechtspraak/Nieuws/Paginas/Dag-van-de-Rechtspraak-computer-vs-rechter.aspx

Rietbroek 2017 — J. Rietbroek, ‘De grote advocatuur en nieuwe tech innovaties’, advocatiemagazine 2017, http://advocatiemagazine.sdu.nl/september-2017#!/advocatuur-en-legal-tech

Scheltema 1989 — M. Scheltema, ‘De rechtsstaat’, in: J.W.M. Engels e.a. (red.), De rechtsstaat herdacht, Zwolle: W.E.J. Tjeenk Willink 1989.

Soeteman 2010 — A. Soeteman, Over recht gesproken: een inleiding tot het recht, chapter 4 and 10, Nijmegen: Ars Aequi Libri 2010.

Stanford Encyclopedia of Philosophy 2015 — Stanford Encyclopedia of Philosophy, ‘Discrimination’, Aug 2015, https://plato.stanford.edu/entries/discrimination

Stanford Encyclopedia of Philosophy 2016 — Stanford Encyclopedia of Philosophy, ‘The Rule of Law’, Jun 2016, https://plato.stanford.edu/entries/rule-of-law

Vermeulen & Fenwick 2017 — E.P.M. Vermeulen & M. Fenwick, ‘De technologische revolutie en de toekomst van het recht’, TOP 2017(5) 378.

Case Law:

Council of State 17 May 2017 — Council of State, 17 May 2017, ECLI:NL:RVS:2017:1259.

Supreme Court 20 Feb 1933 — Supreme Court, 20 Feb 1933, NJ 1933/918 (Veearts).

All online sources were accessed on 11 June 2018.

The top banner image is by FlitsArt. All other images are from pixabay.com.
