
Robust Decision Making in the Era of Machine Learning

How we ensure ML avoids the pitfalls of human decision-making


(Coauthored with Joseph Morris of LLNL.)

One of the most significant technological advances of the last decade is the maturation and wide adoption of machine learning (ML) [1]. Many tasks that were difficult to accomplish with traditional algorithm-centric programming suddenly came within reach of ML. At the core of most such tasks is decision-making based on information fed to the decision-maker. For instance, determining which chess or go piece to move is decision-making; steering or braking a car is decision-making; diagnosing cancer from CT images is decision-making; evaluating how much wastewater or CO2 we can inject into the subsurface without causing earthquakes is also decision-making. The decision-maker used to be either a human being or a group of human beings; now it can be an artificial intelligence (AI) using different combinations of ML and traditional algorithmic programming.

(Photo licensed from iStock)

How to maximize the benefits of machine learning to our society is not just a computer science question. AIs can get progressively smarter, but history has taught us that smart entities can make stupid, unethical, or even disastrous decisions. As soon as people got a sneak peek into the potential of ML-based or ML-augmented decision-making, the related ethical and legal implications became important topics of discussion [2,3]. A curious observation is that most stakeholders do not seem to recognize that the study of decision-making itself has long been a fruitful research area in psychology and cognitive science. Placing ML in the framework of cognitive science helps us understand its power and limitations, and thereby empowers us to use ML in smarter and safer ways.

Understanding two cognitive processes, named System 1 and System 2 in Kahneman’s book Thinking, Fast and Slow [4], reveals much about AI’s role in decision-making. System 1 is the brain’s automatic and largely unconscious thinking mode. With adequate training, System 1 "executes skilled responses and generates skilled intuitions" [4]. System 1 is responsible for most of our day-to-day decisions. It requires little energy or attention but is often bias-prone. System 2 is slow, effortful, and dominated by analytical reasoning. It requires effort and attention to think through all the choices. Most of the time, System 2 endorses coherent, or at least seemingly coherent, stories generated by System 1. When System 1 struggles, System 2 is invoked to perform more effortful reasoning.

A simple, illuminating fact is that most ML methods emulate the operation of the human System 1. Training a convolutional neural network is a close analogue of training the human System 1, as vividly evidenced by the intuitive term "training". ML establishes "empirical" associations through training. When given application scenarios resembling the training scenarios, ML yields results in a fast, seemingly effortless way. If the application scenarios are not covered by the training material, or the training is inadequate, current ML approaches struggle, as the sketch below illustrates. This analogy between ML and the human System 1 offers many useful insights drawn from the science of human decision-making.
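
To make this analogy concrete, here is a minimal sketch in Python (scikit-learn and NumPy are our illustrative choices; the article does not prescribe any particular library). A classifier trained on handwritten digits answers in-distribution inputs quickly and confidently, yet still produces an answer on pure noise, with no built-in awareness that the question lies outside its training:

```python
# System 1 analogy: fast, skilled answers on familiar inputs; an answer is
# still produced on unfamiliar inputs, often without any signal of trouble.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)
model = LogisticRegression(max_iter=5000).fit(X_train, y_train)

# In-distribution input: the "trained intuition" is usually fast and correct.
print("in-distribution confidence:", model.predict_proba(X_test[:1]).max())

# Out-of-distribution input (random noise): the model still "intuits" a digit.
rng = np.random.default_rng(0)
noise = rng.uniform(0, 16, size=(1, 64))  # digit pixels range from 0 to 16
print("confidence on noise:", model.predict_proba(noise).max())
```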

A strong System 2 needs to work in tandem with ML. In human decision-making, when the lazy System 2 fails to intervene because it is fooled by an apparently coherent picture created by System 1, we tend to make mistakes. It is therefore foolish to expect that ML can be left alone to make all the decisions. If ML is to be used in decision-making, the ability to detect difficult or dangerous situations and then trigger intervention by a System 2 should be mandatory; a sketch of such a trigger follows below. It is useful to mention an example of consciously including a System 2 component in ML-based decision-making: current autopilot systems in cars require the human driver to keep their hands on the steering wheel and be prepared to intervene. Interestingly, the System 2 in this example is likely a combination of the driver’s System 1 and System 2, but as a whole it fills the role of System 2 in this decision-making (i.e., driving) process.
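
Here is a hedged sketch of such a mandatory trigger, continuing the digits example above: the fast model (System 1) handles routine cases, while a simple confidence gate escalates difficult cases to a slower reviewer. The threshold value and the `escalate_to_system2` hook are our illustrative assumptions, not a standard API:

```python
# Confidence-gated decision-making: System 1 (ML) decides routine cases;
# hard cases trigger a System 2 intervention.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

CONFIDENCE_THRESHOLD = 0.90  # assumed policy value; must be tuned per task

def escalate_to_system2(x):
    # Placeholder for the System 2 path: a human review queue, a slower
    # analytical model, or an expert panel.
    return {"decision": "deferred for review"}

def decide(model, x):
    probs = model.predict_proba(x.reshape(1, -1))[0]
    if probs.max() >= CONFIDENCE_THRESHOLD:
        return {"decision": int(np.argmax(probs)), "source": "System 1 (ML)"}
    return escalate_to_system2(x)  # the mandatory System 2 trigger

X, y = load_digits(return_X_y=True)
model = LogisticRegression(max_iter=5000).fit(X, y)
print(decide(model, X[0]))              # routine case: ML decides alone
print(decide(model, np.full(64, 8.0)))  # odd uniform-gray input: likely deferred
```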

Diversity leads to robust decisions. In human society, important decisions are usually made through rigorous debate. When radiologists read CT images, they first rely on their System 1. The judgment is usually fast and requires little mental effort, but System 1 acquired this ability only through many years of training. When a radiologist’s System 1 detects something odd, System 2 is invoked to investigate. Very often, a group of doctors with different specialties is consulted. Here, diversity is the key to reaching a robust diagnosis. When ML replaces radiologists, we can have multiple AIs, based on different mathematical principles and trained on different data, do the job together, as sketched below. Fortunately, employing multiple AIs is less costly than having a team of medical doctors check each other’s work. In this way, diversity is introduced into what used to be handled by a single human System 1.
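
The same idea can be sketched in code: several classifiers built on different mathematical principles vote, and disagreement itself becomes a useful signal that the case is hard and deserves System 2 review. The specific models here are illustrative choices, not a recommendation:

```python
# Diversity for robustness: a "panel" of models based on different
# mathematical principles; unanimity yields a decision, disagreement escalates.
from collections import Counter

from sklearn.datasets import load_digits
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.svm import SVC

X_train, X_test, y_train, y_test = train_test_split(
    *load_digits(return_X_y=True), random_state=0)

panel = [
    LogisticRegression(max_iter=5000).fit(X_train, y_train),       # linear model
    RandomForestClassifier(random_state=0).fit(X_train, y_train),  # decision trees
    SVC(random_state=0).fit(X_train, y_train),                     # kernel method
]

def diverse_diagnosis(x):
    votes = [int(m.predict(x.reshape(1, -1))[0]) for m in panel]
    label, count = Counter(votes).most_common(1)[0]
    if count == len(panel):
        return {"decision": label, "status": "consensus"}
    return {"decision": None, "status": "experts disagree: escalate to System 2"}

print(diverse_diagnosis(X_test[0]))
```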

Visualization is more important than ever. Historically, visualization has been a major component of algorithm-driven decision-making workflows. The human System 1 relies heavily on visual inputs; hence Kahneman coined the expression "what you see is all there is" [4]. Since ML can directly "see" the bits, is visualization even still necessary? Following the above discussion, we see that humans cannot be completely removed from the decision process. Human reviewers, inspectors, and expert panels will remain indispensable parts of the "System 2" in decision-making. Because visualization is the most direct way to connect with humans, its quality directly determines how effective System 2 is at preventing mistakes and biases. Effective and compelling visualization thus becomes more important than ever, allowing humans to provide timely input to the decision-making process.
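
As a small illustration of what "connecting with humans" can mean in practice, the sketch below (matplotlib assumed available) shows a reviewer the raw input alongside the model’s full probability distribution, rather than just its top answer, giving System 2 something concrete to interrogate:

```python
# Surfacing a model's view to a human reviewer: input image plus the full
# class-probability distribution, not just the argmax.
import matplotlib.pyplot as plt
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression

digits = load_digits()
model = LogisticRegression(max_iter=5000).fit(digits.data, digits.target)

i = 0  # an illustrative case queued for human review
probs = model.predict_proba(digits.data[i : i + 1])[0]

fig, (ax_img, ax_bar) = plt.subplots(1, 2, figsize=(8, 3))
ax_img.imshow(digits.images[i], cmap="gray")
ax_img.set_title(f"input (true label: {digits.target[i]})")
ax_bar.bar(range(10), probs)
ax_bar.set_xlabel("digit class")
ax_bar.set_title("model's class probabilities")
plt.tight_layout()
plt.show()
```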

A profound finding from the study of decision-making in psychology is that most human decision-making is not rational, even though we strongly believe ourselves to be rational beings. It is important to recognize that using ML to partly replace the human System 1 will not make decision-making more rational. Although on the surface people tend to believe machines are more rational than humans, ML-based AIs have all the faults of the human System 1, and this will not change regardless of how advanced the AIs become: fundamentally, they work the same way. Acknowledging these limitations is the first step to avoiding pitfalls. A better future lies in a robust understanding of how decisions are made, how errors arise, what ML’s limitations are, and how to efficiently include humans in the decision process.

(This article is supported by the US DOE’s SMART initiative.)

References:

  1. M.I. Jordan and T.M. Mitchell, "Machine learning: Trends, perspectives, and prospects," Science 349(6245), 255–260 (2015). DOI: 10.1126/science.aaa8415.
  2. C. Coglianese and D. Lehr, "Regulating by robot: Administrative decision making in the machine-learning era," Faculty Scholarship at Penn Law, 1734 (2017). https://scholarship.law.upenn.edu/faculty_scholarship/1734.
  3. E. Vayena, A. Blasimme, and I.G. Cohen, "Machine learning in medicine: Addressing ethical challenges," PLoS Med. 15(11), e1002689 (2018). DOI: 10.1371/journal.pmed.1002689.
  4. D. Kahneman, Thinking, Fast and Slow (Farrar, Straus and Giroux, 1st edition, 2011).
