3 ways risk management methods can be misleading and how to fix them

Common mistakes that can make your risk management effort completely useless, if not counterproductive.

Walid
Towards Data Science


Photo by Espen Bierud

To maximize the chances of success of any endeavor, it is critical to properly identify and prioritize risks, so that you can focus your energy and attention on treating the most important ones.

In this article, I will share with you three common mistakes that can make your risk management effort completely useless, if not counterproductive.

First of all, let’s recall some definitions. The PMBoK® (Project Management Body of Knowledge) guide describes a risk as “an uncertain event or condition that, if it occurs, has a positive or negative effect on a project’s objective.”

So, to properly manage the risks you have identified in your project, you need to evaluate their probability and their impact.

Most risk management methods rely on simple scales to assess the risk’s probability or impact.

For example, the PMBoK guide recommends using definition tables like the one below to classify probabilities and impacts on a well-defined scale ranging from “Very High” to “Very Low.”

PMBOK Guide 6th Edition 2017 — Project Management Institute, Table 11–1, Page 407

Error #1: Re-using existing scale definitions

In many situations, the scale definitions are standardized and provided by your organization. You may also be tempted to re-use scale definitions from a previous project.

To illustrate how dangerous such practices could be, let’s take an example.

Suppose you were asked to prioritize two risks: one that may add a 17% delay to your project, and a second that could double your total budget. Which one would you consider more important?

The answer can depend on the nature of your project; however, most people would agree that losing 100% of the budget matters more than a 17% delay.

Now, let’s take the previous scale definition table provided in the PMBOK and apply it to a 500K$ project that is supposed to last two years.

Suppose you have to prioritize two risks with the same probability, one that could delay your project by 17% (4 months) and a second risk that could make you lose 100% of your budget (500K$).

According to the definition table, the first risk (17% delay) is rated “High priority” while the second one (100% over budget) is rated “Medium.” This is just the opposite of the intuitive conclusion we reached before!
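The inversion above can be sketched in a few lines of Python. The band boundaries below are illustrative assumptions in the spirit of the article’s example (schedule impact rated on a relative scale, cost impact on an absolute one), not the exact PMBOK Table 11–1 values:

```python
# Sketch: how two separately defined scales can invert priorities.
# Band boundaries are illustrative assumptions, not the official table.

def schedule_rating(delay_pct):
    """Schedule impact rated on a relative (%) scale."""
    if delay_pct >= 20: return "Very High"
    if delay_pct >= 10: return "High"
    if delay_pct >= 5:  return "Medium"
    if delay_pct >= 1:  return "Low"
    return "Very Low"

def cost_rating(loss_usd):
    """Cost impact rated on an absolute ($) scale."""
    if loss_usd >= 5_000_000: return "Very High"
    if loss_usd >= 1_000_000: return "High"
    if loss_usd >= 500_000:   return "Medium"
    if loss_usd >= 100_000:   return "Low"
    return "Very Low"

budget = 500_000  # 500K$ project lasting two years

risk_1 = schedule_rating(17)          # 17% delay (~4 months)
risk_2 = cost_rating(1.00 * budget)   # losing 100% of the budget

print(risk_1)  # High
print(risk_2)  # Medium
```

Because the two scales were never aligned against each other, a total budget loss ends up ranked below a four-month slip.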

As a takeaway: before using any scale definitions, even those provided by your organization, always make sure the different scales are aligned, coherent, and well suited to your project’s context.

Error #2: Scale compression bias

Once you have evaluated the probability and impact levels using the well-defined scales, conventional risk management methods recommend associating a score with each scale level. The probability and impact scores are then multiplied to calculate the risk’s criticality.

The higher this criticality, the more important the risk is considered, and the more it deserves action to reduce its probability or impact to an acceptable level.

In the example provided in the PMBoK guide, the proposed scores range from 0.9 for “very high” probabilities to 0.1 for the “very low” ones, and impact scores range from 0.8 to 0.05.

With such scales, two random events that are significantly different can be considered equivalent. For example, two events with probabilities of 0.71 and 0.99 will both receive the same score of 0.9.
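A minimal sketch of this compression, using the score ladder and probability bands from the PMBoK example cited above:

```python
# Sketch: mapping continuous probabilities onto the discrete
# PMBoK-style score ladder (0.1, 0.3, 0.5, 0.7, 0.9).

def probability_score(p):
    """Map a probability in [0, 1] to its discrete score."""
    if p > 0.70: return 0.9   # Very High (71% - 99%)
    if p > 0.50: return 0.7   # High      (51% - 70%)
    if p > 0.30: return 0.5   # Medium    (31% - 50%)
    if p > 0.10: return 0.3   # Low       (11% - 30%)
    return 0.1                # Very Low  (1% - 10%)

# Two markedly different events collapse onto the same score:
print(probability_score(0.71))  # 0.9
print(probability_score(0.99))  # 0.9
```

Everything inside a band is flattened to a single value, so a 28-point difference in probability simply disappears from the criticality calculation.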

This scale compression phenomenon can even reverse the order of priorities of your risks.
Consider, for example, two risks:

  • Risk 1: High probability (51% — 70%) and Medium impact (501k$ — 1M$)
  • Risk 2: Low probability (11% — 30%) and High impact (1M$ — 5M$)

Based on the probability-impact matrix above, the first risk has a higher criticality (0.14) than the second one (0.12). We expect it then to have a higher priority.

However, if we calculate the maximum expected impact at risk in both cases, we find the opposite order of priorities!

Indeed, the worst case for the first risk is 700K$ (a 70% chance of losing 1M$), while for the second it is 1,500K$ (a 30% chance of losing 5M$).
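The reversal can be checked directly. The scores below follow the PMBoK example ladder (probability scores 0.7 for High and 0.3 for Low; impact scores 0.2 for Medium and 0.4 for High), and the worst case is computed at the upper bound of each band:

```python
# Sketch: criticality (score product) vs worst-case expected loss
# for the two risks described in the text.

def criticality(p_score, i_score):
    """Conventional criticality: product of the discrete scores."""
    return round(p_score * i_score, 2)

def worst_case(p_max, i_max):
    """Expected loss at the upper bounds of the probability/impact bands."""
    return p_max * i_max

# Risk 1: High probability (51%-70%), Medium impact (501K$-1M$)
# Risk 2: Low probability (11%-30%),  High impact (1M$-5M$)
print(criticality(0.7, 0.2), worst_case(0.70, 1_000_000))  # 0.14  700000.0
print(criticality(0.3, 0.4), worst_case(0.30, 5_000_000))  # 0.12 1500000.0
```

Criticality ranks Risk 1 first (0.14 > 0.12), while the worst-case expected loss says Risk 2 is more than twice as severe.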

If we plot all the possible values of budget impact against criticality, we can see a few other points where this function is decreasing, which means it reverses the natural order of priorities for the identified risks.

Budget at risk as a function of the risk’s criticality

We have just demonstrated that, due to a scale’s compression effect, risk management methods can lead to reversed risk prioritizations.

Error #3: Scale interpretation

The last type of error we make when dealing with risks is related to the way people (mis)interpret the probability scales.

David Budescu, Professor of Psychometrics and Quantitative Psychology at Fordham University, has conducted several studies where he questioned thousands of people from different nationalities about their understanding of sentences extracted from the Intergovernmental Panel on Climate Change (IPCC) report.

The IPCC is the United Nation’s body for assessing the science related to climate change.
The IPCC relies on probability scales (terms such as “unlikely” and “very likely”) to convey the underlying uncertainty of its forecasts.

David Budescu demonstrated that people systematically misinterpret the probabilistic statements even when they are provided with clear definitions of these scales (e.g., unlikely < 33%; very likely > 90%).

David Budescu’s team succeeded in increasing the percentage of interpretations aligned with the definitions by changing the scale definitions to match the probabilities that participants would associate with words such as “likely” and “very likely” in daily life.

The percentage of consistent interpretations thus jumped from 26% to 40%. However, 40% consistency in comprehending the scales is still very low.
Imagine you are in a meeting with your colleagues and have just agreed to categorize a specific risk as “very likely,” referring to the scale’s definition where “very likely” means a probability higher than 90%.
According to this study, as much as 60% of your team could agree with you on the “very likely” label while, if they were asked to assign a probability to this same event, they would say something like > 50%.

The only thing you would have achieved in this meeting is creating an illusion of communication and agreement within your team on what the most important risks to tackle should be.

Conclusions:

In this article, we have shared three ways conventional risk management methods can be misleading and ambiguous.

I am not saying you should stop using these methods. However, it is important to be aware of these biases in order to make the most of such methods while limiting their possible drawbacks.
Below are a few guidelines to increase your chances of success:

  • If you are using scales, make sure they are coherent and aligned with each other.
  • When converting your scales to scores, always use symmetrical scores, otherwise you will favor one type of impact over the others.
  • Always prefer using quantitative probabilities directly, without relying on intermediate scales. It is the best way to get an unambiguous description of uncertainty.
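The last guideline can be sketched as follows: skip the qualitative scales entirely and rank risks by expected monetary value (probability × impact), using the illustrative figures from the earlier example:

```python
# Sketch: ranking risks by expected monetary value (EMV) instead of
# discrete scale scores. Figures match the earlier two-risk example.

risks = {
    "Risk 1": (0.70, 1_000_000),   # 70% chance of losing 1M$
    "Risk 2": (0.30, 5_000_000),   # 30% chance of losing 5M$
}

# Sort risks by EMV, largest first.
ranked = sorted(risks.items(),
                key=lambda kv: kv[1][0] * kv[1][1],
                reverse=True)

for name, (p, impact) in ranked:
    print(f"{name}: EMV = {p * impact:,.0f}$")
# Risk 2 comes first (1,500,000$ vs 700,000$)
```

With quantitative probabilities there is nothing to compress and nothing to misinterpret: the ordering falls directly out of the arithmetic.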

References

Budescu, D.V., Por H., & Broomell, S. (2012). Effective communication of uncertainty in the IPCC reports. Climatic Change, 113, 181–200.

Budescu, D.V., Por, H., Broomell, S., & Smithson, M. (2014). The interpretation of IPCC probabilistic statements around the world. Nature Climate Change, 4, 508–512. DOI: 10.1038/nclimate2194.
