Make Good Decisions

My daughter, who’s in middle school, recently bought her first new phone. She used her hard-earned money from a lucrative summer of neighborhood jobs (dog-walking, cat-sitting, plant-watering, babysitting). On the day she picked up the phone, she had a decision to make. Should she buy insurance to protect against accidental damage, reduce the likelihood of damage by buying a phone-case, or do nothing and take her chances?

Photo by Ali Abdul Rahman on Unsplash

How should she make that decision? Not fearfully, nor blithely, but rather, rationally.

Decide Rationally¹

There are 3 possible decisions:

  1. Buy insurance (and no phone-case). Let’s say that the insurance premium costs $100 for a year, and if she needs a repair/replacement she has to pay a deductible of $50.
  2. Buy a phone-case (and no insurance). Suppose that the simplest case that appeals to her costs $40.
  3. Risk it without any insurance or phone-case.

There are 2 possible outcomes for each decision:

  1. The phone survives the year without any significant damage.
  2. The phone breaks within the year, requiring repair/replacement. With insurance, this would cost her the deductible. Without insurance, she’d be out about $200 to get back a working phone. A phone-case would lower the risk of the phone breaking in the first place.

The enumerated decisions and outcomes can be visualized as a logical tree. Each path from the root to a leaf represents one way events could play out.

Decision tree (image by author)

Does she have enough information to make a rational decision? Not quite. A critical missing piece is the probability of each outcome:

  1. The probability of breaking the phone within a year, without a case.
  2. The probability of breaking the phone within a year, with a case.

If this were a game of dice or cards or coin-tosses, she would know the probabilities objectively and accurately. But real life is messy. There’s uncertainty in the underlying probabilities. Does that doom a rational decision? No.

She starts with some reasonable subjective estimates of the probabilities. Given that she’s a self-professed klutz, she thinks there’s a significant chance that she could end up breaking the phone, say about 50%. And a phone-case would cut that down by about half (so 25%). Then, she calculates the expected value of each decision:

expected value = V + v_1 x p_1 + v_2 x p_2 + ... + v_n x p_n
where V is the immediate value (cost) of the decision itself, and v_i, p_i are the value and probability respectively of the i-th possible outcome.
So,
╔═══════════════════════════╦══════════════════════════════════════╗
║       Decision            ║            Expected value            ║
╠═══════════════════════════╬══════════════════════════════════════╣
║ Insurance: yes, Case: no  ║ -100 +  -50 x 0.50 + 0 x 0.50 = -125 ║
║ Insurance: no,  Case: yes ║  -40 + -200 x 0.25 + 0 x 0.75 =  -90 ║
║ Insurance: no,  Case: no  ║    0 + -200 x 0.50 + 0 x 0.50 = -100 ║
╚═══════════════════════════╩══════════════════════════════════════╝
Decision tree with expected values and probabilities (image by author)
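
If you prefer code to tables, here’s a minimal Python sketch of the same calculation. The helper function and its name are my own; the dollar amounts and probabilities are the illustrative ones from the example above.

def expected_value(decision_cost, outcomes):
    """outcomes is a list of (value, probability) pairs for that decision."""
    return decision_cost + sum(v * p for v, p in outcomes)

p_break = 0.50            # her estimate of breaking the phone without a case
p_break_case = 0.25       # roughly halved by a case

decisions = {
    "insurance, no case": expected_value(-100, [(-50, p_break), (0, 1 - p_break)]),
    "case, no insurance": expected_value(-40, [(-200, p_break_case), (0, 1 - p_break_case)]),
    "neither": expected_value(0, [(-200, p_break), (0, 1 - p_break)]),
}

for name, ev in sorted(decisions.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{name}: {ev:.0f}")

# case, no insurance: -90
# neither: -100
# insurance, no case: -125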

What does the expected value mean? Think of it in terms of the scenario for a decision playing out repeatedly. So, over and over again she buys a phone and lives with it for one year and sees the outcome of her decision. Then, in some of the repetitions the phone will break (e.g. about half the repetitions when her decision was to not buy a case) and in others it won’t. Her average gain/loss over all the repetitions is the expected value of the decision. So, she makes the rational decision to minimize her expected cost (or, equivalently, maximize expected value) by declining insurance and buying a protective case (since -90 > -125 and -90 > -100).
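
To make the repeated-scenario intuition concrete, here’s a rough simulation sketch (using the illustrative numbers above): play out the “buy a case” decision many thousands of times and check that the average loss hovers near its expected value of -$90.

import random

random.seed(0)
trials = 100_000
p_break_with_case = 0.25

total = 0
for _ in range(trials):
    cost = -40                       # she buys the case every time
    if random.random() < p_break_with_case:
        cost += -200                 # the phone breaks this year anyway
    total += cost

print(total / trials)                # close to the expected value of -90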

You object that in real life she doesn’t get to play out this scenario repeatedly. You’re right. But she does get to play out many, many different scenarios and the statistics play out over all those scenarios. So, if she consistently makes decisions that maximize her expected value, she will do better than if she made decisions irrationally. See What’s Luck Got To Do With It for a more detailed discussion of repeated events versus a large number of distinct events.

There’s still something else bothering you. You suspect that her decision was based on a faulty estimate of the probabilities, and that a different estimate would lead to a different rational decision. Again, you’re right. The way to deal with the uncertainty in the underlying probabilities is to find the thresholds at which the optimal decision changes, and then judge which side of a threshold you’re most likely to be on. With the specific values for premium, deductible, case, and repair, it turns out that as long as the probability of breaking the phone without a case is below 40%, it’s better for her to take her chances without insurance or a case. If the likelihood is higher than 40%, then a case makes the most sense. Buying insurance was never the best option.

Expected value for each decision as a function of the probability of breaking the phone (image by author)
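
Here’s a small sketch of that threshold analysis, assuming (as above) that a case halves the probability of breaking the phone. It sweeps the probability and reports where the best decision changes; the function name is my own.

def best_decision(p):
    """p is the probability of breaking the phone without a case."""
    evs = {
        "insurance": -100 - 50 * p,
        "case": -40 - 200 * (p / 2),   # a case halves the breakage probability
        "neither": -200 * p,
    }
    return max(evs, key=evs.get)

previous = None
for i in range(101):
    p = i / 100
    choice = best_decision(p)
    if choice != previous:
        print(f"p >= {p:.2f}: {choice}")
        previous = choice

# p >= 0.00: neither
# p >= 0.40: case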

My daughter erred on the side of caution, estimated that her chance of breaking the phone was greater than 40%, and decided to buy a case for her phone. She’s been happy with her decision so far.

Good Decisions Can Have Bad Outcomes

What if my daughter had broken her phone just a few days after declining insurance? Then, instead of simply incurring $150 for the insurance premium and deductible, she would be out $240 for the case and the replacement phone. Surely she must rue her decision then, no? No! Or at least she shouldn’t. A good decision can have a bad outcome (due to bad luck). Conversely, a bad decision can have a good outcome (due to good luck). Despite the overwhelming inclination to do so, never evaluate the quality of a decision based on the quality of the outcome. If you make consistently good decisions, you will have better outcomes more of the time (compared to if you make consistently bad decisions or make good decisions inconsistently).

Quality of decision vs quality of outcome (image by author)

Make Exceptions For Exceptional Stakes

The decision-making rule of maximizing expected value works well for most cases. But, not so when the stakes are exceptionally high. Sometimes you have to forego an expected gain because it would be difficult to realize it. And, sometimes you have to incur an expected loss to protect yourself from catastrophes.

Sometimes Forego An Expected Gain

Suppose you’re offered the chance to play a lottery with favorable odds. You have a 1-in-a-million chance of winning a billion dollars. The cost of playing is $100. You can play as many times as you’d like.

Photo by dylan nolte on Unsplash

The expected value of playing is $900:

expected value = -$100 + $1,000,000,000 x 1/1,000,000 = $900
Decision tree for the lottery (image by author)

So you should play, right? And if you lose, you should keep playing until you win, right? No, unless you’re willing to first lose hundreds of millions of dollars. The odds of winning, and thereby realizing the expected gain, are so low that most of the time you’re going to lose. On average you’ll have to play a million times before you win, costing you $100,000,000. Sure, you’ll make up for that after a win, but do you have that much capital and time to invest?
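
To see just how unlikely you are to realize that expected gain, here’s a quick sketch computing the chance of never winning for a few bankrolls (the bankroll figures are my own, for illustration).

p_win = 1e-6
ticket = 100

for bankroll in (10_000, 1_000_000, 100_000_000):
    plays = bankroll // ticket
    p_no_win = (1 - p_win) ** plays
    print(f"${bankroll:,} buys {plays:,} plays; chance of never winning: {p_no_win:.2%}")

# $10,000 buys 100 plays; chance of never winning: 99.99%
# $1,000,000 buys 10,000 plays; chance of never winning: 99.00%
# $100,000,000 buys 1,000,000 plays; chance of never winning: 36.79%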

So, to revise our decision-making heuristic, decide on the path that maximizes expected value, except when the overwhelmingly likely scenario is a loss. In that case, prefer another decision, if available, in which a loss is not extremely likely. Don’t play the lottery.

Sometimes Incur An Expected Loss

My family and I live in an earthquake-prone region where a big one is due within the next few decades. We’ve grappled with the decision of whether to buy earthquake insurance to cover our home. Suppose the insurance premium is $1,000 a year, with a $100,000 deductible if we need the insurance to cover a total loss of our home. Without insurance, suppose that the replacement/repair would cost $1,000,000.

Photo by Jose Antonio Gallego Vázquez on Unsplash

If the likelihood of a catastrophic earthquake within the next year is low (lower than about 1 in 1,000), then our expected cost is lowest if we don’t buy insurance.

Decision tree for earthquake insurance (image by author)
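
Here’s a short sketch of the break-even calculation behind that threshold, using the illustrative figures above (the helper names are my own).

premium, deductible, rebuild = 1_000, 100_000, 1_000_000

def ev_insured(p):        # p is the probability of a total loss this year
    return -premium - deductible * p

def ev_uninsured(p):
    return -rebuild * p

# Insurance has the higher expected value once p exceeds this threshold:
threshold = premium / (rebuild - deductible)
print(threshold)          # ~0.00111, i.e. roughly 1 in 900

print(ev_insured(0.0005), ev_uninsured(0.0005))   # -1050.0 -500.0  -> skip insurance
print(ev_insured(0.002), ev_uninsured(0.002))     # -1200.0 -2000.0 -> buy insurance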

So we should decline coverage against a total loss from an earthquake, right? No, not unless we’re willing to accept a small but non-negligible chance of becoming homeless. The worst-case scenario is catastrophic enough to warrant a relatively small cost to protect against it.

Tweaking our decision-making heuristic again, decide on the path that maximizes expected value, except when the overwhelmingly likely scenario is a loss, or the worst-case scenario is catastrophic. In the case of a catastrophic worst-case, prefer another decision, if available, in which the worst-case is more palatable. Don’t risk what’s important.

Use Tools

Armed with heuristics on how to make rational decisions, how should you actually go about it without getting all bogged down in calculations every time you’re faced with a decision? Using appropriate tools, of course. If you’re a spreadsheet ninja or a programmer, you probably feel comfortable analyzing the expected value, likely case, worst case, and probability thresholds for each decision you’re faced with. For the rest of you, I’m happy to share a rough prototype of what I’ve been using: https://vishesh-khemani.github.io/decisions/decision.html. The screenshots below will give you an idea of what it does.
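
To give a flavor of what such a tool computes under the hood, here’s a bare-bones sketch (not the linked prototype) that prints the worst case, expected value, and best case for each option in the phone example.

def summarize(name, cost, outcomes):
    """outcomes: list of (value, probability) pairs for this decision."""
    totals = [cost + v for v, _ in outcomes]
    ev = cost + sum(v * p for v, p in outcomes)
    print(f"{name}: worst {min(totals)}, expected {ev:.0f}, best {max(totals)}")

summarize("insurance", -100, [(-50, 0.50), (0, 0.50)])
summarize("case", -40, [(-200, 0.25), (0, 0.75)])
summarize("neither", 0, [(-200, 0.50), (0, 0.50)])

# insurance: worst -150, expected -125, best -100
# case: worst -240, expected -90, best -40
# neither: worst -200, expected -100, best 0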

Screenshot of the input spec for a decision tree (image by author)
Screenshot of the rendered decision tree (image by author)
Screenshot of auto-generated parameters that can be tweaked to find thresholds (image by author)
Screenshot of "candlestick" values (min, expected, 90th percentile, max) for each decision, for an at-a-glance view of the relevant metrics (image by author)

Summary

  1. Make decisions rationally by usually choosing the option that maximizes your expected value.
  2. If there’s too much uncertainty in the probability of each scenario, determine the threshold probabilities at which the optimal decision changes and judge which side of the thresholds is most likely.
  3. If you consistently make good decisions, you’ll have many well-deserved good outcomes and a few unlucky bad ones. Never judge the quality of one decision solely on the quality of its outcome.
  4. Don’t play the lottery: reject the decision with the largest expected value if its most likely outcome is a loss (unless there are no better alternatives).
  5. Don’t risk what’s important: reject the decision with the largest expected value if its worst case outcome is catastrophic (unless there are no better alternatives).
  6. Use tools to help evaluate decisions. One such tool is my rough prototype at https://vishesh-khemani.github.io/decisions/decision.html.

References

  1. Maxims For Thinking Analytically – Dan Levy
