Topic: Decision Analysis
In this topic, we examine tools that help managers make good decisions. These tools will
not be based on linear programming.
In all of our LP-based approaches, we assumed that all problem data (objective
function values, constraint coefficients, right-hand-side values) were known with certainty.
For instance, when we were deciding how many bowls and mugs to make, we assumed that
we knew exactly how much they sold for, how much clay and labor were available, etc. In
many real-life situations, uncertainty would exist about some of these values. There could
be a high-demand scenario and a low-demand scenario, for instance, and we need to decide
what to make before we know which scenario will occur. In this topic, we will discuss
techniques to help make decisions under uncertain conditions. The first set of tools is for
situations where we do not have any information on how likely particular scenarios are.
Following that, we will discuss making decisions when we have estimates of how likely the
different scenarios are.
Components of Decision Making
Every decision-making situation has two different components.
Decisions: Actions that we can control, but we need to make them before the future
is known with certainty
States of Nature: Future scenarios that we cannot control
For example, suppose that you need to stock coffee at a concession stand in November, and
the weather is uncertain. If the weather is cold, the demand for coffee will be high. If the
weather is warm, the demand for coffee will be low.
Decision:
States of Nature:
To help make decisions in situations like this, we organize the results into payoff tables.
These help us visualize what results will occur in each scenario, so that we can make the
best decision. A generic payoff table is shown below. A payoff table always contains one row
for each decision and one column for each possible state of nature.
                       State of Nature
Decisions              State A        State B
Decision 1             Payoff 1A      Payoff 1B
Decision 2             Payoff 2A      Payoff 2B

Decisions are under our control. States of nature are beyond our control; we don’t know
which state will happen at the time we make our decision.
In general, the payoffs can be profits (which we would want to maximize) or costs (which
we would like to minimize).
Decision Making under conditions of uncertainty
Decision Making WITHOUT Probabilities
In this section, we assume that we have no information about how likely each state of
nature is. We will know which states of nature are possible and what payoff occurs for
each state and each decision. We need to make the “best” decision now, before the state of
nature is revealed. However, there are several different ways to define “best.”
To illustrate, suppose that you must decide whether to invest in an apartment building, an
office building, or a warehouse. The following payoff table summarizes the profit each
decision achieves under each state of nature (or scenario).
                       State of Nature
Decisions              Good Economic Conditions    Poor Economic Conditions
Apartment Building     $50,000                     $30,000
Office Building        $100,000                    -$40,000
Warehouse              $30,000                     $10,000
If you knew for certain that good economic conditions were coming, what investment
decision would you make?
If you knew for certain that poor economic conditions were coming, what investment
decision would you make?
A decision is called dominant if it is the best decision under all states of nature. Do we
have a dominant decision in this example?
Since there is no dominant decision, we need to determine which decision is best in some
other sense. There are several proposed methods for measuring what is best. The one you
select depends on your personal view of what is most important.
Optimistic Criterion
In the optimistic criterion, we compute the payoff for each decision in its best-case
scenario. Then, we select the decision that is the best among the best-case payoffs. If we
are dealing with profits, this is referred to as the maximax criterion, since we want the
maximum of the maximum profits. If we are dealing with costs, it is referred to as the
minimin criterion, since we want the minimum of the minimum costs.
We can illustrate the optimistic criterion on our investment problem as follows.
                       Economic Conditions
Decisions              Good          Poor           Best-Case Payoff
Apartment Building     $50,000       $30,000
Office Building        $100,000      -$40,000
Warehouse              $30,000       $10,000

Best of the Best-Case Payoffs:
Best Decision Using the Optimistic Criterion:
Can you see any drawbacks to using this criterion to make the decision?
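If you want to check this criterion numerically, here is a minimal Python sketch (the data are the investment payoffs above; the dictionary layout and variable names are just one possible choice):

    # Optimistic (maximax) criterion for the investment example.
    payoffs = {
        "Apartment Building": {"Good": 50_000, "Poor": 30_000},
        "Office Building":    {"Good": 100_000, "Poor": -40_000},
        "Warehouse":          {"Good": 30_000, "Poor": 10_000},
    }

    # Best-case payoff for each decision, then the best of those best cases.
    best_case = {d: max(row.values()) for d, row in payoffs.items()}
    choice = max(best_case, key=best_case.get)
    print(best_case)                   # best-case payoffs: 50,000 / 100,000 / 30,000
    print("Maximax choice:", choice)

For a cost table, you would replace both max calls with min to get the minimin choice.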
Pessimistic Criterion
In the pessimistic criterion, we compute the payoff for each decision in its worst-case
scenario. Then, we select the decision that is the best among the worst-case payoffs. If we
are dealing with profits, this is referred to as the maximin criterion, since we want the
maximum of the worst-case profits. If we are dealing with costs, it is referred to as the
minimax criterion, since we want the minimum of the worst-case (max) costs.
We can illustrate the pessimistic criterion on our investment problem as follows.
                       Economic Conditions
Decisions              Good          Poor           Worst-Case Payoff (maximin)
Apartment Building     $50,000       $30,000
Office Building        $100,000      -$40,000
Warehouse              $30,000       $10,000

Best of the Worst-Case Payoffs:
Best Decision Using the Pessimistic Criterion:
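The same data can be run through a short maximin sketch (again, the structure and names are just illustrative):

    # Pessimistic (maximin) criterion for the investment example.
    payoffs = {
        "Apartment Building": {"Good": 50_000, "Poor": 30_000},
        "Office Building":    {"Good": 100_000, "Poor": -40_000},
        "Warehouse":          {"Good": 30_000, "Poor": 10_000},
    }

    # Worst-case payoff for each decision, then the best of those worst cases.
    worst_case = {d: min(row.values()) for d, row in payoffs.items()}
    choice = max(worst_case, key=worst_case.get)
    print(worst_case)                  # worst-case payoffs: 30,000 / -40,000 / 10,000
    print("Maximin choice:", choice)

For costs the roles flip: take the maximum cost in each row and then pick the decision with the smallest of those (minimax).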
Note that the pessimistic criterion is relatively conservative, since it only considers worst-
case scenarios. It tries to avoid any decisions that could have very bad consequences, but
this means that it often shuns decisions with big payoffs (since big rewards also usually
have big risks).
Minimax Regret Criterion
The minimax regret criterion seeks to minimize the regret you would feel if you picked a
decision that turned out to be bad for the scenario (state of nature) that actually occurred.
To illustrate, suppose that you had purchased the warehouse, and afterward, the
economic conditions turned out to be good. How much regret would you feel?
Payoff for Your Decision:
Payoff for Best-Possible Decision for Good Economy:
Regret =
In this criterion, we want to minimize the maximum regret you could feel. We use the
following steps.
1. For each scenario, compute the best-possible payoff over all decisions.
2. For decision 𝑖 and scenario 𝑗, compute Regret = (Best Possible Payoff for Scenario 𝑗) −
(Payoff for Decision 𝑖 in Scenario 𝑗). (For a cost table, the regret is the decision’s cost
minus the best-possible cost.)
3. Find the maximum regret for each decision.
4. Select the decision that minimizes the maximum regret.
We can illustrate the minimax regret criterion on our investment problem as follows. This
table is sometimes referred to as a regret table or opportunity loss table.
                       Economic Conditions
                       Good                       Poor
Decisions              Payoff        Regret       Payoff        Regret       Max Regret for this Decision
Apartment Building     $50,000                    $30,000
Office Building        $100,000                   -$40,000
Warehouse              $30,000                    $10,000
Best Payoff for
this Scenario
Step 1: Determine the best payoff for each scenario.
Step 2: Calculate the regret you would feel for each decision, once the state of nature occurred.
Step 3: Calculate the maximum regret you’d feel for each decision.
Step 4: Then select the decision that minimizes this value.

Best Decision Using the Minimax Regret Criterion:
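The four steps translate almost line for line into a short Python sketch (same investment payoffs; names are illustrative):

    # Minimax regret criterion for the investment example.
    payoffs = {
        "Apartment Building": {"Good": 50_000, "Poor": 30_000},
        "Office Building":    {"Good": 100_000, "Poor": -40_000},
        "Warehouse":          {"Good": 30_000, "Poor": 10_000},
    }
    states = ["Good", "Poor"]

    # Step 1: best payoff for each scenario.
    best_in_state = {s: max(payoffs[d][s] for d in payoffs) for s in states}
    # Step 2: regret = best payoff for the scenario minus the decision's payoff.
    regret = {d: {s: best_in_state[s] - payoffs[d][s] for s in states} for d in payoffs}
    # Step 3: maximum regret for each decision.  Step 4: minimize it.
    max_regret = {d: max(regret[d].values()) for d in payoffs}
    choice = min(max_regret, key=max_regret.get)
    print(max_regret)                        # 50,000 / 70,000 / 70,000
    print("Minimax regret choice:", choice)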
Equal Likelihood Criterion (Laplace criterion)
The equal likelihood criterion (also called the Laplace criterion) determines the
average payoff for each decision if the scenarios are assumed to be equally likely. It
computes the payoff for each decision as the average payoff across all scenarios, and it
selects the decision that has the best average payoff.
                 Economy
Decisions        Good          Poor           Average Payoff
Apartments       $50,000       $30,000
Office           $100,000      -$40,000
Warehouse        $30,000       $10,000
Best Decision Using the Equal Likelihood Criterion:
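As a quick check, the Laplace averages can be computed with a few lines of Python (illustrative names again):

    # Equal likelihood (Laplace) criterion: a straight average across scenarios.
    payoffs = {
        "Apartments": {"Good": 50_000, "Poor": 30_000},
        "Office":     {"Good": 100_000, "Poor": -40_000},
        "Warehouse":  {"Good": 30_000, "Poor": 10_000},
    }

    average = {d: sum(row.values()) / len(row) for d, row in payoffs.items()}
    choice = max(average, key=average.get)    # use min(...) instead for costs
    print(average)                            # 40,000 / 30,000 / 20,000
    print("Laplace choice:", choice)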
Hurwicz Criterion
The Hurwicz criterion seeks a compromise between the optimistic and pessimistic
criteria. It was developed by Leonid Hurwicz in 1951. The idea is to take a weighted
combination of the optimistic and pessimistic values, since most people are not totally
optimistic, nor totally pessimistic. It uses a coefficient of optimism (denoted by 𝛼, with
0 ≤ 𝛼 ≤ 1) that reflects the proportion of weight placed on the optimistic criterion (i.e., it is
a measure of the decision maker’s optimism).
Note that (1 − 𝛼) is referred to as the coefficient of pessimism and is the weight placed on
the pessimistic criterion (and is also between 0 and 1).
For each decision, the Hurwicz criterion computes
𝛼 ∗ (Best-Case Payoff) + (1 − 𝛼) ∗ (Worst-Case Payoff)
The decision with the best value is selected. (For profit problems, we want the maximum
value, and for costs, we want the minimum value.)
Note: When 𝛼 = 1, Hurwicz is the same as the optimistic criterion. When 𝛼 = 0, it
is the same as the pessimistic criterion.
We can illustrate the Hurwicz criterion on our investment problem as follows.
We will illustrate for 𝛼 = 0.4. (This means the decision-maker is 40% optimistic and 60%
pessimistic.)
                 Economy                     Best-Case    Worst-Case    𝛼 ∗ (Best-Case) + (1 − 𝛼) ∗ (Worst-Case)
Decisions        Good          Poor          Payoff       Payoff
Apartments       $50,000       $30,000
Office           $100,000      -$40,000                                 0.4(100,000) + 0.6(−40,000) = 16,000
Warehouse        $30,000       $10,000                                  0.4(30,000) + 0.6(10,000) = 18,000
Best Decision Using the Hurwicz Criterion (𝛼 = 0.4):
Note that in this example, there are only two scenarios (good economy and bad economy),
so the best-case payoff is always from the good economic conditions, while the worst-case
payoff is always from poor economic conditions. This might not always be the case; there
could be three or more scenarios that could occur, and the best-case scenarios could vary
for the different decisions.
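A small sketch of the Hurwicz computation for 𝛼 = 0.4 (the per-row max and min automatically find each decision’s best and worst case, so the same code works when there are more than two scenarios):

    # Hurwicz criterion with coefficient of optimism alpha = 0.4.
    payoffs = {
        "Apartments": {"Good": 50_000, "Poor": 30_000},
        "Office":     {"Good": 100_000, "Poor": -40_000},
        "Warehouse":  {"Good": 30_000, "Poor": 10_000},
    }
    alpha = 0.4

    hurwicz = {d: alpha * max(row.values()) + (1 - alpha) * min(row.values())
               for d, row in payoffs.items()}
    choice = max(hurwicz, key=hurwicz.get)
    print(hurwicz)                    # 38,000 / 16,000 / 18,000
    print("Hurwicz choice:", choice)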
Ex1:
The following payoff tables give the payoffs for three strategies and two states of nature. For
each table, select a strategy using each of the following decision criteria: (a) Maximax,
(b) Minimax regret, (c) Maximin, (d) Laplace criterion (assuming equiprobable states),
(e) Hurwicz criterion (𝛼 = 0.3).
1)
             State of Nature
Strategy     N1         N2
S1           50         60
S2           10         -15
S3           -40        110
2)
             State of Nature
Strategy     N1         N2
S1           400        1200
S2           300        800
S3           1000       1850
Ex 2:
Ex 3:
Ex 4:
Decision Making under conditions of risk
Decision Making WITH Probabilities
Previously, we assumed that we knew all of the scenarios (or states of nature) that could
occur, but we had no information on how likely they were. In this section, we assume that
we do have an estimate for how likely each scenario is.
Expected Value Criterion (EMV)
In the equal likelihood criterion, we implicitly assumed that the scenarios were all equally
likely and computed each decision’s expected payoff as a straight average. We now extend that
concept to cases where the scenarios are not equally likely. We will again compute the
expected value of the payoff for each decision, but we will calculate weighted averages,
using the probabilities that we are given for each scenario.
Expected Payoff for Decision 𝑖
= ∑ (Payoff of Decision 𝑖 under Scenario 𝑗) ∗ (Probability of Scenario 𝑗),
where the sum runs over all scenarios 𝑗.
To illustrate, we return to the investment problem, where we are now told that there is a
60% chance that the economy will be good (and a 40% chance that it will be poor).
                 Economy
Decisions        Good (Prob = 0.6)    Poor (Prob = 0.4)    Expected Payoff
Apartments       $50,000              $30,000
Office           $100,000             -$40,000             100,000(0.6) + (−40,000)(0.4) = 44,000
Warehouse        $30,000              $10,000              30,000(0.6) + 10,000(0.4) = 22,000
Best Decision Using the Expected Value Criterion:
CAUTION: You need to be careful in how you interpret the expected payoff. We found the
expected payoff for the apartment to be $42,000. Does that mean that the apartment will
give us a payoff of $42,000 if we purchase it?
What does the expected apartment payoff of $42,000 mean?
If we repeated this decision every year, some years we would make $50,000.
Other years we would make $30,000.
Over many years, we would average a return of $42,000.
However, in no year will you make exactly $42,000.
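A minimal sketch of the EMV calculation, using the 0.6/0.4 probabilities above (the dictionary layout is just illustrative):

    # Expected value (EMV) criterion with P(Good) = 0.6 and P(Poor) = 0.4.
    payoffs = {
        "Apartments": {"Good": 50_000, "Poor": 30_000},
        "Office":     {"Good": 100_000, "Poor": -40_000},
        "Warehouse":  {"Good": 30_000, "Poor": 10_000},
    }
    prob = {"Good": 0.6, "Poor": 0.4}

    emv = {d: sum(prob[s] * row[s] for s in prob) for d, row in payoffs.items()}
    choice = max(emv, key=emv.get)    # for costs, use min(...) instead
    print(emv)                        # 42,000 / 44,000 / 22,000
    print("EMV choice:", choice)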
Expected Opportunity Loss Criterion (EOL)
A criterion closely related to the expected value criterion is the expected opportunity loss
(EOL) criterion. Just as we did for the minimax regret criterion, we again compute the
regret associated with each possible decision under each possible scenario. Rather than
finding the maximum regret (or opportunity loss) for each decision, we instead compute
the expected regret. To do this, we will multiply the regret for each scenario by its
probability of occurring, and then sum all of the values.
We can illustrate this process for the investment example.
                       Economic Conditions
                       Good (Prob = 0.6)          Poor (Prob = 0.4)
Decisions              Payoff        Regret       Payoff        Regret       Expected Regret for this Decision
Apartment Building     $50,000                    $30,000
Office Building        $100,000                   -$40,000                   0(0.6) + 70,000(0.4) = 28,000
Warehouse              $30,000                    $10,000                    70,000(0.6) + 20,000(0.4) = 50,000
Best Payoff for
This Scenario
Step 1: Determine the best payoff for each scenario.
Step 2: Calculate the regret you would feel for each decision, once the state of nature occurred.
Step 3: Calculate the expected (or average) regret you’d feel for each decision.
Step 4: Then select the decision that minimizes this value.
Best Decision Using the Expected Opportunity Loss Criterion:
Important Note: The expected opportunity loss and the expected value criteria always yield
the same decision, so it is not necessary to compute them both.
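A short sketch that computes the EOL values and confirms that the EOL and EMV criteria pick the same decision (illustrative names):

    # Expected opportunity loss (EOL) criterion for the investment example.
    payoffs = {
        "Apartment Building": {"Good": 50_000, "Poor": 30_000},
        "Office Building":    {"Good": 100_000, "Poor": -40_000},
        "Warehouse":          {"Good": 30_000, "Poor": 10_000},
    }
    prob = {"Good": 0.6, "Poor": 0.4}

    best_in_state = {s: max(payoffs[d][s] for d in payoffs) for s in prob}
    eol = {d: sum(prob[s] * (best_in_state[s] - payoffs[d][s]) for s in prob)
           for d in payoffs}
    choice = min(eol, key=eol.get)
    print(eol)                    # 30,000 / 28,000 / 50,000
    print("EOL choice:", choice)  # Office Building, the same as the EMV choice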
Expected Value of Perfect Information (EVPI):
Sometimes, it is possible to purchase additional information regarding how likely future
events are to occur. Perhaps we could hire a psychic, or invest in better equipment to
monitor weather conditions, or maybe hire an outside firm to do market research. If we
are presented with this option, we want to make sure we don’t spend more money on
obtaining the information than it will yield. Toward this end, we define the expected
value of perfect information (EVPI) to measure how much we could improve our average
payoff if we knew exactly which scenario was going to occur.
Consider our investment example. Again, assume that there is a 60% chance that the
economy is good and 40% chance that it is bad. If you knew for certain what the economy
would be (that is, had perfect information), what would your decisions and payoffs be?
                 Economy
Decisions        Good (Prob = 0.6)    Poor (Prob = 0.4)
Apartments       $50,000              $30,000
Office           $100,000             -$40,000
Warehouse        $30,000              $10,000
Best Decision
Payoff
Since there is a 60% chance that you would make $100,000 and a 40% chance that you
would make $30,000, the expected payoff with perfect information (EPPI) would be
0.6 ∗ $100,000 + 0.4 ∗ $30,000 = $72,000. (Average profit if we had perfect information.)
In general, the expected payoff with perfect information (EPPI) is given by
EPPI = ∑ (Best Payoff under Scenario 𝑗) ∗ (Probability of Scenario 𝑗), summed over all scenarios 𝑗
In order to calculate the value of having perfect information, we need to compare this to the
expected payoff if we did not have perfect information. If we didn’t know which scenario
were going to happen, we would choose the decision that was best in terms of its expected
value. Recall that we had the following expected payoffs, where
Expected Payoff for Decision 𝑖
= ∑ (Payoff of Decision 𝑖 under Scenario 𝑗) ∗ (Probability of Scenario 𝑗).
                 Economy
Decisions        Good (Prob = 0.6)    Poor (Prob = 0.4)    Expected Payoff
Apartments       $50,000              $30,000              0.6($50,000) + 0.4($30,000) = $42,000
Office           $100,000             -$40,000             0.6($100,000) + 0.4(−$40,000) = $44,000
Warehouse        $30,000              $10,000              0.6($30,000) + 0.4($10,000) = $22,000
Given these numbers, we would invest in the office, which gives an expected payoff of
$44,000. What does this mean? If we did not have perfect info, we’d go with what does best
on average (the office). If we repeated this many times, we’d make $44,000 on average.
This is obviously a lower expected profit than if we had perfect information.
The expected value of perfect information (EVPI) is given by the difference of these
two values.
𝐸𝑉𝑃𝐼 = 𝐸𝑃𝑃𝐼 − (Best Expected Payoff over All Possible Decisions)
In our case, we had 𝐸𝑃𝑃𝐼 = $72,000 and the best expected payoff (from the office decision)
of $44,000. Therefore, the value of having perfect information is given as follows.
𝐸𝑉𝑃𝐼 = EPPI − EMV =
NOTE: EVPI can also be found another way. EVPI is always equal to the expected
opportunity loss (EOL) or expected regret for the best decision. We can verify this by
looking back at the EOL calculation for the office decision.
𝐸𝑂𝐿 = 0.6 ∗ $0 + 0.4 ∗ $70,000 = $28,000
You can calculate it either way – choose whichever you find easiest.
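The EPPI − EMV route fits in a few lines of Python (a sketch on the same illustrative dictionary used earlier):

    # EVPI for the investment example: EPPI minus the best EMV.
    payoffs = {
        "Apartments": {"Good": 50_000, "Poor": 30_000},
        "Office":     {"Good": 100_000, "Poor": -40_000},
        "Warehouse":  {"Good": 30_000, "Poor": 10_000},
    }
    prob = {"Good": 0.6, "Poor": 0.4}

    eppi = sum(prob[s] * max(payoffs[d][s] for d in payoffs) for s in prob)
    best_emv = max(sum(prob[s] * row[s] for s in prob) for row in payoffs.values())
    print("EPPI =", eppi)                 # 72,000
    print("EVPI =", eppi - best_emv)      # 72,000 - 44,000 = 28,000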
Ex: A Cost Example
In the above example, we were interested in maximizing profit. We now complete another
example, in which we will do similar calculations for the case where we want to minimize
cost. Consider an electronics manufacturer that wants to add a new supplier for some of its
components. It has some concerns about losses that could occur due to a supplier
shutdown from natural disasters. The following table summarizes the losses (in millions of
dollars) for an extended supplier shutdown, given different levels of event severity.
                     Event Severity
Supplier Country     Low        Moderate     Normal
Taiwan               8          11           21
India                6          7            14
Thailand             3          12           17
Philippines          5          9            15

These are losses under different scenarios, so smaller numbers are better.
Determine the best decision using each of the following criteria. Remember that these
values are losses.
                 Event Severity                      Best-Case     Worst-Case    Equal Likelihood (Laplace criterion)
Decisions        Low        Moderate     Normal      (Minimin)     (Minimax)     Average Payoff
Taiwan           8          11           21          8             21            (8 + 11 + 21)/3 = 13.33
India            6          7            14          6             14            (6 + 7 + 14)/3 = 9
Thailand         3          12           17          3             17            (3 + 12 + 17)/3 = 10.67
Philippines      5          9            15          5             15            (5 + 9 + 15)/3 = 9.67

Best Decision Using Optimistic Criterion: Thailand
Best Decision Using Pessimistic Criterion: India
Best Decision Using Equal Likelihood Criterion: India
Note: Taiwan is dominated by India, so you could eliminate Taiwan from consideration.
The Philippines is not dominated by India, nor by Thailand.
We can also calculate the minimax regret criterion.
                 Event Severity
                 Low                  Moderate             Normal
Decisions        Payoff    Regret     Payoff    Regret     Payoff    Regret     Max Regret for this Decision
Taiwan           8         5          11        4          21        7          7
India            6         3          7         0          14        0          3
Thailand         3         0          12        5          17        3          5
Philippines      5         2          9         2          15        1          2
Best Payoff for
This Scenario    3                    7                    14
Step 1: Determine the best payoff for each scenario.
Step 2: Calculate the regret you would feel for each decision, once the state of nature occurred.
Step 3: Calculate the maximum regret you’d feel for each decision.
Step 4: Then select the decision that minimizes this value.
Best Decision Using Minimax Regret Criterion:
Suppose that we are given the following information about how likely events are in each
country. Determine the best decision using the expected value criterion. NOTE: This
example varies slightly from our previous example, since it allows different scenario
probabilities for each decision.
                 Event Severity                      Probability of Event Severity
Decisions        Low        Moderate     Normal      Low        Moderate     Normal
Taiwan           8          11           21          0.43       0.45         0.12
India            6          7            14          0.56       0.33         0.11
Thailand         3          12           17          0.37       0.41         0.22
Philippines      5          9            15          0.47       0.46         0.07
Expected Loss for Taiwan:
Expected Loss for India:
Expected Loss for Thailand: 3(0.37) + 12(0.41) + 17(0.22) = 9.77
Expected Loss for Philippines: 5(0.47) + 9(0.46) + 15(0.07) = 7.54
Best Decision Using Expected Value Criterion: India
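Because each country here has its own probability row, the expected loss is just a row-by-row weighted sum, as in this sketch (the data layout and names are illustrative):

    # Expected loss when each supplier country has its own scenario probabilities.
    losses = {
        "Taiwan":      ([8, 11, 21], [0.43, 0.45, 0.12]),
        "India":       ([6, 7, 14],  [0.56, 0.33, 0.11]),
        "Thailand":    ([3, 12, 17], [0.37, 0.41, 0.22]),
        "Philippines": ([5, 9, 15],  [0.47, 0.46, 0.07]),
    }

    expected = {c: sum(l * p for l, p in zip(vals, probs))
                for c, (vals, probs) in losses.items()}
    choice = min(expected, key=expected.get)   # these are losses, so smaller is better
    print(expected)                            # e.g. Thailand 9.77, Philippines 7.54
    print("Lowest expected loss:", choice)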
EVPI for a Cost Example
We now illustrate the EVPI concept for a cost example. Since the previous example had the
complication of different scenario probabilities for different decisions, it is more
complicated to calculate EVPI for that example. Instead, we switch to a different example
where we seek the decision that minimizes cost given three different oil price scenarios.
             Oil Prices
City         Decrease     Same     Increase
Shanghai     2.7          3.9      6.3
Mumbai       2.1          3.8      6.5
Manila       1.7          4.3      6.1
Santos       3.5          4.5      5.7
Veracruz     4.1          5.1      5.4

These are costs under different scenarios, so smaller numbers are better.
We are interested in calculating the expected value of perfect information. We first need to
calculate the expected cost if we know which scenario will happen ahead of time. If you
knew for certain what the oil prices would be (that is, had perfect information), what would
your decisions and payoffs be?
                 Oil Prices
Decisions        Decrease (p = 0.09)    Same (p = 0.27)    Increase (p = 0.64)
Shanghai         2.7                    3.9                6.3
Mumbai           2.1                    3.8                6.5
Manila           1.7                    4.3                6.1
Santos           3.5                    4.5                5.7
Veracruz         4.1                    5.1                5.4
Best Decision
Payoff           1.7                    3.8                5.4
Therefore, the Expected Payoff with Perfect Information (EPPI) is given as follows.
EPPI = ∑ (Best Payoff Under Scenario 𝑖) ∗ (Probability of Scenario 𝑖)
= 1.7(0.09) + 3.8(0.27) + 5.4(0.64) = 4.635
To find EVPI, we also need to know the decision with the best expected payoff (cost).
                 Oil Prices
Decisions        Decrease (p = 0.09)    Same (p = 0.27)    Increase (p = 0.64)    Expected Payoff
Shanghai         2.7                    3.9                6.3                    2.7(0.09) + 3.9(0.27) + 6.3(0.64) = 5.328
Mumbai           2.1                    3.8                6.5                    2.1(0.09) + 3.8(0.27) + 6.5(0.64) = 5.375
Manila           1.7                    4.3                6.1                    1.7(0.09) + 4.3(0.27) + 6.1(0.64) = 5.218
Santos           3.5                    4.5                5.7                    3.5(0.09) + 4.5(0.27) + 5.7(0.64) = 5.178
Veracruz         4.1                    5.1                5.4                    4.1(0.09) + 5.1(0.27) + 5.4(0.64) = 5.202
Best Decision Using Expected Payoff:
Best Expected Payoff = 5.178
Therefore, we can calculate the expected value of perfect information. (For costs, the order of
subtraction is reversed from the profit case, because perfect information lowers the expected cost.)
𝐸𝑉𝑃𝐼 = (Best Expected Payoff over All Possible Decisions) − 𝐸𝑃𝑃𝐼
= 5.178 − 4.635 = 0.543
We can verify that this is the same value as we would obtain for the expected regret.
             Oil Prices
             Decrease (p = 0.09)     Same (p = 0.27)      Increase (p = 0.64)
Decisions    Payoff     Regret       Payoff     Regret    Payoff     Regret     Expected Regret for this Decision
Shanghai     2.7        1            3.9        0.1       6.3        0.9        1(0.09) + 0.1(0.27) + 0.9(0.64) = 0.693
Mumbai       2.1        0.4          3.8        0         6.5        1.1        0.4(0.09) + 0(0.27) + 1.1(0.64) = 0.74
Manila       1.7        0            4.3        0.5       6.1        0.7        0(0.09) + 0.5(0.27) + 0.7(0.64) = 0.583
Santos       3.5        1.8          4.5        0.7       5.7        0.3        1.8(0.09) + 0.7(0.27) + 0.3(0.64) = 0.543
Veracruz     4.1        2.4          5.1        1.3       5.4        0          2.4(0.09) + 1.3(0.27) + 0(0.64) = 0.567
Best Payoff for
This Scenario    1.7                 3.8                  5.4
Best Decision Using Expected Regret:
Best Expected Regret = EVPI
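The following sketch ties the whole oil-price example together by computing EVPI both ways and confirming they agree (the data layout is just illustrative):

    # Oil-price example: EVPI computed both ways for a cost table.
    costs = {
        "Shanghai": [2.7, 3.9, 6.3],
        "Mumbai":   [2.1, 3.8, 6.5],
        "Manila":   [1.7, 4.3, 6.1],
        "Santos":   [3.5, 4.5, 5.7],
        "Veracruz": [4.1, 5.1, 5.4],
    }
    probs = [0.09, 0.27, 0.64]   # P(decrease), P(same), P(increase)

    best_in_state = [min(row[j] for row in costs.values()) for j in range(3)]
    eppi = sum(p * b for p, b in zip(probs, best_in_state))
    expected_cost = {c: sum(p * v for p, v in zip(probs, row)) for c, row in costs.items()}
    best = min(expected_cost, key=expected_cost.get)

    evpi_from_eppi = expected_cost[best] - eppi          # best expected cost minus EPPI
    evpi_from_regret = sum(p * (v - b) for p, v, b in zip(probs, costs[best], best_in_state))
    print(best, round(evpi_from_eppi, 3), round(evpi_from_regret, 3))   # both routes give 0.543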
Ex:
A company manufactures parts for passenger cars and sells them in lots of 10,000 parts each.
The company has a policy of inspecting each lot before it is actually shipped to the retailer.
Five inspection categories, established for quality control, represent the percentage of
defective items contained in each lot. The daily inspection chart for the past 100 inspections
shows the following rating breakdown:

Sr. No.    Rating            Proportion of Defective Items    Frequency
1          Excellent (A)     0.02                             25
2          Good (B)          0.05                             30
3          Acceptable (C)    0.10                             20
4          Fair (D)          0.15                             20
5          Poor (E)          0.20                             5
                                                              Total = 100

Based on this, management is considering two possible courses of action:
(i) S1: Shut down the entire plant operations and thoroughly inspect each machine.
(ii) S2: Continue production as it now exists, but offer the customer a refund for defective
items that are discovered and subsequently returned.
The first alternative will cost Rs 600 while the second alternative will cost the company Re
1 for each defective item that is returned. What is the optimum decision for the company?
Find the EVPI.
Ans:
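One way to set the calculation up (a sketch only; it assumes that under S2 every defective item in a 10,000-part lot is discovered and returned at Re 1 each, and it uses the relative frequencies as probabilities):

    # Sketch for the parts-inspection exercise.
    proportions = [0.02, 0.05, 0.10, 0.15, 0.20]     # defective proportion per rating
    probs = [f / 100 for f in [25, 30, 20, 20, 5]]   # relative frequencies as probabilities

    cost_s1 = [600] * len(proportions)               # S1: flat Rs 600 regardless of rating
    cost_s2 = [10_000 * p for p in proportions]      # S2: Re 1 per returned defective item

    emv_s1 = sum(p * c for p, c in zip(probs, cost_s1))
    emv_s2 = sum(p * c for p, c in zip(probs, cost_s2))
    eppi = sum(p * min(c1, c2) for p, c1, c2 in zip(probs, cost_s1, cost_s2))
    print("EMV(S1) =", emv_s1, " EMV(S2) =", emv_s2)   # pick the action with the lower EMV
    print("EVPI =", min(emv_s1, emv_s2) - eppi)        # for costs, EVPI = best EMV - EPPI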
Ex:
A toy manufacturer is considering a project of manufacturing a dancing doll with three
different movement designs. The doll will be sold at an average price of Rs 10. The first
movement design, using gears and levers, will have the lowest tooling and set-up cost,
Rs 1,00,000, with a variable cost of Rs 5 per unit. A second design, with spring action, will
have a fixed cost of Rs 1,60,000 and a variable cost of Rs 4 per unit. Yet another design, with
weights and pulleys, will have a fixed cost of Rs 3,00,000 and a variable cost of Rs 3 per unit.
The demand events that can occur for the doll and the probabilities of their occurrence are
given below:
Sr. No.    Demand Type    Demand (in Units)    Probability
1          Light          25,000               0.10
2          Moderate       1,00,000             0.70
3          Heavy          1,50,000             0.20
(a) Construct a payoff table for the above project.
(b) Which is the optimum design?
(c) How much can the decision-maker afford to pay in order to obtain perfect information
about the demand?
Ans:
Payoff = (Demand × Selling price) – (Fixed cost + Demand × Variable cost)
= Revenue – Total variable cost – Fixed cost
Applying this payoff formula to each design and demand level, and then weighting by the
demand probabilities, gives the EMV for each design. Since the EMV is largest for the
spring-action design (Rs 4,55,000), it is the one that should be selected.
The maximum amount of money that the decision-maker would be willing to pay in order
to obtain perfect information regarding demand for the doll will be
EVPI = Expected payoff with perfect information − Expected payoff under uncertainty
     = EPPI − EMV
     = 4,60,500 − 4,55,000 = Rs 5,500.
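A sketch that reproduces these figures (it takes the heavy-demand probability as 0.20 so that the probabilities sum to one, which is consistent with the Rs 4,55,000 and Rs 4,60,500 values above):

    # Sketch for the dancing-doll exercise: payoff = demand*price - (fixed + demand*variable).
    price = 10
    designs = {"Gears and levers":     (100_000, 5),   # (fixed cost, variable cost per unit)
               "Spring action":        (160_000, 4),
               "Weights and pulleys":  (300_000, 3)}
    demand = [25_000, 100_000, 150_000]
    probs  = [0.10, 0.70, 0.20]

    payoff = {d: [q * price - (fixed + q * var) for q in demand]
              for d, (fixed, var) in designs.items()}
    emv = {d: sum(p * v for p, v in zip(probs, row)) for d, row in payoff.items()}
    eppi = sum(p * max(payoff[d][i] for d in designs) for i, p in enumerate(probs))
    print(emv)                                   # spring action has the largest EMV, 455,000
    print("EVPI =", eppi - max(emv.values()))    # 460,500 - 455,000 = 5,500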