Game Theory and Strategic Behaviour
Lecture 3
Strictly Dominated Strategy
Let si′ and si″ be feasible strategies, i.e. they belong to the strategy
space Si of player i. Strategy si′ is strictly dominated by strategy si″ if,
for each feasible combination of the other players’ strategies, i’s payoff
from playing si′ is strictly less than his payoff from playing si″:
ui(s1, ..., si−1, si′, si+1, ..., sn) < ui(s1, ..., si−1, si″, si+1, ..., sn)
for each (s1, ..., si−1, si+1, ..., sn) that can be constructed from the other
players’ strategy spaces S1, ..., Si−1, Si+1, ..., Sn.
Strictly Dominated Strategy: An Example
Player 2
M N O
X 1, 0 1, 2 2, 1
Player 1 Y 0, 3 0, 1 1, 0
Here Y is a strictly dominated strategy for Player 1 because
u1 (Y, M ) = 0 < u1 (X, M ) = 1
u1 (Y, N ) = 0 < u1 (X, N ) = 1
u1 (Y, O) = 1 < u1 (X, O) = 2
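As a sketch, the three comparisons above can be automated. The payoff dictionary and the function name `strictly_dominates` are my own illustrative choices, not from the text; Player 1’s payoffs are copied from the example game.

```python
# Player 1's payoffs in the example game: u1[(own strategy, Player 2's strategy)].
u1 = {
    ("X", "M"): 1, ("X", "N"): 1, ("X", "O"): 2,
    ("Y", "M"): 0, ("Y", "N"): 0, ("Y", "O"): 1,
}

def strictly_dominates(u, better, worse, opponent_strategies):
    """True if `better` yields a strictly higher payoff than `worse`
    against every feasible opponent strategy."""
    return all(u[(better, t)] > u[(worse, t)] for t in opponent_strategies)

print(strictly_dominates(u1, "X", "Y", ["M", "N", "O"]))  # True: Y is strictly dominated by X
```

Note that the comparison must hold against every column: if even one opponent strategy made Y at least as good as X, the domination would fail.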
Solution Concept - Iterated Elimination of
Strictly Dominated Strategies
For this game, we state the assumptions up front: it is a static game of
complete information, and we make our assumptions about the players’
rationality explicit at each step.
Player 2
Left Middle Right
Up 1, 0 1, 2 0, 1
Player 1 Down 0, 3 0, 1 2, 0
For player 1, neither Up nor Down is strictly dominated: Up is
better if player 2 plays Left or Middle but Down is better if player 2
plays Right.
For player 2, Right is strictly dominated by Middle because 2 > 1
and 1 > 0.
If player 2 is rational, he will not play Right.
If player 1 knows that player 2 is rational, player 1 can assume that
player 2 will not play Right, i.e. player 1 can play the game as if it
were as shown below.
Player 2
Left Middle
Player 1 Up 1, 0 1, 2
Down 0, 3 0, 1
Down is now strictly dominated by Up for Player 1 (as 1 > 0), so
if Player 1 is rational and Player 1 knows that player 2 is rational
then he will not play Down.
Now, if player 2 knows that player 1 is rational and player 2 knows
that player 1 knows that player 2 is rational, then player 2 can
assume that player 1 will never play Down, leaving the game as
shown below.
Player 2
Left Middle
Player 1 Up 1, 0 1, 2
Left is now strictly dominated by Middle for player 2.
{Up, Middle} is the outcome of the game (Player 1’s strategy is
always written first, followed by Player 2’s strategy).
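The elimination steps above can be sketched in code. This is a minimal pure-strategy implementation under my own naming (`iesds`, `game`); the payoffs are copied from the Up/Down vs Left/Middle/Right game in the text.

```python
# Payoff table: game[(row, col)] = (Player 1's payoff, Player 2's payoff).
game = {
    ("Up", "Left"): (1, 0), ("Up", "Middle"): (1, 2), ("Up", "Right"): (0, 1),
    ("Down", "Left"): (0, 3), ("Down", "Middle"): (0, 1), ("Down", "Right"): (2, 0),
}

def iesds(game, rows, cols):
    """Iterated elimination of strictly dominated strategies:
    repeatedly delete any pure strategy that is strictly dominated by
    another surviving pure strategy, until nothing more can be deleted."""
    rows, cols = list(rows), list(cols)
    changed = True
    while changed:
        changed = False
        for r in rows[:]:  # row strategy r is dominated if some r2 beats it in every column
            if any(all(game[(r2, c)][0] > game[(r, c)][0] for c in cols)
                   for r2 in rows if r2 != r):
                rows.remove(r)
                changed = True
        for c in cols[:]:  # column strategy c is dominated if some c2 beats it in every row
            if any(all(game[(r, c2)][1] > game[(r, c)][1] for r in rows)
                   for c2 in cols if c2 != c):
                cols.remove(c)
                changed = True
    return rows, cols

print(iesds(game, ["Up", "Down"], ["Left", "Middle", "Right"]))  # (['Up'], ['Middle'])
```

The order of elimination mirrors the text: Right falls first (dominated by Middle), then Down (dominated by Up once Right is gone), then Left.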
This solution concept has two disadvantages:
First, each step of elimination requires an assumption about what
players know about each other’s rationality.
We need to assume it is common knowledge that the players are rational.
Common knowledge means that all players are rational, that all players
know that all players are rational, that all players know that all players
know that all players are rational, and so on, ad infinitum.
Second, this process of iterated elimination often produces imprecise
predictions about the play of the game.
Player 2
L C R
T 0, 4 4, 0 5, 3
Player 1 M 4, 0 0, 4 5, 3
B 3, 5 3, 5 6, 6
This game has no strictly dominated strategies for either player.
Hence, one cannot predict the play of the game using this solution
concept.
Hence, we move on to study Nash Equilibria - a solution concept
that provides tighter predictions in a broad class of games.
As we will see below, this game has a Nash Equilibrium {B, R}.
Solution Concept - Nash Equilibrium
If game theory is to provide accurate and unique predictions of the
play of a game, then the solution must be a Nash equilibrium:
For the prediction to be true, each player should be willing to play
the strategy predicted. In the example above, Player 1 should be willing
to play B and Player 2 should be willing to play R.
The predicted strategy for each player should be his best response
to the strategies played by other players, i.e. yielding the highest
payoff, else the player has an incentive to change his strategy.
Thus, for the prediction to hold, no player should have an incentive
to deviate from his predicted strategy - it is strategically stable or
self enforcing.
This is the concept of a Nash Equilibrium!
Solve for the Nash Equilibrium in a 2-player game by following the
method below:
For each strategy of a given player, say player i, identify the best
response for player j.
Similarly, for each strategy of player j, identify the best response for
player i.
The Nash equilibrium is a strategy pair such that each strategy is a
best response to the other in the pair.
Player 2
L C R
T 0, 4 4, 0 5, 3
Player 1 M 4, 0 0, 4 5, 3
B 3, 5 3, 5 6, 6
In the game above, strategies M, T and B are player 1’s best
responses to strategies L, C and R of player 2.
Similarly, strategies L, C and R are the best responses of player 2 to
strategies T, M and B played by player 1.
Thus, strategies B and R are best responses to each other and
constitute a Nash equilibrium.
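The mutual-best-response search just described can be sketched as follows. The function name `nash_equilibria` is mine; the payoffs are the T/M/B vs L/C/R game above.

```python
# Payoff table: game[(row, col)] = (Player 1's payoff, Player 2's payoff).
game = {
    ("T", "L"): (0, 4), ("T", "C"): (4, 0), ("T", "R"): (5, 3),
    ("M", "L"): (4, 0), ("M", "C"): (0, 4), ("M", "R"): (5, 3),
    ("B", "L"): (3, 5), ("B", "C"): (3, 5), ("B", "R"): (6, 6),
}
rows, cols = ["T", "M", "B"], ["L", "C", "R"]

def nash_equilibria(game, rows, cols):
    """A cell (r, c) is a pure-strategy Nash equilibrium if r is a best
    response to c for Player 1 AND c is a best response to r for Player 2."""
    eq = []
    for r in rows:
        for c in cols:
            row_best = game[(r, c)][0] >= max(game[(r2, c)][0] for r2 in rows)
            col_best = game[(r, c)][1] >= max(game[(r, c2)][1] for c2 in cols)
            if row_best and col_best:
                eq.append((r, c))
    return eq

print(nash_equilibria(game, rows, cols))  # [('B', 'R')]
```

A convenient pencil-and-paper version of the same search is to underline each player’s best-response payoff in every row and column; any cell with both payoffs underlined is a Nash equilibrium.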
More generally, in the n-player normal-form game, the strategies
(s∗1, ..., s∗n) are a Nash equilibrium if, for each player i, s∗i is at least
tied for player i’s best response to the strategies specified for the n−1
other players, (s∗1, ..., s∗i−1, s∗i+1, ..., s∗n):
ui(s∗1, ..., s∗i−1, s∗i, s∗i+1, ..., s∗n) ≥ ui(s∗1, ..., s∗i−1, si, s∗i+1, ..., s∗n)
for every feasible strategy si in Si; that is, s∗i solves
max si∈Si ui(s∗1, ..., s∗i−1, si, s∗i+1, ..., s∗n).
In other words, holding the n−1 other players’ equilibrium strategies
fixed, no feasible strategy si gives player i a strictly higher payoff
than s∗i.
Conversely, if the strategies (s′1, ..., s′n) are not a Nash equilibrium, then
it must be that at least one player, say player j, can be better off by
deviating to a different strategy in response to the strategies
(s′1, ..., s′j−1, s′j+1, ..., s′n) played by the other players.
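This deviation test can be illustrated on the T/M/B vs L/C/R game from earlier: at {B, R} no player has a profitable unilateral deviation, while at any other profile some player does. The function name `profitable_deviation` is my own.

```python
# Payoff table for the T/M/B vs L/C/R game: game[(row, col)] = (u1, u2).
game = {
    ("T", "L"): (0, 4), ("T", "C"): (4, 0), ("T", "R"): (5, 3),
    ("M", "L"): (4, 0), ("M", "C"): (0, 4), ("M", "R"): (5, 3),
    ("B", "L"): (3, 5), ("B", "C"): (3, 5), ("B", "R"): (6, 6),
}
rows, cols = ["T", "M", "B"], ["L", "C", "R"]

def profitable_deviation(game, rows, cols, r, c):
    """Return the name of a player who gains by unilaterally deviating
    from the profile (r, c), or None if (r, c) is a Nash equilibrium."""
    if any(game[(r2, c)][0] > game[(r, c)][0] for r2 in rows):
        return "Player 1"
    if any(game[(r, c2)][1] > game[(r, c)][1] for c2 in cols):
        return "Player 2"
    return None

print(profitable_deviation(game, rows, cols, "B", "R"))  # None: (B, R) is self-enforcing
print(profitable_deviation(game, rows, cols, "T", "R"))  # Player 1: switching to B raises 5 to 6
```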
References
Gibbons R., A Primer in Game Theory, Chapter 1.