
Notes on
Chapter 1. Bayesian Games in Normal Form

UNCERTAINTY AND CONTRACTS
3rd Year
Degree in Economics
Double Bachelor's Degree in Business and Economics
2022/2023

Iñaki Aguirre
Department of Economic Analysis
University of the Basque Country UPV/EHU

Chapter 1. Bayesian Games in Normal Form

Reading: M. Osborne, An Introduction to Game Theory, Chapter 9, Bayesian Games (Sections 9.1, 9.2, 9.3 and 9.4)

Introduction.

1.1. Motivational examples.

1.2. General definitions.

1.2.1. Bayesian games.

1.2.2. Nash equilibrium.

1.3. Two examples concerning information.

1.3.1. More information may hurt.

1.3.2. Infection.

1.4. Illustration: Cournot’s duopoly game with imperfect information.

1.4.1. Imperfect information about cost.

1.4.2. Imperfect information about both cost and information.

Introduction

In the subject Market Power and Strategy, we assume complete information. That is, each player

has to play in a game with perfect knowledge about her rival’s preferences and strategy spaces. In

this chapter, we relax this assumption.

Underlying the notion of Nash equilibrium is that each player holds the correct belief about the

other players’ actions. To do so, a player must know the game she is playing; in particular, she

must know the other players’ preferences. In many contexts, the agents are not perfectly informed

about their rivals’ characteristics: bargainers may not know each others’ valuations of the object of


negotiation, firms may not know each others’ cost functions, a monopolistic firm may not know

consumers' preferences, etcetera. In some situations, a participant may be well informed about her

opponents’ characteristics, but may not know how well these opponents are informed about her

own characteristics. In this chapter, we describe the model of a “Bayesian game”, which

generalizes the notion of a strategic game to allow us to analyze any situation in which each player

is imperfectly informed about an aspect of her environment that is relevant to her choice of an

action.

1.1. Motivational examples

We start with a couple of examples that serve to illustrate the main ideas in a Bayesian game. We

will define the notion of Nash equilibrium separately for each game. In the next section, we will

define the general model of a Bayesian game and the notion of Nash equilibrium for such a game.

Example 1: (Battle of the Sexes) Bach or Stravinsky?

Two people wish to go out together. Two concerts are available: one of music by Bach, and one of

music by Stravinsky. One person prefers Bach and the other prefers Stravinsky. If they go to

different concerts, each of them is equally unhappy listening to the music of either composer.

Player 1 (the one that prefers Bach) is the row player and player 2 (who prefers Stravinsky) is the

column player. The game in normal form is:

             Bach      Stravinsky
Bach         (2, 1)    (0, 0)
Stravinsky   (0, 0)    (1, 2)


In this game, there are two Nash equilibria in pure strategies: (Bach, Bach) and (Stravinsky,

Stravinsky).

Example 2 (273.1): Bach or Stravinsky? Variant of BoS with imperfect information 1

Consider a variant of BoS in which player 1 is unsure whether player 2 prefers to go out with her or

prefers to avoid her, whereas player 2, as before, knows player 1’s preferences. Assume that player

1 thinks that with probability ½ player 2 wants to go out with her and with probability ½ player 2

wants to avoid her. That is, player 1 thinks that with probability ½ she is playing the game on the

left in the next figure and with probability ½ she is playing the game on the right.

      Probability 1/2                         Probability 1/2

        B        S                              B        S
  B   (2, 1)   (0, 0)                     B   (2, 0)   (0, 2)
  S   (0, 0)   (1, 2)                     S   (0, 1)   (1, 0)

  P.2 wishes to meet P.1                  P.2 wishes to avoid P.1

We can think of there being two states (of Nature), one in which payoffs are given in the left

table and one in which payoffs are given in the right table. Player 2 knows the state (she

knows whether she wishes to meet or to avoid player 1) whereas player 1 does not know;

player 1 assigns probability ½ to each state.


The notion of Nash equilibrium for a strategic game models a steady state in which each

player’s beliefs about the other players’ actions are correct, and each player acts optimally,

given her beliefs. We want to generalize this notion to the current situation.

From player 1’s point of view, player 2 has two possible types, one whose preferences are

given in the left table and one whose preferences are given in the right table. Player 1 does not

know player 2’s type, so to choose an action rationally she needs to form a belief about the

action of each type. Given these beliefs and her belief about the likelihood of each type, she

can calculate her expected payoff of each of her actions. We next calculate the expected

payoff of each one of player 1’s actions corresponding to each combination of actions of the

two types of player 2.

        (B, B)   (B, S)   (S, B)   (S, S)
  B       2        1        1        0
  S       0       1/2      1/2       1

For this situation, we define a pure strategy Nash equilibrium to be a triple of actions, one for

player 1 and one for each type of player 2, with the property that

- the action of player 1 is optimal, given the actions of the two types of player 2 (and player

1’s belief about the state).

- the action of each type of player 2 is optimal, given the action of player 1.


Now we obtain best responses of player 1 (against the possible actions of the two types of

player 2) and best responses of each type of player 2 (against the actions of player 1).

        (B, B)   (B, S)   (S, B)   (S, S)
  B       2        1        1        0
  S       0       1/2      1/2       1

        B        S                              B        S
  B   (·, 1)   (·, 0)                     B   (·, 0)   (·, 2)
  S   (·, 0)   (·, 2)                     S   (·, 1)   (·, 0)

  P.2 wishes to meet P.1                  P.2 wishes to avoid P.1

It is easy to show that (B, (B, S)), where the first component is the action of player 1 and the

other component is the pair of actions of the two types of player 2, is a Nash equilibrium.


Given that the actions of the two types of player 2 are (B, S), player 1’s action B is optimal

(that is, it maximizes her expected payoff); given that player 1 chooses B, B is optimal for the

type who wishes to meet player 1 and S is optimal for the type who wishes to avoid player 1.

  s1    BR2            s2        BR1
  B     (B, S)         (B, B)     B
  S     (S, B)         (B, S)     B
                       (S, B)     B
                       (S, S)     S
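
As a computational check (this sketch is not part of the original notes; the payoff numbers are transcribed from the tables above and the Python code is ours), the expected payoffs and best responses can be reproduced by direct enumeration, confirming that (B, (B, S)) is the unique pure-strategy Nash equilibrium of this Bayesian game:

from itertools import product

# Illustrative sketch, not from the notes. Payoffs transcribed from the matrices above.
# u1[t] / u2[t]: payoffs in the state where player 2 is of type t ("meet" or "avoid").
u1 = {"meet":  {("B","B"): 2, ("B","S"): 0, ("S","B"): 0, ("S","S"): 1},
     "avoid": {("B","B"): 2, ("B","S"): 0, ("S","B"): 0, ("S","S"): 1}}
u2 = {"meet":  {("B","B"): 1, ("B","S"): 0, ("S","B"): 0, ("S","S"): 2},
     "avoid": {("B","B"): 0, ("B","S"): 2, ("S","B"): 1, ("S","S"): 0}}
prob = {"meet": 0.5, "avoid": 0.5}      # player 1's belief about player 2's type
actions = ["B", "S"]

def eu1(a1, s2):
    # expected payoff of player 1 when type t of player 2 plays s2[t]
    return sum(prob[t] * u1[t][(a1, s2[t])] for t in prob)

equilibria = []
for a1, b_meet, b_avoid in product(actions, repeat=3):
    s2 = {"meet": b_meet, "avoid": b_avoid}
    ok1 = all(eu1(a1, s2) >= eu1(a, s2) for a in actions)            # player 1 optimal
    ok2 = all(u2[t][(a1, s2[t])] >= u2[t][(a1, a)] for t in prob for a in actions)
    if ok1 and ok2:
        equilibria.append((a1, (b_meet, b_avoid)))

print(equilibria)   # expected output: [('B', ('B', 'S'))]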

We can interpret the actions of the two types of player 2 as reflecting player 2's intentions in the

hypothetical situation before she knows the state. We can tell the following story. Initially

player 2 does not know the state: she is informed of the state by a signal that depends on the

state. Before receiving this signal, she plans an action for each possible signal. After receiving

the signal, she carries out her planned action for that signal. We can tell a similar story for

player 1. To be consistent with her not knowing the state when she takes an action, her signal

must be uninformative; it must be the same in each state. Given her signal, she is unsure of

the state; when choosing an action she takes into account her belief about the likelihood of

each state given her signal.


Example 2 (273.1): Bach or Stravinsky? Variant of BoS with imperfect information 2

Consider a variant of BoS in which neither player knows whether the other wants to go out with

her. Specifically, suppose that player 1 thinks that with probability ½ player 2 wants to go out with

her, and with probability ½ player 2 wants to avoid her, and player 2 thinks that with probability

2/3 player 1 wants to go out with her and with probability 1/3 player 1 wants to avoid her. As

before, assume that each player knows her own preferences.

We can model this situation by introducing 4 states, one for each possible configuration of

preferences. We refer to these states as:

- yy: each player wants to go out with the other.

- yn: player 1 wants to go out with player 2 but player 2 wants to avoid player 1.

- ny: player 1 wants to avoid player 2 and player 2 wants to go out with player 1.

- nn: both players want to avoid the other.

The fact that player 1 does not know player 2’s preferences means that she cannot distinguish

between states yy and yn, or between states ny and nn. Similarly, player 2 cannot distinguish

between states yy and ny, or between states yn and nn. We can model the players’ information by

assuming that each player receives a signal before choosing an action. Player 1 receives the same

signal, say 𝑦1 , in states yy and yn, and a different signal, say 𝑛1 , in states ny and nn; player 2

receives the same signal, say 𝑦2 , in states yy and ny, and a different signal, say 𝑛2 , in states yn and

nn. After player 1 receives the signal 𝑦1 , she is referred to as type 𝑦1 of player 1 (who wishes to go

out with player 2); after she receives the signal 𝑛1 , she is referred to as type 𝑛1 of player 1 (who

wishes to avoid player 2). In a similar way, player 2 has two types: 𝑦2 and 𝑛2 .


Type 𝑦1 of player 1 believes that the probability of each of the states yy and yn is ½; type 𝑛1 of

player 1 believes that the probability of each of the states ny and nn is ½. Type 𝑦2 of player 2

believes that the probability of state yy is 2/3 and that of state ny is 1/3; type 𝑛2 of player 2 believes

that the probability of state yn is 2/3 and that of state nn is 1/3. We can represent the game as:

Type y1 of player 1: states yy and yn, each with probability 1/2.
Type n1 of player 1: states ny and nn, each with probability 1/2.
Type y2 of player 2: state yy with probability 2/3, state ny with probability 1/3.
Type n2 of player 2: state yn with probability 2/3, state nn with probability 1/3.

  State yy                                State yn
        B        S                              B        S
  B   (2, 1)   (0, 0)                     B   (2, 0)   (0, 2)
  S   (0, 0)   (1, 2)                     S   (0, 1)   (1, 0)

  State ny                                State nn
        B        S                              B        S
  B   (0, 1)   (2, 0)                     B   (0, 0)   (2, 2)
  S   (1, 0)   (0, 2)                     S   (1, 1)   (0, 0)


y1:
        (B, B)   (B, S)   (S, B)   (S, S)
  B       2        1        1        0
  S       0       1/2      1/2       1

n1:
        (B, B)   (B, S)   (S, B)   (S, S)
  B       0        1        1        2
  S       1       1/2      1/2       0

y2:                                       n2:
            B       S                                 B       S
  (B, B)    1       0                       (B, B)    0       2
  (B, S)   2/3     2/3                      (B, S)   1/3     4/3
  (S, B)   1/3     4/3                      (S, B)   2/3     2/3
  (S, S)    0       2                       (S, S)    1       0

  s1        BR2                      s2        BR1
  (B, B)    (B, S)                   (B, B)    (B, S)
  (B, S)    (B, S), (S, S)           (B, S)    (B, B)
  (S, B)    (S, B), (S, S)           (S, B)    (B, B)
  (S, S)    (S, B)                   (S, S)    (S, B)

Nash equilibria: ((B, B), (B, S)) and ((S, B), (S, S)).
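
Both equilibria can again be found by brute-force enumeration over the actions of all four types. The following Python sketch (not part of the original notes; payoffs transcribed from the four state matrices above, names ours) performs that check:

from itertools import product
from fractions import Fraction as F

# Illustrative sketch, not from the notes.
A = ["B", "S"]
# u[state][(a1, a2)] = (payoff of player 1, payoff of player 2), from the four matrices above.
u = {"yy": {("B","B"): (2,1), ("B","S"): (0,0), ("S","B"): (0,0), ("S","S"): (1,2)},
     "yn": {("B","B"): (2,0), ("B","S"): (0,2), ("S","B"): (0,1), ("S","S"): (1,0)},
     "ny": {("B","B"): (0,1), ("B","S"): (2,0), ("S","B"): (1,0), ("S","S"): (0,2)},
     "nn": {("B","B"): (0,0), ("B","S"): (2,2), ("S","B"): (1,1), ("S","S"): (0,0)}}
# Beliefs of each type over states, and which type of each player acts in each state.
belief1 = {"y1": {"yy": F(1,2), "yn": F(1,2)}, "n1": {"ny": F(1,2), "nn": F(1,2)}}
belief2 = {"y2": {"yy": F(2,3), "ny": F(1,3)}, "n2": {"yn": F(2,3), "nn": F(1,3)}}
t1_of = {"yy": "y1", "yn": "y1", "ny": "n1", "nn": "n1"}   # player 1's type in each state
t2_of = {"yy": "y2", "yn": "n2", "ny": "y2", "nn": "n2"}   # player 2's type in each state

def eu1(t, a, s2):   # expected payoff of type t of player 1 from action a, given s2
    return sum(p * u[w][(a, s2[t2_of[w]])][0] for w, p in belief1[t].items())

def eu2(t, a, s1):   # expected payoff of type t of player 2 from action a, given s1
    return sum(p * u[w][(s1[t1_of[w]], a)][1] for w, p in belief2[t].items())

nash = []
for ay1, an1, ay2, an2 in product(A, repeat=4):
    s1, s2 = {"y1": ay1, "n1": an1}, {"y2": ay2, "n2": an2}
    best1 = all(eu1(t, s1[t], s2) >= eu1(t, a, s2) for t in s1 for a in A)
    best2 = all(eu2(t, s2[t], s1) >= eu2(t, a, s1) for t in s2 for a in A)
    if best1 and best2:
        nash.append(((ay1, an1), (ay2, an2)))

print(nash)   # expected: [(('B','B'), ('B','S')), (('S','B'), ('S','S'))]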

In each of these examples, a Nash equilibrium is a list of actions, one for each type of each

player, such that the action of each type of each player is a best response to the actions of all

the types of the other player, given the player’s beliefs about the state after she observes her

signal. We may define a Nash equilibrium in each example to be a Nash equilibrium of the

strategic game in which the set of players is the set of all types of all players in the original

situation.

In the next section, we define the general notion of a Bayesian game and the notion of Nash

equilibrium in such a game.


1.2. General definitions

1.2.1. Bayesian games

A strategic game with imperfect information is called a “Bayesian game”. As in a strategic

game, the decision-makers are called players, and each player is endowed with a set of

actions.

A key element in the specification of the imperfect information is the set of states. Each state

is a complete description of one collection of the players’ relevant characteristics, including

both their preferences and their information. For every collection of characteristics that some

player believes to be possible, there must be a state. For instance, consider the first example

of BoS and assume that player 2 wishes to meet player 1. In this case, the reason for including

in the model the state in which player 2 wishes to avoid player 1, is that player 1 believes such

a preference to be possible.

At the start of the game, a state is realized. The players do not observe this state. Rather, each

player receives a signal that may give her some information about the state. Denote the signal

player i receives in state w by 𝜏𝑖 (𝑤). The function 𝜏𝑖 is called player i’s signal function. If, for

example, 𝜏𝑖 (𝑤) is different for each value of w, then player i knows, given her signal, the

state that has occurred; after receiving her signal, she is perfectly informed about all the

players’ relevant characteristics. At the other extreme, if 𝜏𝑖 (𝑤) is the same for all states, then

player i’s signal conveys no information about the state. If 𝜏𝑖 (𝑤) is constant over some

subsets of the set of states, but is not the same for all states, then player i’s signal conveys

partial information. For example, if there are three states, w1, w2 and w3, and τi(w1) ≠ τi(w2) = τi(w3),
then when the state is w1 player i knows that it is w1, whereas when it is either w2 or w3 she knows
only that it is one of these two states.
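
As an illustration (a small Python sketch, not part of the original notes; the state and signal names are made up), a signal function can be represented simply as a map from states to signals, and the induced partition of the states, i.e. which states a type considers possible, follows immediately:

# Hypothetical three-state example matching the text: tau(w1) != tau(w2) = tau(w3).
tau_i = {"w1": "s_a", "w2": "s_b", "w3": "s_b"}   # player i's signal function

# States considered possible by the type that receives each signal.
partition = {}
for state, signal in tau_i.items():
    partition.setdefault(signal, set()).add(state)

print(partition)   # {'s_a': {'w1'}, 's_b': {'w2', 'w3'}}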

We refer to player i in the event that she receives the signal 𝑡𝑖 as type 𝒕𝒊 of player i. Each type

of each player holds a belief about the likelihood of the states consistent with her signal. If,

for example, 𝑡𝑖 = 𝜏𝑖 (𝑤1 ) = 𝜏𝑖 (𝑤2 ), then type 𝑡𝑖 of player i assigns probabilities to 𝑤1 and 𝑤2 .

Each player may care about the actions chosen by the other players, as in a strategic game

with perfect information, and also about the state. The players may be uncertain about the

state, so we need to specify their preferences regarding probability distributions over pairs (a,

w) consisting of an action profile a and a state w. We assume that each player’s preferences

over such probability distributions are represented by the expected value of a Bernoulli payoff

function. We specify each player i’s preferences by giving a Bernoulli payoff function 𝑢𝑖 over

pairs (a, w).

In summary, a Bayesian game is defined as follows.

Definition 1. Bayesian game

A Bayesian game consists of

- a set of players,

- a set of states

and for each player

- a set of actions


- a set of signals that she may receive and a signal function that associates a signal with each

state

- for each signal she may receive, a belief about the states consistent with the signal (a

probability distribution over the set of states with which the signal is associated)

- a Bernoulli payoff function over pairs (a, w), where a is an action profile and w is a state,

the expected value of which represents the player’s preferences among lotteries over the set of

such pairs.

Example 2 (273.1): Bach or Stravinsky? Variant of BoS with imperfect information 1


      Probability 1/2                         Probability 1/2

        B        S                              B        S
  B   (2, 1)   (0, 0)                     B   (2, 0)   (0, 2)
  S   (0, 0)   (1, 2)                     S   (0, 1)   (1, 0)

  P.2 wishes to meet P.1                  P.2 wishes to avoid P.1

Players: 1 and 2

States: The set of states is {meet, avoid}

Actions: The set of actions of each player is {B, S}

Signals: Player 1 may receive a single signal, say z; her signal function 𝜏1 satisfies

𝜏1 (𝑚𝑒𝑒𝑡) = 𝜏1 (𝑎𝑣𝑜𝑖𝑑) = 𝑧 (one type of player 1). Player 2 receives one of two signals, say


m and v; her signal function 𝜏2 satisfies 𝜏2 (𝑚𝑒𝑒𝑡) = 𝑚 and 𝜏2 (𝑎𝑣𝑜𝑖𝑑) = 𝑣. (two types of

player 2)

Beliefs: Player 1 assigns probability ½ to each state after receiving the signal z. Player 2

assigns probability 1 to the state meet after receiving the signal m, and probability 1 to the

state avoid after receiving the signal v.

Payoffs: The payoffs 𝑢𝑖 (𝑎, 𝑚𝑒𝑒𝑡) of each player i for all possible action pairs are given in the

left table and the payoffs 𝑢𝑖 (𝑎, 𝑎𝑣𝑜𝑖𝑑) are given in the right table.

Example 2 (273.1): Bach or Stravinsky? Variant of BoS with imperfect information 2


Type y1 of player 1: states yy and yn, each with probability 1/2.
Type n1 of player 1: states ny and nn, each with probability 1/2.
Type y2 of player 2: state yy with probability 2/3, state ny with probability 1/3.
Type n2 of player 2: state yn with probability 2/3, state nn with probability 1/3.

  State yy                                State yn
        B        S                              B        S
  B   (2, 1)   (0, 0)                     B   (2, 0)   (0, 2)
  S   (0, 0)   (1, 2)                     S   (0, 1)   (1, 0)

  State ny                                State nn
        B        S                              B        S
  B   (0, 1)   (2, 0)                     B   (0, 0)   (2, 2)
  S   (1, 0)   (0, 2)                     S   (1, 1)   (0, 0)

Players: 1 and 2

States: The set of states is {yy, yn, ny, nn}

Actions: The set of actions of each player is {B, S}

Signals: Player 1 receives one of two signals, 𝑦1 and 𝑛1 ; her signal function 𝜏1 satisfies

𝜏1 (𝑦𝑦) = 𝜏1 (𝑦𝑛) = 𝑦1 and 𝜏1 (𝑛𝑦) = 𝜏1 (𝑛𝑛) = 𝑛1 (two types of player 1). Player 2 receives

one of two signals, say 𝑦2 and 𝑛2 ; her signal function 𝜏2 satisfies 𝜏2 (𝑦𝑦) = 𝜏2 (𝑛𝑦) = 𝑦2 and

𝜏2 (𝑦𝑛) = 𝜏2 (𝑛𝑛) = 𝑛2 (two types of player 2).

Beliefs: Player 1 assigns probability ½ to each of the states yy and yn after receiving the signal

𝑦1 and probability ½ to each of the states ny and nn after receiving the signal 𝑛1 . Player 2

assigns probability 2/3 to the state yy and probability 1/3 to the state ny after receiving the

signal 𝑦2 , and probability 2/3 to the state yn and probability 1/3 to the state nn after receiving

the signal 𝑛2 .

Payoffs: The payoffs 𝑢𝑖 (𝑎, 𝑤) of each player i for all possible action pairs and states are

given in previous figure.

1.2.2. Nash equilibrium

In a Bayesian game each type of each player chooses an action. In a Nash equilibrium of such

a game, the action chosen by each type of each player is optimal (that is, maximizes her

expected payoff), given the actions chosen by every type of every other player.


Example 3: Fighting an opponent of unknown strength

Two people are involved in a dispute. Person 1 does not know whether person 2 is strong or

weak; she assigns probability 𝛼 to person 2’s being strong. Person 2 is fully informed. Each

person can either fight or yield. Each person’s preferences are represented by the expected

value of a Bernoulli payoff function that assigns the payoff of 0 if she yields (regardless of the

other person’s action) and a payoff of 1 if she fights and her opponent yields; if both people

fight, then their payoffs are (-1, 1) if person 2 is strong and (1, -1) if person 2 is weak.

Formulate this situation as a Bayesian game and find its Nash equilibria if α < 1/2 and if α > 1/2.
      Probability α                           Probability 1 − α

        F         Y                             F         Y
  F   (-1, 1)   (1, 0)                    F   (1, -1)   (1, 0)
  Y   (0, 1)    (0, 0)                    Y   (0, 1)    (0, 0)

  P.2 is strong                           P.2 is weak

Players: Person 1 and person 2

States: The set of states is {strong, weak}

Actions: The set of actions of each player is {F, Y}

Signals: Person 1 receives one signal 𝑚 that is not informative; her signal function 𝜏1 satisfies

𝜏1 (𝑠𝑡𝑟𝑜𝑛𝑔) = 𝜏1 (𝑤𝑒𝑎𝑘) = 𝑚 (one type of player 1). Person 2 receives one of two signals,


say 𝑠 and 𝑤; her signal function 𝜏2 satisfies 𝜏2 (𝑠𝑡𝑟𝑜𝑛𝑔) = 𝑠 and 𝜏2 (𝑤𝑒𝑎𝑘) = 𝑤 (two types

of player 2)

Beliefs: Player 1 assigns probability 𝛼 to state 𝑠𝑡𝑟𝑜𝑛𝑔 and 1 − 𝛼 to state 𝑤𝑒𝑎𝑘 after

receiving the signal 𝑚. Player 2 assigns probability 1 to state 𝑠𝑡𝑟𝑜𝑛𝑔 after receiving the

signal 𝑠. After receiving the signal 𝑤, she assigns probability 1 to state 𝑤𝑒𝑎𝑘.

Payoffs: The payoffs of each person for all possible action pairs and each possible state are as given in the payoff matrices above.

Strategies: 𝑆1 = {𝐹, 𝑌} and 𝑆2 = {(𝐹, 𝐹), (𝐹, 𝑌), (𝑌, 𝐹), (𝑌, 𝑌)}.

        (F, F)    (F, Y)    (Y, F)    (Y, Y)
  F     1 − 2α    1 − 2α      1         1
  Y       0         0         0         0

Case α < 1/2:

  s1    BR2            s2        BR1
  F     (F, Y)         (F, F)     F
  Y     (F, F)         (F, Y)     F
                       (Y, F)     F
                       (Y, Y)     F

NE: (F, (F, Y))


Case α > 1/2:

  s1    BR2            s2        BR1
  F     (F, Y)         (F, F)     Y
  Y     (F, F)         (F, Y)     Y
                       (Y, F)     F
                       (Y, Y)     F

NE: (Y, (F, F)).

Case α = 1/2:

  s1    BR2            s2        BR1
  F     (F, Y)         (F, F)    F, Y
  Y     (F, F)         (F, Y)    F, Y
                       (Y, F)     F
                       (Y, Y)     F

NE: (F, (F, Y)) and (Y, (F, F)).


Example 3: Fighting an opponent of unknown strength 2

Assume now that person 2 also does not know whether person 1 is of medium strength or super
strong; she assigns probability β to person 1's being of medium strength. Each person can either
fight or yield. If person 1 is of medium strength, the payoffs are (-1, 1) when both fight and person 2
is strong, and (1, -1) when both fight and person 2 is weak. If person 1 is super strong, the payoffs
are (1, -1) when both fight, regardless of whether person 2 is strong or weak. In the remaining
cases the payoffs are as in the previous game: if one person fights and the other does not, the
fighter obtains a payoff of 1; a person who does not fight obtains 0, independently of the rival's
behavior.


Type m1 of person 1 (medium strength): state ms with probability α, state mw with probability 1 − α.
Type S1 of person 1 (super strong): state Ss with probability α, state Sw with probability 1 − α.
Type s2 of person 2 (strong): state ms with probability β, state Ss with probability 1 − β.
Type w2 of person 2 (weak): state mw with probability β, state Sw with probability 1 − β.

  State ms                                State mw
        F         Y                             F         Y
  F   (-1, 1)   (1, 0)                    F   (1, -1)   (1, 0)
  Y   (0, 1)    (0, 0)                    Y   (0, 1)    (0, 0)

  State Ss                                State Sw
        F         Y                             F         Y
  F   (1, -1)   (1, 0)                    F   (1, -1)   (1, 0)
  Y   (0, 1)    (0, 0)                    Y   (0, 1)    (0, 0)

Represent the Bayesian game and identify its main elements. Obtain the Nash equilibria for any α, β ∈ [0,1]. (Problem 5).

1.3. Two examples concerning information

The notion of a Bayesian game may be used to study how information patterns affect the outcome

of strategic interaction. Here we consider two examples.

1.3.1. More information may hurt

A decision-maker in a single-person decision-problem cannot be worse off if she has more

information: if she wishes, she can ignore the information. In a game the same is not true: if a

player has more information and the other players know that she has more information, then she

may be worse off.


Consider, for example, the following two-player Bayesian game, where 0 < ε < 1/2.

      Probability 1/2                              Probability 1/2

        L         M        R                         L         M        R
  T   (1, 2ε)   (1, 0)   (1, 3ε)               T   (1, 2ε)   (1, 3ε)   (1, 0)
  B   (2, 2)    (0, 0)   (0, 3)                B   (2, 2)    (0, 3)    (0, 0)

  State w1                                     State w2


In this game, there are two states, and neither player knows the state. Each player assigns probability 1/2 to each state. The expected payoffs are:

        L          M           R
  T   (1, 2ε)   (1, 3ε/2)   (1, 3ε/2)
  B   (2, 2)    (0, 3/2)    (0, 3/2)

(B, L) is the only Nash equilibrium.

Consider now the game in which player 2 knows what the state is. So the new game is:

      Probability 1/2                              Probability 1/2

        L         M        R                         L         M        R
  T   (1, 2ε)   (1, 0)   (1, 3ε)               T   (1, 2ε)   (1, 3ε)   (1, 0)
  B   (2, 2)    (0, 0)   (0, 3)                B   (2, 2)    (0, 3)    (0, 0)

  State w1                                     State w2


        LL   LM   LR   ML   MM   MR   RL   RM   RR
  T      1    1    1    1    1    1    1    1    1
  B      2    1    1    1    0    0    1    0    0

  State w1                                     State w2
        L         M        R                         L         M        R
  T   (1, 2ε)   (1, 0)   (1, 3ε)               T   (1, 2ε)   (1, 3ε)   (1, 0)
  B   (2, 2)    (0, 0)   (0, 3)                B   (2, 2)    (0, 3)    (0, 0)

NE: (T, (R, M))

Therefore, player 2 is worse off when she knows the state than when she does not know

the state.
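
The comparison can be verified numerically for any ε ∈ (0, 1/2). The sketch below (illustration only, not from the notes; the value of ε is arbitrary) computes player 2's equilibrium payoff in the two information structures, using the equilibria (B, L) and (T, (R, M)) found above:

# Illustrative sketch, not from the notes.
eps = 0.25          # any value with 0 < eps < 1/2

# Player 2's payoffs in each state (first entry of the key: player 1's action).
u2 = {"w1": {("T","L"): 2*eps, ("T","M"): 0,     ("T","R"): 3*eps,
             ("B","L"): 2,     ("B","M"): 0,     ("B","R"): 3},
      "w2": {("T","L"): 2*eps, ("T","M"): 3*eps, ("T","R"): 0,
             ("B","L"): 2,     ("B","M"): 3,     ("B","R"): 0}}

# Uninformed player 2: equilibrium (B, L); payoff is the expected value over the states.
payoff_uninformed = 0.5 * u2["w1"][("B","L")] + 0.5 * u2["w2"][("B","L")]

# Informed player 2: equilibrium (T, (R, M)); payoff is 3*eps in either state.
payoff_informed = 0.5 * u2["w1"][("T","R")] + 0.5 * u2["w2"][("T","M")]

print(payoff_uninformed, payoff_informed)   # 2.0 and 0.75: more information hurts player 2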


1.3.2. Infection

The notion of a Bayesian game may be used to model not only situations in which players are

uncertain about the others’ preferences, but also situations in which they are uncertain about

each others’ knowledge. Consider the next game.

Example 4: Infection

The type of player 1 who observes β or γ assigns probability 3/4 to β and 1/4 to γ; the type of
player 2 who observes α or β assigns probability 3/4 to α and 1/4 to β.

  State α                       State β                       State γ
        L        R                    L        R                    L        R
  L   (2, 2)   (0, 0)           L   (2, 2)   (0, 0)           L   (2, 2)   (0, 0)
  R   (3, 0)   (1, 1)           R   (0, 0)   (1, 1)           R   (0, 0)   (1, 1)

Note that player 2’s preferences are the same in all three states, and player 1’s preferences are

the same in states 𝛽 and 𝛾. In particular, in state 𝛾, each player knows the other player’s

preferences, and player 2 knows that player 1 knows her preferences. The defect in the

players’ information in state 𝛾 is that player 1 does not know whether player 2 knows her

preferences: player 1 knows only that the state is either 𝛽 or 𝛾, and in state 𝛽 player 2 does

not know whether the state is 𝛼 or 𝛽, and hence does not know player 1’s preferences

(because player 1’s preferences in these two states differ).

There are two types of player 1 (the type who knows that the state is 𝛼 and the type who

knows that the state is 𝛽 or 𝛾) and two types of player 2 (the type who knows that the state is


𝛼 or 𝛽 and the type who knows that the state is 𝛾). We next obtain the best response of each

type of each player.

t1^α:
        (L, L)   (L, R)   (R, L)   (R, R)
  L       2        2        0        0
  R       3        3        1        1

t1^βγ:
        (L, L)   (L, R)   (R, L)   (R, R)
  L       2       3/2      1/2       0
  R       0       1/4      3/4       1

t2^αβ:                                 t2^γ:
            L       R                              L       R
  (L, L)    2       0                    (L, L)    2       0
  (L, R)   3/2     1/4                   (L, R)    0       1
  (R, L)   1/2     3/4                   (R, L)    2       0
  (R, R)    0       1                    (R, R)    0       1


  s1        BR2                s2        BR1
  (L, L)    (L, L)             (L, L)    (R, L)
  (L, R)    (L, R)             (L, R)    (R, L)
  (R, L)    (R, L)             (R, L)    (R, R)
  (R, R)    (R, R)             (R, R)    (R, R)

NE: ((R, R), (R, R))
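
The "infection" logic can be made concrete by iterating best responses from the profile in which everyone plays L. The Python sketch below (illustration only, not from the notes; payoffs and beliefs transcribed from the figure and tables above) shows how the play of R spreads from the type of player 1 who knows the state is α to all other types:

from fractions import Fraction as F

# Illustrative sketch, not from the notes.
A = ["L", "R"]
# Payoffs (player 1, player 2) in each state, from the three matrices above.
u = {"alpha": {("L","L"): (2,2), ("L","R"): (0,0), ("R","L"): (3,0), ("R","R"): (1,1)},
     "beta":  {("L","L"): (2,2), ("L","R"): (0,0), ("R","L"): (0,0), ("R","R"): (1,1)},
     "gamma": {("L","L"): (2,2), ("L","R"): (0,0), ("R","L"): (0,0), ("R","R"): (1,1)}}
# Each type's belief over states, and the type of each player acting in each state.
belief = {("1","a"):  {"alpha": F(1)},
          ("1","bg"): {"beta": F(3,4), "gamma": F(1,4)},
          ("2","ab"): {"alpha": F(3,4), "beta": F(1,4)},
          ("2","g"):  {"gamma": F(1)}}
type1 = {"alpha": "a", "beta": "bg", "gamma": "bg"}   # player 1's type in each state
type2 = {"alpha": "ab", "beta": "ab", "gamma": "g"}   # player 2's type in each state

def best_response(player, t, opponent_profile):
    def eu(a):
        total = F(0)
        for w, p in belief[(player, t)].items():
            a1 = a if player == "1" else opponent_profile[type1[w]]
            a2 = a if player == "2" else opponent_profile[type2[w]]
            total += p * u[w][(a1, a2)][0 if player == "1" else 1]
        return total
    return max(A, key=eu)

# Start from "everyone plays L" and iterate best responses: R spreads step by step.
s1, s2 = {"a": "L", "bg": "L"}, {"ab": "L", "g": "L"}
for step in range(5):
    s1 = {t: best_response("1", t, s2) for t in s1}
    s2 = {t: best_response("2", t, s1) for t in s2}
    print(step, s1, s2)
# Type a of player 1 switches to R first; the switch then "infects" type ab of player 2,
# then type bg of player 1 and type g of player 2. The unique equilibrium is ((R, R), (R, R)).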


Example 5: Infection 2

The type of player 1 who observes β or γ assigns probability 3/4 to β and 1/4 to γ; the type of
player 2 who observes α or β assigns probability 3/4 to α and 1/4 to β, and the type of player 2
who observes γ or δ assigns probability 3/4 to γ and 1/4 to δ. Player 1 also has a type who knows
that the state is α and a type who knows that the state is δ.

  State α                    State β                    State γ                    State δ
        L        R                 L        R                 L        R                 L        R
  L   (2, 2)   (0, 0)        L   (2, 2)   (0, 0)        L   (2, 2)   (0, 0)        L   (2, 2)   (0, 0)
  R   (3, 0)   (1, 1)        R   (0, 0)   (1, 1)        R   (0, 0)   (1, 1)        R   (0, 0)   (1, 1)

Consider state 𝛿. In this state, player 2 knows player 1’s preferences (because she knows that

the state is either 𝛾 or 𝛿, and in both states player 1’s preferences are the same). What player 2

does not know is whether player 1 knows whether player 2 knows player 1’s preferences. In

this game, we have 3 types of player 1 (the type who knows that the state is 𝛼, the type who

knows that the state is 𝛽 or 𝛾, and the type who knows that the state is 𝛿) and two types of

player 2 (the type who knows that the state is 𝛼 or 𝛽, and the type who knows that the state is

𝛾 or 𝛿).

t1^α:
        (L, L)   (L, R)   (R, L)   (R, R)
  L       2        2        0        0
  R       3        3        1        1

t1^βγ:
        (L, L)   (L, R)   (R, L)   (R, R)
  L       2       3/2      1/2       0
  R       0       1/4      3/4       1

t1^δ:
        (L, L)   (L, R)   (R, L)   (R, R)
  L       2        0        2        0
  R       0        1        0        1

t2^αβ:                                    t2^γδ:
               L       R                                 L       R
  (L, L, L)    2       0                    (L, L, L)    2       0
  (L, L, R)    2       0                    (L, L, R)   3/2     1/4
  (L, R, L)   3/2     1/4                   (L, R, L)   1/2     3/4
  (L, R, R)   3/2     1/4                   (L, R, R)    0       1
  (R, L, L)   1/2     3/4                   (R, L, L)    2       0
  (R, L, R)   1/2     3/4                   (R, L, R)   3/2     1/4
  (R, R, L)    0       1                    (R, R, L)   1/2     3/4
  (R, R, R)    0       1                    (R, R, R)    0       1


  s1           BR2                s2        BR1
  (L, L, L)    (L, L)             (L, L)    (R, L, L)
  (L, L, R)    (L, L)             (L, R)    (R, L, R)
  (L, R, L)    (L, R)             (R, L)    (R, R, L)
  (L, R, R)    (L, R)             (R, R)    (R, R, R)
  (R, L, L)    (R, L)
  (R, L, R)    (R, L)
  (R, R, L)    (R, R)
  (R, R, R)    (R, R)

NE: ((R, R, R), (R, R))


1.4. Illustration: Cournot’s duopoly game with imperfect information

1.4.1. Imperfect information about cost

Two firms compete in selling a homogeneous product; one firm does not know the other firm’s

cost function. We next study how this lack of information will affect the firms’ behavior.

Assume that both firms can produce the good at constant unit cost (that is, marginal cost is

constant and there are no fixed costs). Assume also that they both know that firm 1’s unit cost is 𝑐,

but only firm 2 knows its own unit cost; firm 1 believes that firm 2’s cost is 𝑐𝐿 with probability 𝜃

and 𝑐𝐻 with probability 1 − 𝜃, with 0 < 𝜃 < 1 and 𝑐𝐿 < 𝑐𝐻 . We can model this problem as a

Bayesian game.

The information structure in this game is similar to that in Example 2.1 (BoS with imperfect

information 1)

State L (probability θ): firm 2 is type tL.        State H (probability 1 − θ): firm 2 is type tH.
Firm 1 has a single type in both states.

We next describe the Bayesian game.

Players: Firm 1 and firm 2.

States: {L, H}.

Actions: Each firm’s set of actions is the set of nonnegative outputs.


Signals: Firm 1’s signal function 𝜏1 satisfies 𝜏1 (𝐿) = 𝜏1 (𝐻) = 𝑡1 (its signal is the same in both

states; one type of firm 1); firm 2's signal function 𝜏2 satisfies 𝜏2 (𝐿) ≠ 𝜏2 (𝐻), with 𝜏2 (𝐿) = 𝑡𝐿

and 𝜏2 (𝐻) = 𝑡𝐻 (its signal is perfectly informative of the state; two types of firm 2).

Beliefs: After receiving the (non informative) signal 𝑡1 , the single type of firm 1 assigns

probability 𝜃 to state 𝐿 and probability 1 − 𝜃 to state 𝐻. Each type of firm 2 assigns probability 1

to the single state consistent with its signal. That is, after receiving the signal 𝑡𝐿 , firm 2 assigns

probability 1 to the state 𝐿, and after receiving the signal 𝑡𝐻 , firm 2 assigns probability 1 to the

state 𝐻.

Payoffs: The firms’ Bernoulli payoffs are their profits; if the actions chosen are (𝑞1 , 𝑞2 ) and the

state is 𝐼 (either 𝐿 or 𝐻), then firm 1’s profit is [𝑝(𝑞1 + 𝑞2 ) − 𝑐]𝑞1 and firm 2’s profit is [𝑝(𝑞1 +

𝑞2 ) − 𝑐𝐼 ]𝑞2, where 𝑝(𝑞1 + 𝑞2 ) is the market price.

A Nash equilibrium of this game is a triple (𝑞1∗ , (𝑞𝐿∗ , 𝑞𝐻∗ )), where 𝑞1∗ is the output of firm 1, 𝑞𝐿∗

is the output of type 𝑡𝐿 of firm 2 (that is, firm 2 when it receives the signal 𝜏2 (𝐿)), and 𝑞𝐻∗ is

the output of type 𝑡𝐻 of firm 2 (that is, firm 2 when it receives the signal 𝜏2 (𝐻)), such that:

- 𝑞1∗ maximizes firm 1’s (expected) profit given the output 𝑞𝐿∗ of 𝑡𝐿 of firm 2 and the output 𝑞𝐻∗

of type 𝑡𝐻 of firm 2,

- 𝑞𝐿∗ maximizes the profit of type 𝑡𝐿 of firm 2 given the output 𝑞1∗ of firm 1 and

- 𝑞𝐻∗ maximizes the profit of type 𝑡𝐻 of firm 2 given the output 𝑞1∗ of firm 1.

To find a Cournot-Nash equilibrium, we first obtain the firms' best response functions. Given
firm 1's beliefs, its best response b1(qL, qH) to (qL, qH) solves:

    max_{q1 ≥ 0}  θ[p(q1 + qL) − c]q1 + (1 − θ)[p(q1 + qH) − c]q1.

Firm 2's best response bL(q1) to q1 when its cost is cL solves:

    max_{qL ≥ 0}  [p(q1 + qL) − cL]qL

and firm 2's best response bH(q1) to q1 when its cost is cH solves:

    max_{qH ≥ 0}  [p(q1 + qH) − cH]qH.

A Nash equilibrium is a combination of strategies (q1*, (qL*, qH*)) such that:

    q1* = b1(qL*, qH*),    qL* = bL(q1*)    and    qH* = bH(q1*).

Example 6. Cournot imperfect information and linear demand

Consider a Cournot duopoly game where the inverse demand function is 𝑝(𝑄) = 𝛼 − 𝑄 for

𝑄 ≤ 𝛼 and 𝑝(𝑄) = 0 for 𝑄 > 𝛼. Assuming that 𝑐𝐿 and 𝑐𝐻 are such that there is a Nash

equilibrium in which all outputs are positive, obtain such an equilibrium. Compare this

equilibrium with the Nash equilibrium of the (perfect information) game in which firm 1

knows that firm 2’s unit cost is 𝑐𝐿 and with the Nash equilibrium of the game in which firm 1

knows that firm 2’s unit cost is 𝑐𝐻 .

To find a Cournot-Nash equilibrium, we first obtain the firms' best response functions. Given
firm 1's beliefs, its best response b1(qL, qH) to (qL, qH) solves:

    max_{q1 ≥ 0}  θ[p(q1 + qL) − c]q1 + (1 − θ)[p(q1 + qH) − c]q1.

The first-order condition ∂π1/∂q1 = 0 gives

    θ[p(q1 + qL) + q1 p′(q1 + qL) − c] + (1 − θ)[p(q1 + qH) + q1 p′(q1 + qH) − c] = 0
    θ[α − 2q1 − qL − c] + (1 − θ)[α − 2q1 − qH − c] = 0
    α − 2q1 − c − [θqL + (1 − θ)qH] = 0,

so, taking the interior solution (in general b1(qL, qH) = max{(α − c − [θqL + (1 − θ)qH])/2, 0}),

    b1(qL, qH) = (α − c − [θqL + (1 − θ)qH]) / 2.

Firm 2's best response bL(q1) to q1 when its cost is cL solves:

    max_{qL ≥ 0}  [p(q1 + qL) − cL]qL

with first-order condition ∂πL/∂qL = 0:

    p(q1 + qL) + qL p′(q1 + qL) − cL = 0  →  α − q1 − 2qL − cL = 0
    →  bL(q1) = (α − cL − q1) / 2    (in general, max{(α − cL − q1)/2, 0}).

Firm 2's best response bH(q1) to q1 when its cost is cH solves:

    max_{qH ≥ 0}  [p(q1 + qH) − cH]qH

with first-order condition ∂πH/∂qH = 0:

    p(q1 + qH) + qH p′(q1 + qH) − cH = 0  →  α − q1 − 2qH − cH = 0
    →  bH(q1) = (α − cH − q1) / 2    (in general, max{(α − cH − q1)/2, 0}).

A Nash equilibrium is a combination of strategies (q1*, (qL*, qH*)) such that:

    q1* = b1(qL*, qH*) = (α − c − [θqL* + (1 − θ)qH*]) / 2
    qL* = bL(q1*) = (α − cL − q1*) / 2
    qH* = bH(q1*) = (α − cH − q1*) / 2.

Substituting qL* and qH* into the first equation:

    q1* = (α − c − [θ(α − cL − q1*)/2 + (1 − θ)(α − cH − q1*)/2]) / 2
        = (α − 2c + θcL + (1 − θ)cH + q1*) / 4

    →  q1* = (α − 2c + θcL + (1 − θ)cH) / 3.

Then

    qL* = bL(q1*) = (α − cL − (α − 2c + θcL + (1 − θ)cH)/3) / 2
        = (2α − 3cL + 2c − θcL − (1 − θ)cH) / 6
        = (α − 2cL + c)/3 − (1 − θ)(cH − cL)/6

    qH* = bH(q1*) = (α − cH − (α − 2c + θcL + (1 − θ)cH)/3) / 2
        = (2α − 3cH + 2c − θcL − (1 − θ)cH) / 6
        = (α − 2cH + c)/3 + θ(cH − cL)/6.

Perfect Information

If firm 1 knows that firm 2’s unit cost is 𝑐𝐿 , equilibrium outputs are:

𝛼 − 2𝑐 + 𝑐𝐿 𝛼 − 2𝑐𝐿 + 𝑐
𝑞1∗ = 𝑞2∗ =
3 3

-34-
Uncertainty and Contracts Chapter 1. Bayesian Games in Normal Form

If firm 1 knows that firm 2’s unit cost is 𝑐𝐻 , equilibrium outputs are:

𝛼 − 2𝑐 + 𝑐𝐻 𝛼 − 2𝑐𝐻 + 𝑐
𝑞1∗ = 𝑞2∗ =
3 3

Therefore, in comparison with the perfect information case, under imperfect information the

low-cost firm 2 would produce a lower quantity and the high-cost firm 2 would produce a

greater quantity.
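
The closed-form solution is easy to verify numerically. The Python sketch below (illustrative parameter values, not from the notes) solves the three best-response equations by fixed-point iteration and compares the result with the formulas derived above:

# Illustrative sketch, not from the notes; parameter values are arbitrary.
alpha, c, cL, cH, theta = 10.0, 2.0, 1.0, 3.0, 0.5

# Best responses under linear inverse demand p(Q) = alpha - Q (interior case).
b1 = lambda qL, qH: (alpha - c - (theta * qL + (1 - theta) * qH)) / 2
bL = lambda q1: (alpha - cL - q1) / 2
bH = lambda q1: (alpha - cH - q1) / 2

q1 = qL = qH = 0.0
for _ in range(200):                       # iterate to the fixed point
    q1, qL, qH = b1(qL, qH), bL(q1), bH(q1)

# Closed-form equilibrium outputs derived in the text.
q1_star = (alpha - 2 * c + theta * cL + (1 - theta) * cH) / 3
qL_star = (alpha - 2 * cL + c) / 3 - (1 - theta) * (cH - cL) / 6
qH_star = (alpha - 2 * cH + c) / 3 + theta * (cH - cL) / 6

print(round(q1, 6), round(q1_star, 6))    # both 2.666667
print(round(qL, 6), round(qL_star, 6))    # both 3.166667; perfect-information value 3.333333
print(round(qH, 6), round(qH_star, 6))    # both 2.166667; perfect-information value 2.0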

1.4.2. Imperfect information about both cost and information

Now assume that firm 2 does not know whether firm 1 knows firm 2’s cost. That is, suppose that

one circumstance that firm 2 believes to be possible is that firm 1 knows its cost (although in fact it

does not). Because firm 2 thinks this circumstance to be possible, we need four states to model this

situation which we call 𝐿0, 𝐻0, 𝐿1, and 𝐻1, with the following interpretation.

𝐿0: firm 2’s cost is low and firm 1 does not know whether it is low or high.

𝐻0: firm 2’s cost is high and firm 1 does not know whether it is low or high.

𝐿1: firm 2’s cost is low and firm 1 knows it is low.

𝐻1: firm 2’s cost is high and firm 1 knows it is high.

Firm 1 receives one of three possible signals, 0, 𝑙, and ℎ. The states 𝐿0 and 𝐻0 generate the signal

0 (firm 1 does not know firm 2’s cost), the state 𝐿1 generates the signal 𝑙 (firm 1 knows firm 2’s

cost is low), and the state 𝐻1 generates the signal ℎ (firm 1 knows firm 2’s cost is high). Firm 2

receives one of two possible signals, 𝐿, in states 𝐿0 and 𝐿1, and 𝐻, in states 𝐻0 and 𝐻1. Denote by

𝜃 (as before) the probability assigned by type 0 of firm 1 to firm 2’s cost being 𝑐𝐿 , and by 𝜇 the


probability assigned by each type of firm 2 to firm 1’s knowing firm 2’s cost (the case 𝜇 = 0 is

equivalent to the one considered in subsection 1.4.1).

The information structure in this game is as follows: firm 1's type 0 assigns probability θ to
state L0 and probability 1 − θ to state H0; firm 1's types l and h know that the state is L1 and H1
respectively; firm 2's type L assigns probability 1 − μ to state L0 and μ to state L1, and firm 2's
type H assigns probability 1 − μ to state H0 and μ to state H1.

A Bayesian game that models the situation is defined as follows.

Players: Firm 1 and Firm 2.

States: {𝐿0, 𝐿1, 𝐻0, 𝐻1}, where the first letter in the name of the state indicates firm 2’s cost and

the second letter indicates whether firm 1 knows (1) or does not know (0) firm 2’s cost.

Actions: Each firm’s set of actions is the set of its possible (nonnegative) outputs.

Signals: Firm 1 gets one of the signals 0, 𝑙, and ℎ, and its signal function 𝜏1 satisfies 𝜏1 (𝐿0) =

𝜏1 (𝐻0) = 0, 𝜏1 (𝐿1) = 𝑙, and 𝜏1 (𝐻1) = ℎ. Firm 2 gets the signal 𝐿 or 𝐻 and its signal function 𝜏2

satisfies 𝜏2 (𝐿0) = 𝜏2 (𝐿1) = 𝐿 and 𝜏2 (𝐻0) = 𝜏2 (𝐻1) = 𝐻.


Beliefs: After receiving the (non informative) signal 0 firm 1 assigns probability 𝜃 to state 𝐿0 and

probability 1 − 𝜃 to state 𝐻0; after receiving the signal 𝑙 firm 1 assigns probability 1 to state 𝐿1;

after receiving the signal ℎ firm 1 assigns probability 1 to state 𝐻1. After receiving the signal 𝐿 firm

2 assigns probability 𝜇 to state 𝐿1 and probability 1 − 𝜇 to state 𝐿0; after receiving the signal 𝐻

firm 2 assigns probability 𝜇 to state 𝐻1 and probability 1 − 𝜇 to state 𝐻0.

Payoff functions: The firms’ Bernoulli payoffs are their profits; if the actions chosen are (𝑞1 , 𝑞2 ),

then firm 1’s profit is [𝑝(𝑞1 + 𝑞2 ) − 𝑐]𝑞1 and firm 2’s profit is [𝑝(𝑞1 + 𝑞2 ) − 𝑐𝐿 ]𝑞2 in states 𝐿0

and 𝐿1, and [𝑝(𝑞1 + 𝑞2 ) − 𝑐𝐻 ]𝑞2 in states 𝐻0 and 𝐻1.

Example 7. Cournot, imperfect information about cost and information and linear demand

Write down the maximization problems that determine the best response function of each type

of each player. Denote by 𝑞0 , 𝑞𝑙 , and 𝑞ℎ the outputs of types 0, 𝑙, and h of firm 1, and by 𝑞𝐿

and 𝑞𝐻 the outputs of types 𝐿 and 𝐻 of firm 2. Suppose that the inverse demand function is

𝑝(𝑄) = 𝛼 − 𝑄 for 𝑄 ≤ 𝛼 and 𝑝(𝑄) = 0 for 𝑄 > 𝛼. Assuming that 𝑐𝐿 and 𝑐𝐻 are such that

there is a Nash equilibrium in which all outputs are positive, obtain such an equilibrium. Check

that when 𝜇 = 0 the equilibrium output of type 0 of firm 1 is equal to the equilibrium output

of firm 1 corresponding to exercise 6, and that the equilibrium outputs of the two types of

firm 2 are the same as the ones corresponding to that exercise. Check also that when 𝜇 = 1

the equilibrium outputs of type l of firm 1 and type L of firm 2 are the same as the equilibrium

outputs when there is perfect information and the costs are c and 𝑐𝐿 , and that the equilibrium

outputs of type h of firm 1 and type H of firm 2 are the same as the equilibrium outputs when

there is perfect information and the costs are c and 𝑐𝐻 . Show that for 0 < 𝜇 < 1, the


equilibrium outputs of type L and H of firm 2 lie between their values when 𝜇 = 0 and when

𝜇 = 1.

The best response b0(qL, qH) of type 0 of firm 1 is the solution of:

    max_{q0 ≥ 0}  θ[p(q0 + qL) − c]q0 + (1 − θ)[p(q0 + qH) − c]q0.

The first-order condition ∂π0/∂q0 = 0 gives

    θ[p(q0 + qL) + q0 p′(q0 + qL) − c] + (1 − θ)[p(q0 + qH) + q0 p′(q0 + qH) − c] = 0
    θ[α − 2q0 − qL − c] + (1 − θ)[α − 2q0 − qH − c] = 0
    α − 2q0 − c − [θqL + (1 − θ)qH] = 0

    →  b0(qL, qH) = (α − c − [θqL + (1 − θ)qH]) / 2    (in general, max{·, 0}).

The best response bl(qL, qH) of type l of firm 1 is the solution of:

    max_{ql ≥ 0}  [p(ql + qL) − c]ql.

The first-order condition ∂πl/∂ql = 0 gives

    p(ql + qL) + ql p′(ql + qL) − c = 0  →  α − 2ql − qL − c = 0

    →  bl(qL, qH) = (α − c − qL) / 2    (in general, max{(α − c − qL)/2, 0}).


The best response bh(qL, qH) of type h of firm 1 is the solution of:

    max_{qh ≥ 0}  [p(qh + qH) − c]qh.

The first-order condition ∂πh/∂qh = 0 gives

    p(qh + qH) + qh p′(qh + qH) − c = 0  →  α − 2qh − qH − c = 0

    →  bh(qL, qH) = (α − c − qH) / 2    (in general, max{(α − c − qH)/2, 0}).

The best response bL(q0, ql, qh) of type L of firm 2 is the solution of:

    max_{qL ≥ 0}  (1 − μ)[p(q0 + qL) − cL]qL + μ[p(ql + qL) − cL]qL.

The first-order condition ∂πL/∂qL = 0 gives

    (1 − μ)[p(q0 + qL) + qL p′(q0 + qL) − cL] + μ[p(ql + qL) + qL p′(ql + qL) − cL] = 0
    (1 − μ)[α − q0 − 2qL − cL] + μ[α − ql − 2qL − cL] = 0
    α − 2qL − cL − [(1 − μ)q0 + μql] = 0

    →  bL(q0, ql, qh) = (α − cL − [(1 − μ)q0 + μql]) / 2    (in general, max{·, 0}).

The best response bH(q0, ql, qh) of type H of firm 2 is the solution of:

    max_{qH ≥ 0}  (1 − μ)[p(q0 + qH) − cH]qH + μ[p(qh + qH) − cH]qH.

The first-order condition ∂πH/∂qH = 0 gives

    (1 − μ)[p(q0 + qH) + qH p′(q0 + qH) − cH] + μ[p(qh + qH) + qH p′(qh + qH) − cH] = 0
    (1 − μ)[α − q0 − 2qH − cH] + μ[α − qh − 2qH − cH] = 0
    α − 2qH − cH − [(1 − μ)q0 + μqh] = 0

    →  bH(q0, ql, qh) = (α − cH − [(1 − μ)q0 + μqh]) / 2    (in general, max{·, 0}).

A Nash equilibrium is a strategy profile ((q0*, ql*, qh*), (qL*, qH*)) such that:

    q0* = b0(qL*, qH*) = (α − c − [θqL* + (1 − θ)qH*]) / 2
    ql* = bl(qL*, qH*) = (α − c − qL*) / 2
    qh* = bh(qL*, qH*) = (α − c − qH*) / 2
    qL* = bL(q0*, ql*, qh*) = (α − cL − [(1 − μ)q0* + μql*]) / 2
    qH* = bH(q0*, ql*, qh*) = (α − cH − [(1 − μ)q0* + μqh*]) / 2.

Substituting ql* into the equation for qL*:

    qL* = (α − cL − [(1 − μ)q0* + μ(α − c − qL*)/2]) / 2
        = (2α − 2cL − [2(1 − μ)q0* + μ(α − c − qL*)]) / 4

    →  qL* = (2α − 2cL − [2(1 − μ)q0* + μ(α − c)]) / (4 − μ)

and, analogously,

    →  qH* = (2α − 2cH − [2(1 − μ)q0* + μ(α − c)]) / (4 − μ).

Substituting these into the equation for q0* = (α − c)/2 − (1/2)[θqL* + (1 − θ)qH*]:

    2(4 − μ)q0* = (α − c)(4 − μ) − 2αθ + 2θcL + 2θ(1 − μ)q0* + θμ(α − c)
                  − 2(1 − θ)α + 2(1 − θ)cH + 2(1 − θ)(1 − μ)q0* + (1 − θ)μ(α − c)

    2(4 − μ)q0* − 2(1 − μ)q0* = (α − c)(4 − μ) − 2αθ + 2θcL + μ(α − c) − 2(1 − θ)α + 2(1 − θ)cH

    6q0* = 4(α − c) − 2α + 2θcL + 2(1 − θ)cH = 2α − 4c + 2θcL + 2(1 − θ)cH

    →  q0* = (α − 2c + θcL + (1 − θ)cH) / 3.

Substituting back:

    qL* = (1/3)[α − 2cL + c − 2(1 − θ)(1 − μ)(cH − cL)/(4 − μ)]
    qH* = (1/3)[α − 2cH + c + 2θ(1 − μ)(cH − cL)/(4 − μ)]
    ql* = (1/3)[α − 2c + cL + (1 − θ)(1 − μ)(cH − cL)/(4 − μ)]
    qh* = (1/3)[α − 2c + cH − θ(1 − μ)(cH − cL)/(4 − μ)].

When μ = 0:

    q0* = (α − 2c + θcL + (1 − θ)cH) / 3
    qL* = (1/3)[α − 2cL + c − (1 − θ)(cH − cL)/2]
    qH* = (1/3)[α − 2cH + c + θ(cH − cL)/2]
    ql* = (1/3)[α − 2c + cL + (1 − θ)(cH − cL)/4]
    qh* = (1/3)[α − 2c + cH − θ(cH − cL)/4]

So that 𝑞0∗ is equal to the equilibrium output of firm 1 in exercise 6, and 𝑞𝐿∗ and 𝑞𝐻∗ are the

same as the equilibrium outputs of the two types of firm 2 in that exercise.

When μ = 1:

    q0* = (α − 2c + θcL + (1 − θ)cH) / 3
    qL* = (α − 2cL + c) / 3
    qH* = (α − 2cH + c) / 3
    ql* = (α − 2c + cL) / 3
    qh* = (α − 2c + cH) / 3

So that 𝑞𝑙∗ and 𝑞𝐿∗ are the same as the equilibrium outputs when there is perfect information

and the costs are c and 𝑐𝐿 , and 𝑞ℎ∗ and 𝑞𝐻∗ are the same as the equilibrium outputs when there

is perfect information and the costs are c and 𝑐𝐻 .

For an arbitrary value of μ, we have:

    qL* = (1/3)[α − 2cL + c − 2(1 − θ)(1 − μ)(cH − cL)/(4 − μ)]
    qH* = (1/3)[α − 2cH + c + 2θ(1 − μ)(cH − cL)/(4 − μ)].

To show that for 0 < μ < 1 the values of these variables lie between their values when μ = 0
and when μ = 1, we need to show that

    0 ≤ 2(1 − θ)(1 − μ)(cH − cL)/(4 − μ) ≤ (1 − θ)(cH − cL)/2
    0 ≤ 2θ(1 − μ)(cH − cL)/(4 − μ) ≤ θ(cH − cL)/2,

which holds since cH ≥ cL, 0 ≤ θ ≤ 1 and 0 ≤ μ ≤ 1 (note that 2(1 − μ)/(4 − μ) ≤ 1/2 for every μ ∈ [0, 1]).
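
A quick numerical check of the limit cases and of this monotonicity claim (illustrative Python sketch with arbitrary parameter values, not from the notes):

# Illustrative sketch, not from the notes.
def outputs(alpha, c, cL, cH, theta, mu):
    # Closed-form equilibrium outputs derived above (interior case).
    q0 = (alpha - 2 * c + theta * cL + (1 - theta) * cH) / 3
    k = (1 - mu) * (cH - cL) / (4 - mu)
    qL = (alpha - 2 * cL + c - 2 * (1 - theta) * k) / 3
    qH = (alpha - 2 * cH + c + 2 * theta * k) / 3
    ql = (alpha - 2 * c + cL + (1 - theta) * k) / 3
    qh = (alpha - 2 * c + cH - theta * k) / 3
    return q0, ql, qh, qL, qH

args = (10.0, 2.0, 1.0, 3.0, 0.5)           # alpha, c, cL, cH, theta (arbitrary values)

print(outputs(*args, mu=0.0))               # qL, qH coincide with the Example 6 values
print(outputs(*args, mu=1.0))               # ql, qL and qh, qH coincide with the perfect-information values
for mu in (0.0, 0.25, 0.5, 0.75, 1.0):      # qL increases and qH decreases in mu
    _, _, _, qL, qH = outputs(*args, mu=mu)
    print(mu, round(qL, 4), round(qH, 4))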


3.1/ Introduction

Introduction:
The model of a strategic game suppresses the sequential structure of decision-making: each decision-maker chooses her
plan of action once and for all; she is committed to this plan, which she cannot modify. The model of an extensive game
allows us to study situations in which each decision-maker is free to change her mind as events unfold: the sequential
structure of decision-making is explicitly described. To describe an extensive game with perfect information, we need
to specify: the set of players and their preferences (as for a strategic game) ; the order of the players’ moves and the
actions each player may take at each point, by specifying the set of all sequences of actions that can possibly occur ; the
player who moves at each point in each sequence. Each possible sequence of actions is a terminal history ; the function
that gives the player who moves at each point in each terminal history is the player function. An extensive game has
four components: 1 players; 2 terminal histories; 3 player function; 4 preferences for the players. Example: An
incumbent faces the possibility of entry by a challenger; The challenger may enter or not; If it enters, the incumbent
may either acquiesce or fight. We may model this situation as an extensive game with perfect information in which the
terminal histories are (In, Acquiesce), (In, Fight), and Out; the player function assigns the challenger to the start of the
game and the incumbent to the history In.

Actions: At the start of an extensive game, and after any sequence of events, a player chooses an action. The sets of
actions available to the players are not given explicitly. What is specified is the set of terminal histories and the player
function, from which we can deduce the available sets of actions. In the entry game example: the actions available to the
challenger at the start of the game are In and Out, because these actions (and no others) begin terminal histories; the
actions available to the incumbent are Acquiesce and Fight, because these actions (and no others) follow In in terminal
histories.

Terminal histories:
The terminal histories of a game are specified as a set of sequences. Not every set of sequences is a legitimate set of
terminal histories. If (C, D) is a terminal history, C should not be specified as a terminal history: after C is chosen at the
start of the game, some player may choose D, so that the action C does not end the game. A sequence that is a proper
subhistory of a terminal history cannot itself be a terminal history. This is the only restriction we need to impose on a set
of sequences so that the set be interpretable as a set of terminal histories.

Subhistories:
Define the subhistories of a finite sequence (a1, a2, ..., ak) of actions to be 1 the empty history consisting of no
actions, denoted ∅, and 2 all sequences of the form (a1, a2, ..., am), where 1 ≤ m ≤ k. Similarly, define the subhistories
of an infinite sequence (a1, a2, ...) of actions to be 1 the empty history consisting of no actions, denoted ∅, 2 all
sequences of the form (a1, a2, ..., am), where m ≥ 1, and 3 the entire sequence (a1, a2, ...). A subhistory not equal
to the entire sequence is called a proper subhistory. A sequence of actions that is a subhistory of some terminal history is
called simply a history. In the entry game example: The subhistories of (In, Acquiesce) are the empty history ∅ and the
sequences In and (In, Acquiesce). The proper subhistories are the empty history and the sequence In.
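
A direct translation of this definition into code (a Python illustration, not part of the original notes):

def subhistories(history):
    # All prefixes of a finite history, including the empty history () and the history itself.
    return [tuple(history[:m]) for m in range(len(history) + 1)]

print(subhistories(("In", "Acquiesce")))
# [(), ('In',), ('In', 'Acquiesce')] -- the proper subhistories are () and ('In',)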

Martin J. Osborne, An Introduction to Game Theory, International Edition, Oxford University Press, 2012.

3.2/ Representation of an extensive game

Definition:
An extensive game with perfect information consists of a set of players; a set of sequences (terminal histories) with
the property that no sequence is a proper subhistory of any other sequence; a function (the player function) that assigns
a player to every sequence that is a proper subhistory of some terminal history; for each player, preferences over the set
of terminal histories. The set of terminal histories is the set of all sequences of actions that may occur. The player assigned
by the player function to any history h is the player who takes an action after h. We may specify a player’s preferences
by giving a payoff function that represents them.

Example: Entry game


Suppose that the best outcome for the challenger is that it enters and the incumbent acquiesces; the worst outcome for
the challenger is that it enters and the incumbent fights; the best outcome for the incumbent is that the challenger stays
out; and the worst outcome for the incumbent is that the challenger enters and there is a fight. Then the situation may be modeled as
an extensive game with perfect information.

Players: The challenger and the incumbent.


Terminal histories: (In, Acquiesce), (In, Fight), and Out.
Player function: P(∅) = Challenger and P(In) = Incumbent.
Preferences:
Challenger’s preferences are represented by the payoff function u1:
u1(In, Acquiesce) = 2, u1(Out) = 1, u1(In, Fight) = 0.
Incumbent's preferences are represented by the payoff function u2:
u2(In, Acquiesce) = 1, u2(Out) = 2, u2(In, Fight) = 0.

Representation of an extensive game:


The Entry game is readily illustrated in a
diagram:
The small circle at the top represents the empty history (the
start of the game). The label above a node indicates the
player who chooses an action. The branches represent the
player’s choices. The pair of numbers beneath each
terminal history gives the players’ payoffs to that history.
Actions: The sets of actions available to the players at their various moves are not directly specified. These
can be deduced from the set of terminal histories and the player function. If, for some nonterminal history h,
the sequence (h, a) is a history, then a is one of the actions available to the player who moves after h. Thus the
set of all actions available to the player who moves after h is A(h) = {a : (h, a) is a history} .


3.3/ Backward Induction

For example, for the entry game: The histories are ∅, In, Out, (In,
Acquiesce), and (In, Fight). The set of actions available to the challenger
who moves at the start of the game is A(∅) = {In, Out}. The set of actions
available to the incumbent who moves after the history In is A(In) =
{Acquiesce, Fight}.

Finiteness: Terminal histories are allowed to be infinitely long. If the


length of the longest terminal history is in fact finite, we say that the
game has a finite horizon. A game with a finite horizon may have
infinitely many terminal histories: some player might have infinitely many actions after some history. If a game has a
finite horizon and finitely many terminal histories we say it is finite. A game that is not finite cannot be represented in a
diagram!

Perfect information:
An extensive game with perfect information models a situation in which each player, when choosing an action, 1 knows
all actions chosen previously (has perfect information), and 2 always moves alone (rather than simultaneously with
other players). The model encompasses several situations: A race (e.g., between firms developing a new technology) is
modeled as an extensive game in which the parties alternately decide how much effort to expend. Parlor games such as
chess, in which there are no random events, the players move sequentially, and each player always knows all actions
taken previously, may also be modeled as extensive games with perfect information.

Backward induction:
In the entry game: The challenger will enter and the incumbent will subsequently acquiesce. The challenger can reason
that if it enters then the incumbent will acquiesce, because doing so is
better for the incumbent than fighting. Given that the incumbent will
respond to entry in this way, the challenger is better off entering.

This line of argument is called backward induction: A player who


has to move deduces, for each of her possible actions, the actions that the
players (including herself) will subsequently rationally take, and chooses
the action that yields the terminal history she most prefers.
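
For finite games, backward induction is easy to implement once the terminal histories, payoffs and player function are listed. The following minimal Python sketch for the entry game is illustrative only (the data representation is ours, not the book's):

# Illustrative sketch, not from the notes.
# Entry game: terminal histories with payoffs (challenger, incumbent), and the player function.
payoffs = {("Out",): (1, 2), ("In", "Acquiesce"): (2, 1), ("In", "Fight"): (0, 0)}
player = {(): 0, ("In",): 1}            # 0 = challenger, 1 = incumbent

def actions(h):
    # Actions available after history h: those a for which (h, a) begins some terminal history.
    return sorted({th[len(h)] for th in payoffs if th[:len(h)] == h and len(th) > len(h)})

def backward_induction(h=()):
    if h in payoffs:                    # terminal history: nothing left to choose
        return h, payoffs[h]
    i = player[h]
    best = None
    for a in actions(h):
        outcome, pay = backward_induction(h + (a,))
        if best is None or pay[i] > best[1][i]:
            best = (outcome, pay)
    return best

print(backward_induction())             # (('In', 'Acquiesce'), (2, 1))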

Backward induction cannot be applied to every
extensive game with perfect information: in a variant of the
entry game in which, if the challenger enters, the incumbent is
indifferent between acquiescing and fighting, backward induction
does not tell the challenger what the incumbent will do.
the challenger what the incumbent will do in this case.
Games with infinitely long histories present another
difficulty for backward induction.


3.4/ Strategies and outcomes

Nash equilibrium: Another approach to defining equilibrium takes off from the notion of Nash equilibrium: It seeks
to model patterns of behavior that can persist in a steady state. The resulting notion of equilibrium applies to all extensive
games with perfect information. In games in which backward induction is well-defined, this approach turns out to lead
to the backward induction outcome, so that there is no conflict between the two ideas.

Strategies: A player’s strategy specifies the action the player chooses for every history after which it is her turn to
move. A strategy of player i in an extensive game with perfect information is a function that assigns to each history h
after which it is player i’s turn to move (i.e. P(h) = i, where P is the player function) an action in A(h) (the set of actions
available after h).

an example,

Player 1 moves only at the start of the game (i.e. after the empty history), when
the actions available to her are C and D. Thus she has two strategies: one that
assigns C to the empty history, and one that assigns D to the empty history. Player
2 moves after both the history C and the history D. After C the actions available
to her are E and F, and after D the actions available to her are G and H. Thus a
strategy of player 2 is a function that assigns either E or F to the history C, and either G or
H to the history D. That is, player 2 has four strategies: EG, EH, FG, FH.

Each of player 2’s strategies may be interpreted as a plan of action or contingency plan: it specifies what player 2 does if
player 1 chooses C, and what she does if player 1 chooses D. A player’s strategy provides sufficient information to
determine her plan of action: the actions she intends to take, whatever the other players do.

Outcomes:
The outcome of the strategy pair (DG, E) is the terminal history D. The outcome
of (CH, E) is the terminal history (C, E, H). Note that the outcome O(s) of the
strategy profile s depends only on the players’ plans of action, not their full
strategies. To determine O(s) we do not need to refer to any component of any
player’s strategy that specifies her actions after histories precluded by that
strategy.


3.5/ Nash equilibrium

Nash equilibrium: a strategy profile from which no player wishes to deviate, given the other players’ strategies. One
way to find the Nash equilibria of an extensive game in which each player has finitely many strategies is to 1 list each
player’s strategies, 2 find the outcome of each strategy profile, and 3 analyze this information as for a strategic game.

Strategic form: We construct the following strategic game, known as the strategic form of the extensive game:
Players: the set of players in the extensive game. Actions: Each player’s set of actions is her set of strategies in the
extensive game. Preferences: Each player’s payoff to each action profile is her payoff to the terminal history generated
by that action profile in the extensive game. The set of Nash equilibria of any extensive game with perfect information is
the set of Nash equilibria of its strategic form.

Example: the entry game


The challenger has two strategies, In and Out; the incumbent has two strategies, Acquiesce and Fight.
Strategic form of the game:

                          Incumbent
                    Acquiesce      Fight
Challenger   In      (2, 1)       (0, 0)
             Out     (1, 2)       (1, 2)

Two Nash equilibria: (In, Acquiesce) and (Out, Fight). Equilibrium (In, Acquiesce) is the pattern of behavior isolated by
backward induction. In equilibrium (Out, Fight) the challenger always chooses Out; this strategy is optimal given the
incumbent's strategy to fight in the event of entry; the incumbent's strategy Fight is optimal given the challenger's strategy.
Thus neither player can increase its payoff by choosing a different strategy, given the other player's strategy.
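
The Nash equilibria of the strategic form can be found by checking every strategy profile for profitable deviations. A short Python sketch (illustrative only, not from the book) for the entry game:

# Illustrative sketch, not from the notes.
# Strategic form of the entry game: payoffs (challenger, incumbent).
U = {("In",  "Acquiesce"): (2, 1), ("In",  "Fight"): (0, 0),
     ("Out", "Acquiesce"): (1, 2), ("Out", "Fight"): (1, 2)}
S1, S2 = ["In", "Out"], ["Acquiesce", "Fight"]

nash = [(s1, s2) for s1 in S1 for s2 in S2
        if all(U[(s1, s2)][0] >= U[(t1, s2)][0] for t1 in S1)
        and all(U[(s1, s2)][1] >= U[(s1, t2)][1] for t2 in S2)]

print(nash)   # [('In', 'Acquiesce'), ('Out', 'Fight')]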


3.6/ Subgame perfect equilibrium

Subgame perfect equilibrium: The notion of Nash equilibrium ignores the sequential structure of an extensive
game. It treats strategies as choices made once and for all before play begins. Subgame perfect equilibrium is a notion of
equilibrium that models a robust steady state. Each player’s strategy is optimal, given the other players’ strategies, not
only at the start of the game, but after every possible history.

Subgames:
We first define the notion of a subgame: For any nonterminal history h, the
subgame following h is the part of the game that remains after h has occurred.
For example, the subgame following the history In in the entry game is the game
in which the incumbent is the only player, and there are two terminal histories,
Acquiesce and Fight. The subgame following the empty history ∅ is the entire
game. Every other subgame is called a proper subgame. Because there is a
subgame for every nonterminal history, the number of subgames is equal to the
number of nonterminal histories.

Example:
The game in the earlier figure (in which player 1 chooses C or D and player 2 then moves) has three nonterminal histories (the empty history, C, and D), and hence three subgames: the whole game (the part of the game following the empty history), the game following the history C, and the game following the history D.

Subgame perfect equilibrium: In an equilibrium that corresponds to a perturbed steady state in which every history
sometimes occurs, the players’ behavior must correspond to a steady state in every subgame, not only in the whole game.
Definition: A subgame perfect equilibrium is a strategy profile s* with the property that in no subgame can any player i do better by choosing a strategy different from s*_i, given that every other player j adheres to s*_j.

The Nash equilibrium (Out, Fight) of the entry game is not a subgame perfect equilibrium: in the subgame following the history In, the strategy Fight is not optimal for the incumbent, since the incumbent is better off choosing Acquiesce than choosing Fight. The Nash equilibrium (In, Acquiesce) is a subgame perfect equilibrium: each player's strategy is optimal, given the other player's strategy, both in the whole game and in the subgame following the history In.

Subgame perfect equilibrium and Nash equilibrium: In a subgame perfect equilibrium every player’s strategy
is optimal, in particular, after the empty history. Thus: Every subgame perfect equilibrium is a Nash equilibrium. A
subgame perfect equilibrium generates a Nash equilibrium in every subgame. Further, any strategy profile that generates
a Nash equilibrium in every subgame is a subgame perfect equilibrium.

DEFINITION: A subgame perfect equilibrium is a strategy profile that induces a Nash equilibrium in every
subgame.


EXAMPLE: The Entry game


The Nash equilibrium (In, Acquiesce) is a subgame perfect equilibrium because (1) it is a Nash equilibrium, so that at the start of the game the challenger's strategy In is optimal, given the incumbent's strategy Acquiesce, and (2) after the history In, the incumbent's strategy Acquiesce in the subgame is optimal.

Example: variant of the entry game


Two Nash equilibria, (In, Acquiesce) and (Out, Fight). Both of these equilibria are subgame perfect equilibria, because
after the history In both Fight and Acquiesce are optimal for the incumbent.

Interpretation of subgame perfect equilibria: A subgame perfect equilibrium of an extensive game


corresponds to a steady state in which all players, on rare occasions, take nonequilibrium actions, so that after long
experience each player forms correct beliefs about the other players’ entire strategies, and thus knows how the other
players will behave in every subgame. Given these beliefs, no player wishes to deviate from her strategy either at the
start of the game or after any history. This interpretation does not require a player to know the other players’ preferences,
or to think about the other players’ rationality. It entails interpreting a strategy as a plan specifying a player’s actions not
only after histories consistent with the strategy, but also after histories that result when the player chooses arbitrary
alternative actions, perhaps because she makes mistakes or deliberately experiments.


3.7/ Finding subgame perfect equilibria

Finite horizon games: We can find the subgame perfect equilibria by finding the Nash equilibria and checking whether
each of these equilibria is subgame perfect. In a game with a finite horizon the set of subgame perfect equilibria may be
found more directly by using an extension of the procedure of backward induction.

Backward induction: The length of a subgame is the length of the longest history in the subgame. The procedure of backward induction works as follows: (1) we start by finding the optimal actions of the players who move in the subgames of length 1 (the "last" subgames); (2) taking these actions as given, we find the optimal actions of the players who move first in the subgames of length 2; (3) we continue working back to the beginning of the game, at each stage k finding the optimal actions of the players who move at the start of the subgames of length k, given the optimal actions we have found in all shorter subgames.

Backward induction: an example


Subgames of length 1: The game has two such subgames, in both of which
player 2 moves. In the subgame following C, player 2’s optimal action is
E. In the subgame following D, her optimal action is H.

Subgames of length 2: The game has one such subgame, namely the entire
game, at the start of which player 1 moves. Given the optimal actions in
the subgames of length 1: player 1’s choosing C at the start of the game
yields her a payoff of 2, whereas her choosing D yields her a payoff of 1. Thus player 1’s optimal action at the start of
the game is C.

⇒ The game has no subgame of length greater than 2, so the procedure of backward induction yields the strategy pair
(C, EH).
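The procedure is easy to mechanize. The sketch below (not part of the original notes) runs backward induction on a game of this shape; since the figure is not reproduced, the payoffs are illustrative and chosen only to be consistent with the description above (after C player 2 prefers E, after D she prefers H, and player 1 obtains 2 from (C, E) and 1 from (D, H)):

    # Backward induction on a two-stage game with the shape described above.
    # Payoffs (u1, u2) are illustrative assumptions consistent with the text.
    def backward_induction(node):
        """Return (payoffs, strategy) for the subgame rooted at node."""
        if isinstance(node, tuple):                      # terminal history
            return node, {}
        player = node["player"]
        best_action, best_payoffs, strategy = None, None, {}
        for action, child in node["actions"].items():
            payoffs, sub_strategy = backward_induction(child)
            strategy.update(sub_strategy)
            if best_payoffs is None or payoffs[player - 1] > best_payoffs[player - 1]:
                best_action, best_payoffs = action, payoffs
        strategy[node["history"]] = (player, best_action)
        return best_payoffs, strategy

    game = {
        "history": "", "player": 1, "actions": {
            "C": {"history": "C", "player": 2, "actions": {"E": (2, 1), "F": (3, 0)}},
            "D": {"history": "D", "player": 2, "actions": {"G": (0, 0), "H": (1, 2)}},
        },
    }
    payoffs, strategy = backward_induction(game)
    print(strategy)   # {'C': (2, 'E'), 'D': (2, 'H'), '': (1, 'C')}  ->  the pair (C, EH)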

Extension of backward induction: In any game in which the procedure selects a single action for the player
who moves at the start of each subgame, the strategy profile selected is the unique subgame perfect equilibrium of the
game (a complete proof is not trivial!). What happens in a game in which at the start of some subgames more than one
action is optimal? An extension of the procedure of backward induction locates all subgame perfect equilibria. This
extension traces back separately the implications for behavior in the longer subgames of every combination of optimal
actions in the shorter subgames.

Procedure of backward induction:


Backward induction may be described compactly for an arbitrary game as follows:
Step 1: Find, for each subgame of length 1, the set of optimal actions of the player who moves first. Index the subgames by j, and denote by S*_j(1) the set of optimal actions in subgame j. (If the player who moves first in subgame j has a unique optimal action, then S*_j(1) contains a single action.)

Step 2: For each combination of actions consisting of one from each set S*_j(1), find, for each subgame of length two, the set of optimal actions of the player who moves first. The result is a set of strategy profiles for each subgame of length two. Denote by S*_ℓ(2) the set of strategy profiles in subgame ℓ.

Steps 3, 4, ...: Continue by examining successively longer subgames until reaching the start of the game. At each stage k, for each combination of strategy profiles consisting of one from each set S*_p(k − 1) constructed in the previous stage, find, for each subgame of length k, the set of optimal actions of the player who moves first, and hence a set of strategy profiles for each subgame of length k.


Subgame perfect equilibrium and backward induction: The set of strategy profiles that the procedure of
backward induction yields for the whole game is the set of subgame perfect equilibria of the game.
Proposition: The set of subgame perfect equilibria of a finite horizon extensive game with perfect information is equal
to the set of strategy profiles isolated by the procedure of backward induction. Note: A complete proof is not trivial.

Existence of subgame perfect equilibrium: Proposition: Every finite extensive game with perfect
information has a subgame perfect equilibrium.
Proof: A finite game not only has a finite horizon, but also a finite number of terminal histories. The player
who moves first in any subgame has finitely many actions; at least one action is optimal. Thus in such a game
the procedure of backward induction isolates at least one strategy profile. Using the Proposition stated before,
we conclude that every finite game has a subgame perfect equilibrium. This existence result does not claim
that a finite extensive game has a single subgame perfect equilibrium. A finite horizon game in which some
player does not have finitely many actions after some history may or may not possess a subgame perfect
equilibrium.
A simple example of a game that does not have a subgame perfect equilibrium: Consider the
trivial game in which a single player chooses a number less than 1 and receives a payoff equal to the
number she chooses. There is no greatest number less than one, so the single player has no optimal
action. Thus the game has no subgame perfect equilibrium.


3.8/ Illustrations

Illustrations: 1) The ultimatum game, 2) The holdup game, 3) Stackelberg's duopoly game, 4) Buying votes, 5) Ticktacktoe and chess.

The ultimatum game: Bargaining over the division of a pie may naturally be modeled as an extensive game: Two
people use the following procedure to split $c: Person 1 offers person 2 an amount of money up to $c. If 2 accepts this
offer then 1 receives the remainder of the $c. If 2 rejects the offer then neither person receives any payoff. Each person
cares only about the amount of money she receives, and (naturally!) prefers to receive as much as possible. Assume that
the amount person 1 offers can be any number, not necessarily an integral number of cents. Then the procedure can be
modeled by an extensive game, known as the ultimatum game.
Players: The two people.
Terminal histories: The set of sequences (x, Z), where x is a number with 0 ≤ x ≤ c and Z is either Y (“yes, I accept”)
or N (“no, I reject”).
Player function: P(∅) = 1 and P(x) = 2 for all x.
Preferences: Preferences are represented by payoffs equal to the amounts of money one receives. For the terminal history
(x, Y ) person 1 receives c − x and person 2 receives x. For the terminal history (x, N) each person receives 0.

The game has a finite horizon, so we can use backward induction to find its subgame perfect equilibria. Subgames of
length 1: person 2 either accepts or rejects an offer of person 1. For every possible offer of person 1, there is such a
subgame. In the subgame that follows an offer x of person 1 for which x > 0, person 2’s optimal action is to accept (if
she rejects, she gets nothing). In the subgame that follows the offer x = 0, person 2 is indifferent between accepting and
rejecting. Thus in a subgame perfect equilibrium person 2’s strategy either accepts all offers (including 0), or accepts all
offers x > 0 and rejects the offer x = 0.

Now consider the whole game: for each possible subgame perfect equilibrium strategy of person 2, we need to find the
optimal strategy of person 1. If person 2 accepts all offers (including 0), then person 1’s optimal offer is 0 (which yields
her the payoff c). If person 2 accepts all offers except zero, then no offer of person 1 is optimal! No offer x > 0 is optimal,
because the offer x/2 (for example) is better, given that person 2 accepts both offers. An offer of 0 is not optimal because
person 2 rejects it, leading to a payoff of 0 for person 1, who is thus better off offering any positive amount less than c.
Conclusion: The only subgame perfect equilibrium of the game is the strategy pair in which person 1 offers 0 and person
2 accepts all offers. In this equilibrium, person 1’s payoff is c and person 2’s payoff is zero.

The holdup game: Before engaging in an ultimatum game in which she may accept or reject an offer of person 1, person 2 takes an action that affects the size c of the pie to be divided. She may exert little effort, resulting in a small pie, of size cL, or great effort, resulting in a large pie, of size cH. She dislikes exerting effort. Specifically, assume that her payoff is x − E if her share of the pie is x, where E = L if she exerts little effort and E = H > L if she exerts great effort. The extensive game that models this situation is known as the holdup game.
Subgame perfect equilibrium: Each subgame that follows person 2's choice of effort is an ultimatum game, and thus has a unique subgame perfect equilibrium, in which person 1 offers 0 and person 2 accepts all offers. Now consider person 2's choice of effort at the start of the game. If she chooses L then her payoff, given the outcome in the following subgame, is −L. If she chooses H then her payoff is −H. Consequently she chooses L. Thus the game has a unique subgame perfect equilibrium, in which person 2 exerts little effort and person 1 obtains all of the resulting small pie. This equilibrium does not depend on the values of cL, cH, L, and H (given that H > L). Even if cH is much larger than cL, but H is only slightly larger than L, person 2 exerts little effort in the equilibrium, although both players could be much better off if person 2 were to exert great effort and were to obtain some of the extra pie. No such superior outcome is sustainable in an equilibrium because person 2, having exerted great effort, may be "held up" for the entire pie by person 1.
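A one-line check (not part of the original notes) of person 2's effort choice, given that she receives a share of 0 in the ultimatum game that follows either effort level:

    # Person 2's payoff is her share (0 in every continuation equilibrium)
    # minus the cost of effort, so she compares -L with -H.
    def effort_choice(L, H):
        return "little" if -L >= -H else "great"

    print(effort_choice(L=1, H=2))   # 'little', for any H > L and any cL, cH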


3.8.1/ Stackelberg’s duopoly game

Stackelberg’s duopoly game: Consider a market in which there are two firms, both producing the same good.
Firm i’s cost of producing qi units of the good is Ci(qi). The price at which output is sold when the total output is Q is Pd
(Q). Each firm’s strategic variable is output, but the firms make their decisions sequentially, rather than simultaneously:
one firm chooses its output, then the other firm does so, knowing the output chosen by the first firm. This situation can
be modeled by the Stackelberg’s duopoly game.
General model:
Players: The two firms.
Terminal histories: The set of all sequences (q1, q2) of outputs for the firms (each qi is a nonnegative number).
Player function: P(∅) = 1 and P(q1) = 2 for all q1.
Preferences: The payoff of firm i to the terminal history (q1, q2) is its profit qiPd (q1 + q2) − Ci(qi) for i = 1, 2 .
Note: Firm 1 moves at the start of the game, thus a strategy of firm 1 is simply an output. Firm 2 moves after
every history in which firm 1 chooses an output, thus a strategy of firm 2 is a function that associates an output
for firm 2 with each possible output of firm 1.
Backwards induction: The game has a finite horizon, so we may use backward induction. Suppose that for each output q1 of firm 1 there is one output b2(q1) of firm 2 that maximizes its profit. Then in any subgame perfect equilibrium, firm 2's strategy is b2. Given the strategy of firm 2, when firm 1 chooses the output q1, firm 2 chooses the output b2(q1), resulting in a total output of q1 + b2(q1), and a price of Pd(q1 + b2(q1)). Thus firm 1's output in a subgame perfect equilibrium is a value of q1 that maximizes q1 Pd(q1 + b2(q1)) − C1(q1). Suppose that there is one such value of q1, denoted q1*. We conclude: if firm 2 has a unique best response b2(q1) to each output q1, and firm 1 has a unique best action q1*, given firm 2's best responses, then the subgame perfect equilibrium of the game is (q1*, b2). The output chosen by firm 2, given firm 1's equilibrium strategy, is q2* = b2(q1*). When firm 1 chooses any output q1, the outcome, given that firm 2 uses its equilibrium strategy, is the pair of outputs (q1, b2(q1)). As firm 1 varies its output, the outcome varies along firm 2's best response function b2. Thus we can characterize the subgame perfect equilibrium outcome (q1*, q2*) as the point on firm 2's best response function that maximizes firm 1's profit.
Example: constant unit cost and linear inverse demand. (The worked example appeared here as a figure that is not reproduced; a brief sketch follows.)
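A minimal sketch of the omitted example, assuming inverse demand P(Q) = a − Q (for Q ≤ a) and a common constant unit cost c < a; the symbols a and c are assumptions standing in for the figure's parameters. Backward induction gives b2(q1) = (a − c − q1)/2, q1* = (a − c)/2 and q2* = (a − c)/4, which the following sympy computation reproduces:

    # Backward induction in Stackelberg's duopoly with linear inverse demand
    # P(Q) = a - Q and a common constant unit cost c (assumed parameters).
    import sympy as sp

    q1, q2, a, c = sp.symbols('q1 q2 a c', positive=True)

    # Follower: best response b2(q1) maximizes firm 2's profit given q1.
    profit2 = q2 * (a - q1 - q2) - c * q2
    b2 = sp.solve(sp.diff(profit2, q2), q2)[0]        # (a - c - q1)/2

    # Leader: maximizes its own profit anticipating b2(q1).
    profit1 = q1 * (a - q1 - b2) - c * q1
    q1_star = sp.solve(sp.diff(profit1, q1), q1)[0]   # (a - c)/2
    q2_star = sp.simplify(b2.subs(q1, q1_star))       # (a - c)/4

    print(q1_star, q2_star)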

Buying votes: A legislature has k members, where k is an odd number. Two rival bills, X and Y , are being considered.
The bill that attracts the votes of a majority of legislators will pass. Interest group X favors bill X, whereas interest group
Y favors bill Y . Each group wishes to entice a majority of legislators to vote for its favorite bill. First interest group X
gives an amount of money (possibly zero) to each legislator, then interest group Y does so. Each interest group wishes to
spend as little as possible. Group X values the passing of bill X at VX > 0 and the passing of bill Y at zero, and group Y
values the passing of bill Y at VY > 0 and the passing of bill X at zero. Each legislator votes for the favored bill of the
interest group that offers her the most money; a legislator to whom both groups offer the same amount of money votes
for bill Y (an arbitrary simplifying assumption). For example, if k = 3, the amounts offered to the legislators by group X
are x = (100, 50, 0), and the amounts offered by group Y are y = (100, 0, 50), then legislators 1 and 3 vote for Y and
legislator 2 votes for X, so that Y passes. This situation can be modeled as an extensive game with perfect information.
Extensive game:
Players: The two interest groups, X and Y .
Terminal histories: The set of all sequences (x, y), where x is a list of payments to legislators made by interest
group X and y is a list of payments to legislators made by interest group Y (x and y are lists of
k nonnegative integers).
Player function: P(∅) = X and P(x) = Y for all x.
Preferences: The preferences of interest group X are represented by a payoff function equal to VX minus the sum of X's payments if bill X passes, and equal to minus the sum of X's payments if bill Y passes, where bill Y passes after the terminal history (x, y) iff the number of components of y that are at least equal to the corresponding components of x is at least (k + 1)/2 (a bare majority). The preferences of Y are represented by the analogous function.


Example 1: Suppose that k = 3 and VX = VY = 300. The most group X is willing to pay to get bill X passed is 300.
For any payments it makes to the three legislators that sum to at most 300, two of the payments sum to at most 200, so
that if group Y matches these payments it spends less than VY = 300 and gets bill Y passed. Thus in any subgame perfect
equilibrium group X makes no payments, group Y makes no payments, and (given the tie-breaking rule) bill Y is passed.
Example 2: Now suppose k = 3, VX = 300, and VY = 100. By paying each legislator more than 50, group X makes matching payments by group Y unprofitable: only by spending more than VY = 100 can group Y cause bill Y to be passed. However, there is no subgame perfect equilibrium in which group X pays each legislator more than 50, because it can always pay a little less and still prevent group Y from profitably matching. In the only subgame perfect equilibrium
group X pays each legislator exactly 50, and group Y makes no payments. Given group X’s action, group Y is indifferent
between matching X’s payments (so that bill Y is passed), and making no payments. However, there is no subgame
perfect equilibrium in which group Y matches group X’s payments, because then group X could increase its payments a
little, making matching payments by group Y unprofitable.
Subgame perfect equilibria: For arbitrary values of the parameters the subgame perfect equilibrium outcome takes
one of the forms in these two examples: 1 either no payments are made and bill Y is passed, or 2 group X makes payments
that group Y does not wish to match, group Y makes no payments, and bill X is passed. To find the subgame perfect
equilibria in general, we may use backward induction. First consider group Y ’s best response to an arbitrary strategy
x of group X. Let µ = (k + 1)/2 and denote by mx the sum of the smallest µ components of x. If mx < VY then group Y
can buy off a bare majority of legislators for less than VY , so that its best response to x is to match group X’s payments
to the µ legislators to whom group X’s payments are smallest. The outcome is that bill Y is passed. If mx > VY then the
cost to group Y of buying off any majority of legislators exceeds VY , so that group Y ’s best response to x is to make no
payments; the outcome is that bill X is passed. If mx = VY then both the actions in the previous two cases are best
responses by group Y to x. We conclude that group Y ’s strategy in a subgame perfect equilibrium has the following
properties: After a history x for which mx < VY , group Y matches group X’s payments to the µ legislators to whom X’s
payments are smallest. After a history x for which mx > VY , group Y makes no payments. After a history x for which
mx = VY , group Y either makes no payments or matches group X’s payments to the µ legislators to whom X’s payments
are smallest.

Given the properties of group Y ’s subgame perfect equilibrium strategy, what should X do? If it chooses a list of
payments x for which mx < VY then group Y matches its payments to a bare majority of legislators, and bill Y passes. If
it reduces all its payments, the same bill is passed. Thus the only list of payments x with mx < VY that may be optimal
is (0, . . . , 0). If it chooses a list of payments x with mx > VY then group Y makes no payments, and bill X passes. If it
reduces all its payments a little (keeping the payments to every bare majority greater than VY ), the outcome is the same.
Thus no list of payments x for which mx > VY is optimal.
Conclusion: In any subgame perfect equilibrium we have 1 either x = (0, . . . , 0) (group X makes no payments) 2 or mx
= VY (the smallest sum of group X’s payments to a bare majority of legislators is VY ). Under what conditions does each
case occur? If group X needs to spend more than VX to deter group Y from matching its payments to a bare majority of
legislators, then its best strategy is to make no payments (x = (0, . . . , 0)). How much does it need to spend to deter group
Y ? It needs to pay more than VY to every bare majority of legislators, so it needs to pay each legislator more than VY
/µ in which case its total payment is more than kVY /µ. Thus if VX < kVY /µ, group X is better off making no payments
than getting bill X passed by making payments large enough to deter group Y from matching its payments to a bare
majority of legislators. If VX > kVY /µ, group X can afford to make payments large enough to deter group Y from
matching. In this case its best strategy is to pay each legislator VY /µ, so that its total payment to every bare majority of
legislators is VY . Given this strategy, group Y is indifferent between matching group X’s payments to a bare majority of
legislators and making no payments. The game has no subgame perfect equilibrium in which group Y matches (the
argument is similar to the argument that the ultimatum game has no subgame perfect equilibrium in

Martin J. Osborne, An introduction to game theory, International Edition, 2012,Oxford University Press,USA.
249

3.8.1/ Stackelberg’s duopoly game

which person 2 rejects the offer 0). Thus in any subgame perfect equilibrium group Y makes no payments in response to
group X’s strategy.

Summing up: if VX ≠ kVY/µ then the game has a unique subgame perfect equilibrium, in which group Y's strategy is to match group X's payments to the µ legislators to whom X's payments are smallest after a history x for which mx < VY, and to make no payments after a history x for which mx ≥ VY; group X's strategy depends on the relative sizes of VX and VY: if VX < kVY/µ then group X makes no payments, while if VX > kVY/µ then group X pays each legislator VY/µ. If VX < kVY/µ then the outcome is that neither group makes any payment, and bill Y is passed; if VX > kVY/µ then the outcome is that group X pays each legislator VY/µ, group Y makes no payments, and bill X is passed. (If VX = kVY/µ then the analysis is more complex.)
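The equilibrium logic above can be summarized in a few lines of Python (a sketch, not part of the original notes); the function below reproduces the outcomes of Examples 1 and 2:

    def equilibrium_outcome(k, VX, VY):
        # mu = size of a bare majority; k*VY/mu = cost to X of deterring Y.
        mu = (k + 1) // 2
        if VX > k * VY / mu:
            # X pays every legislator VY/mu; Y does not match and bill X passes.
            return {"x": [VY / mu] * k, "y": [0] * k, "passes": "X"}
        # Otherwise deterrence is too expensive: no payments and bill Y passes.
        return {"x": [0] * k, "y": [0] * k, "passes": "Y"}

    print(equilibrium_outcome(3, 300, 100))   # X pays each legislator 50, bill X passes
    print(equilibrium_outcome(3, 300, 300))   # no payments, bill Y passes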

Features of the subgame perfect equilibrium: (1) The outcome favors the second mover in the game (group Y): group X manages to get bill X passed only if VX > kVY/µ, which is close to 2VY when k is large. (2) Group Y never makes any payments! According to its equilibrium strategy it is prepared to make payments in response to certain strategies of group X, but given group X's equilibrium strategy, it spends not a cent. (3) If group X makes any payments (as it does in the equilibrium for VX > kVY/µ) then it makes a payment to every legislator.

Ticktacktoe and Chess: Ticktacktoe, chess, and related games may be modeled as extensive games with perfect
information. A history is a sequence of moves, and each player prefers winning to tying and tying to losing. Both ticktacktoe
and chess may be modeled as finite games, so each game has a subgame perfect equilibrium. Both games are strictly
competitive games: in every outcome, either one player loses and the other wins, or the players draw. For such games
all Nash equilibria yield the same outcome. Further, a player’s Nash equilibrium strategy yields at least her equilibrium
payoff, regardless of the other players’ strategies. Because any subgame perfect equilibrium is a Nash equilibrium, the
same is true for subgame perfect equilibrium strategies.
We conclude that in ticktacktoe and chess: (1) either one of the players has a strategy that guarantees she wins, or (2) each player has a strategy that guarantees at worst a draw. In ticktacktoe we know that (2) is true. Chess is more subtle: it is
not known whether White has a strategy that guarantees it wins, or Black has a strategy that guarantees it wins, or each
player has a strategy that guarantees at worst a draw. The empirical evidence suggests that Black does not have a winning
strategy, but this result has not been proved. When will a subgame perfect equilibrium of chess be found?

University Carlos III of Madrid, Department of Economics
Game Theory: Problem set 5: Bayesian Games. Solutions.

Problem 1: Consider a Cournot duopoly which operates in a market with the following inverse demand function:

P(Q) = 90 − Q if Q ≤ 90, and P(Q) = 0 if Q > 90,

where Q = q1 + q2 is the total output in the market. The cost of firm 2 is c2(q2) = 9q2 with probability 1/3 and c2(q2) = 27q2 with probability 2/3. The cost of firm 1 is c1(q1) = 18q1. Firm 2 knows its own cost, but firm 1 only knows the possible types of costs of firm 2 and their probabilities. The above description is common knowledge.

1. Represent the above situation as a Bayesian game. That is, describe the set of players, their types, the sets of strategies, their beliefs and their utilities.
Solution: There are two players, N = {1, 2}. Their types are T1 = {r}, T2 = {cl, ch}, where cl represents the situation in which firm 2 knows that its marginal cost is cl = 9 and ch the situation in which firm 2 knows that its marginal cost is ch = 27. The sets of strategies are S1 = [0, ∞) and S2 = [0, ∞) × [0, ∞) = {(sl, sh) : sl, sh ∈ [0, ∞)}. Here sl (resp. sh) represents the strategy of firm 2 when it knows that its marginal cost is cl = 9 (resp. ch = 27). The beliefs of the players are

p1(cl|r) = 1/3,   p1(ch|r) = 2/3,   p2(r|cl) = p2(r|ch) = 1.

The utilities of the players are

u1(q1, ql, qh|r) = (1/3)(72 − q1 − ql)q1 + (2/3)(72 − q1 − qh)q1,
u2(q1, ql|cl) = (81 − q1 − ql)ql,
u2(q1, qh|ch) = (63 − q1 − qh)qh.

2. Compute the Bayesian equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of type cl of firm 2 is the solution of the maximization problem

max_{ql} ql(81 − q1 − ql),

whose solution is

ql = (81 − q1)/2.    (0.1)

The best response of type ch of firm 2 is the solution of the maximization problem

max_{qh} qh(63 − q1 − qh),

whose solution is

qh = (63 − q1)/2.    (0.2)

The best response of firm 1 is the solution of the maximization problem

max_{q1} (1/3)(72 − q1 − ql)q1 + (2/3)(72 − q1 − qh)q1,

whose solution is

q1 = (216 − ql − 2qh)/6.    (0.3)

The Bayesian–Nash equilibrium is the solution to equations (0.1), (0.2) and (0.3). We obtain

q1* = 25,   ql* = 28,   qh* = 19.

The profits are

Π1* = 625,   Πl* = 784,   Πh* = 361.
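As a numerical check (not part of the original solution), the three linear best-response equations (0.1)–(0.3) can be solved with numpy:

    import numpy as np

    # Unknowns ordered as (q1, ql, qh); each best response rewritten as a linear equation:
    #   q1 + 2*ql        = 81    (from (0.1))
    #   q1 + 2*qh        = 63    (from (0.2))
    #   6*q1 + ql + 2*qh = 216   (from (0.3))
    A = np.array([[1.0, 2.0, 0.0],
                  [1.0, 0.0, 2.0],
                  [6.0, 1.0, 2.0]])
    b = np.array([81.0, 63.0, 216.0])
    q1, ql, qh = np.linalg.solve(A, b)
    print(q1, ql, qh)   # 25.0 28.0 19.0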

3. Suppose now that firm 1 knows that the cost of firm 2 is c2(q2) = 9q2. Compute the Nash equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of firm 2 (with marginal cost 9) is the solution of the maximization problem

max_{ql} ql(81 − q1 − ql),

whose solution is

ql = (81 − q1)/2.    (0.4)

The best response of firm 1 is the solution of the maximization problem

max_{q1} (72 − q1 − ql)q1,

whose solution is

q1 = (72 − ql)/2.    (0.5)

The Nash equilibrium is the solution to equations (0.4) and (0.5). We obtain

q̄1 = 21,   q̄l = 30.

The profits are Π̄1 = 441, Π̄l = 900.

4. Suppose now that firm 1 knows that the cost of firm 2 is c2(q2) = 27q2. Compute the Nash equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of firm 2 (with marginal cost 27) is the solution of the maximization problem

max_{qh} qh(63 − q1 − qh),

whose solution is

qh = (63 − q1)/2.    (0.6)

The best response of firm 1 is

q1 = (72 − qh)/2.

The Nash equilibrium is the solution to equation (0.6) and the best response above. We obtain

q̃1 = 27,   q̃h = 18.

The profits are Π̃1 = 729, Π̃h = 324.

5. In view of the above computations there is one type of firm 2 that prefers complete information and one type that prefers the situation with incomplete information. Identify those types.
Solution: Type cl of firm 2 produces ql = 28 and obtains a profit of Πl = 784 with incomplete information, and produces ql = 30 and obtains a profit of Πl = 900 with complete information. It prefers the situation with complete information. This firm would benefit if it could credibly inform firm 1 that the cost of firm 2 is low (cl).
Type ch of firm 2 produces qh = 19 and obtains a profit of Πh = 361 with incomplete information, and produces qh = 18 and obtains a profit of Πh = 324 with complete information. It prefers the situation with incomplete information. This firm prefers that firm 1 either is not informed or (even better) believes (wrongly) that the cost of firm 2 is low (cl).
Finally, firm 1 prefers the situation with incomplete information when it faces firm 2 with cost cl, and it prefers the situation with complete information when it faces firm 2 with cost ch.

Complete information cl      Incomplete information      Complete information ch
q̄1 = 21       <       q1* = 25       <       q̃1 = 27
Π̄1 = 441      <       Π1* = 625      <       Π̃1 = 729
q̄l = 30       >       ql* = 28
Π̄l = 900      >       Πl* = 784
                       qh* = 19       >       q̃h = 18
                       Πh* = 361      >       Π̃h = 324

Problem 2: Consider a Cournot duopoly which operates in a market with the following inverse demand function:

P(Q) = 60 − Q if Q ≤ 60, and P(Q) = 0 if Q > 60,

where Q = q1 + q2 is the total output in the market. The cost of firm 2 is c2(q2) = 12q2 with probability 1/4 and c2(q2) = 24q2 with probability 3/4. The cost of firm 1 is c1(q1) = 18q1. Firm 2 knows its own cost, but firm 1 only knows the possible types of costs of firm 2 and their probabilities. The above description is common knowledge.

1. Represent the above situation as a Bayesian game. That is, describe the set of players, their types, the sets of strategies, their beliefs and their utilities.
Solution: There are two players, N = {1, 2}. Their types are T1 = {r}, T2 = {cl, ch}, where cl represents the situation in which firm 2 knows that its marginal cost is cl = 12 and ch the situation in which firm 2 knows that its marginal cost is ch = 24. The sets of strategies are S1 = [0, ∞) and S2 = [0, ∞) × [0, ∞) = {(sl, sh) : sl, sh ∈ [0, ∞)}. Here sl (resp. sh) represents the strategy of firm 2 when it knows that its marginal cost is cl = 12 (resp. ch = 24). The beliefs of the players are

p1(cl|r) = 1/4,   p1(ch|r) = 3/4,   p2(r|cl) = p2(r|ch) = 1.

The utilities of the players are

u1(q1, ql, qh|r) = (1/4)(42 − q1 − ql)q1 + (3/4)(42 − q1 − qh)q1,
u2(q1, ql|cl) = (48 − q1 − ql)ql,
u2(q1, qh|ch) = (36 − q1 − qh)qh.

2. Compute the Bayesian equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of type cl of firm 2 is the solution of the maximization problem

max_{ql} ql(48 − q1 − ql),

whose solution is

ql = (48 − q1)/2.    (0.7)

The best response of type ch of firm 2 is the solution of the maximization problem

max_{qh} qh(36 − q1 − qh),

whose solution is

qh = (36 − q1)/2.    (0.8)

The best response of firm 1 is the solution of the maximization problem

max_{q1} (1/4)(42 − q1 − ql)q1 + (3/4)(42 − q1 − qh)q1,

whose solution is

q1 = (168 − ql − 3qh)/8.    (0.9)

The Bayesian–Nash equilibrium is the solution to equations (0.7), (0.8) and (0.9). We obtain

q1* = 15,   ql* = 33/2,   qh* = 21/2.

The profits are

Π1* = 225,   Πl* = 1089/4,   Πh* = 441/4.

3. Suppose now that firm 1 knows that the cost of firm 2 is c2(q2) = 12q2. Compute the Nash equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of firm 2 (with marginal cost 12) is the solution of the maximization problem

max_{q2} q2(48 − q1 − q2),

whose solution is

q2 = (48 − q1)/2.    (0.10)

The best response of firm 1 is the solution of the maximization problem

max_{q1} (42 − q1 − q2)q1,

whose solution is

q1 = (42 − q2)/2.    (0.11)

The Nash equilibrium is the solution to equations (0.10) and (0.11). We obtain

q̄1 = 12,   q̄l = 18.

The profits are Π̄1 = 144, Π̄l = 324.

4. Suppose now that firm 1 knows that the cost of firm 2 is c2(q2) = 24q2. Compute the Nash equilibrium and the profits of the firms in this equilibrium.
Solution: The best response of firm 2 (with marginal cost 24) is the solution of the maximization problem

max_{q2} q2(36 − q1 − q2),

whose solution is

q2 = (36 − q1)/2.    (0.12)

The best response of firm 1 is the same as in (0.11). The Nash equilibrium is the solution to equations (0.11) and (0.12). We obtain

q̃1 = 16,   q̃2 = 10.

The profits are Π̃1 = 256, Π̃h = 100.

5. In view of the above computations there is one type of firm 2 that prefers complete information and one type that prefers the situation with incomplete information. Identify those types.
Solution: Type cl of firm 2 produces ql = 33/2 and obtains a profit of Πl = 1089/4 with incomplete information, and produces ql = 18 and obtains a profit of Πl = 324 with complete information. It prefers the situation with complete information. This firm would benefit if it could credibly inform firm 1 that the cost of firm 2 is low (cl).
Type ch of firm 2 produces qh = 21/2 and obtains a profit of Πh = 441/4 with incomplete information, and produces qh = 10 and obtains a profit of Πh = 100 with complete information. It prefers the situation with incomplete information. This firm prefers that firm 1 either is not informed or (even better) believes (wrongly) that the cost of firm 2 is low (cl).
Finally, firm 1 prefers the situation with incomplete information when it faces firm 2 with cost cl, and it prefers the situation with complete information when it faces firm 2 with cost ch.

Complete information cl      Incomplete information      Complete information ch
q̄1 = 12       <       q1* = 15        <       q̃1 = 16
Π̄1 = 144      <       Π1* = 225       <       Π̃1 = 256
q̄l = 18       >       ql* = 33/2
Π̄l = 324      >       Πl* = 1089/4
                       qh* = 21/2      >       q̃h = 10
                       Πh* = 441/4     >       Π̃h = 100

Problem 3: Consider a Cournot duopoly which operates in a market with inverse demand function
P (q) = a − q, where q = q1 + q2 is total output in the market. Firm 2 knows if the value of total demand
a is high (a = ah = 27) or low (a = al = 9). The value of the parameter a is uncertain for firm 1. Firm 1
believes that with probability 2/3 it could be high (a = ah = 27) and with probability 1/3 it could be low
(a = al = 9). All the above is common knowledge and both firms choose simultaneously their production
plans. Both firms have zero cost.

1. Represent this situation as a bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = {r}, T2 = {al, ah}, where al represents the situation in which firm 2 knows that a = al = 9 and ah the situation in which firm 2 knows that total demand is a = ah = 27. The sets of strategies are S1 = [0, ∞) and S2 = [0, ∞) × [0, ∞) = {(ql, qh) : ql, qh ∈ [0, ∞)}. Here ql (resp. qh) represents the strategy of firm 2 when it knows that the demand is al = 9 (resp. ah = 27). The beliefs are as follows:

p1(al|r) = 1/3,   p1(ah|r) = 2/3,   p2(r|al) = p2(r|ah) = 1.

The utilities of the players are

u1(q1, ql, qh|r) = (1/3)(9 − q1 − ql)q1 + (2/3)(27 − q1 − qh)q1,
u2(q1, ql|al) = (9 − q1 − ql)ql,
u2(q1, qh|ah) = (27 − q1 − qh)qh.

2. Compute the Bayesian equilibrium and the profits in equilibrium.
Solution: The best response of type al of firm 2 is the solution of the maximization problem

max_{ql} ql(9 − q1 − ql),

whose solution is

ql = (9 − q1)/2.    (0.13)

The best response of type ah of firm 2 is the solution of the maximization problem

max_{qh} qh(27 − q1 − qh),

whose solution is

qh = (27 − q1)/2.    (0.14)

The best response of firm 1 is the solution of the maximization problem

max_{q1} (1/3)(9 − q1 − ql)q1 + (2/3)(27 − q1 − qh)q1,

whose solution is

q1 = (63 − ql − 2qh)/6.    (0.15)

The Bayesian–Nash equilibrium is the solution to equations (0.13), (0.14) and (0.15). We obtain

q1* = 7,   ql* = 1,   qh* = 10.

The profits are

Π1* = 49,   Πl* = 1,   Πh* = 100.

3. Compare the above result with the one in which firm 1 knows the value of a.
Solution: Suppose first that firm 1 knows that the demand is a = al = 9. Then the best response of firm 2 is given by (0.13). The best response of firm 1 is the solution of the maximization problem

max_{q1} (9 − q1 − q2)q1,

whose solution is

q1 = (9 − q2)/2.    (0.16)

The Nash equilibrium is the solution to equations (0.13) and (0.16). We obtain

q̄1 = q̄l = 3.

The profits are Π̄1 = Π̄l = 9.

Suppose now that firm 1 knows that the demand is a = ah = 27. Then the best response of firm 2 is given by (0.14). The best response of firm 1 is the solution of the maximization problem

max_{q1} (27 − q1 − q2)q1,

whose solution is

q1 = (27 − q2)/2.    (0.17)

The Nash equilibrium is the solution to equations (0.14) and (0.17). We obtain

q̃1 = q̃h = 9.

The profits are Π̃1 = Π̃h = 81.

Complete information al      Incomplete information      Complete information ah
q̄1 = 3       <       q1* = 7       <       q̃1 = 9
Π̄1 = 9       <       Π1* = 49      <       Π̃1 = 81
q̄l = 3       >       ql* = 1
Π̄l = 9       >       Πl* = 1
                      qh* = 10      >       q̃h = 9
                      Πh* = 100     >       Π̃h = 81

Problem 4: (The battle of the sexes) A couple is deciding whether to go to the soccer match or to the ballet. Each of the partners has to make the decision simultaneously and independently (they cannot communicate). She (player 1) likes soccer very much, but would prefer to go with her partner. He (player 2) enjoys ballet much more than soccer. Sometimes he prefers to go with his partner, but some other times he prefers to go alone (you can imagine the reasons). He knows his mood tonight, but she does not. She thinks that the probability that he would enjoy her company tonight is 1/2. The situation is summarized in the following tables.

He prefers her company (table A):
                 He
               S       B
She   S       4,2     0,0
      B       0,0     2,4

He prefers to be alone (table B):
                 He
               S       B
She   S       4,0     0,4
      B       0,2     2,0

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = {c}, T2 = {a, b}, where a represents the situation in which agent 2 knows that the payoffs are those in table A and b represents the situation in which agent 2 knows that the payoffs are those in table B. The sets of strategies are S1 = {S, B}, S2 = {SS, SB, BS, BB}. The beliefs of the players are

p1(a|c) = p1(b|c) = 1/2,   p2(c|a) = p2(c|b) = 1.

The utilities of the players are given by the above tables. For example, u1(B, B|c, a) = 2, u2(S, B|c, b) = 4.
(b) Find the Bayesian equilibria.
Solution: Note that BR2(S) = SB and BR2(B) = BS. Moreover,

u1(S, SB) = (1/2) × 4 + (1/2) × 0 = 2,   u1(B, SB) = (1/2) × 0 + (1/2) × 2 = 1,
u1(S, BS) = (1/2) × 0 + (1/2) × 4 = 2,   u1(B, BS) = (1/2) × 2 + (1/2) × 0 = 1.

Hence, BR1(SB) = S and BR1(BS) = S. Therefore, the BNE in pure strategies is (S, SB).
We compute now the BNE in mixed strategies. Suppose player 1 uses the mixed strategy xS + (1 − x)B, player 2a uses the mixed strategy yS + (1 − y)B and player 2b uses the mixed strategy zS + (1 − z)B. The relevant payoff tables are

            P2 (type a)                        P2 (type b)
             y       1 − y                      z       1 − z
             S         B                        S         B
P1    x  S  4,2       0,0             P1    x  S  4,0       0,4
    1−x  B  0,0       2,4                 1−x  B  0,2       2,0

Note that

u2(xS + (1 − x)B, S|a) = 2x,   u2(xS + (1 − x)B, B|a) = 4(1 − x).

Hence, player 2a is indifferent between the strategies S and B if and only if 2x = 4(1 − x), that is, if and only if x = 2/3. Now, given this value of x,

u2(xS + (1 − x)B, S|b)|_{x=2/3} = 2(1 − x)|_{x=2/3} = 2/3,
u2(xS + (1 − x)B, B|b)|_{x=2/3} = 4x|_{x=2/3} = 8/3.

Hence,

BR2((2/3)S + (1/3)B | a) = {S, B},
BR2((2/3)S + (1/3)B | b) = B.

In other words, BR2((2/3)S + (1/3)B) = {BB, SB}. That is, if player 1 follows the strategy (2/3)S + (1/3)B, player 2a is indifferent between S and B and player 2b's best response is B. Thus, we look for a BNE of the form

((2/3)S + (1/3)B, (yS + (1 − y)B, B)).

If player 2 follows the above strategy, the expected payoffs of player 1 are

u1(S, (yS + (1 − y)B, B)) = (1/2)(4y + 0(1 − y)) + (1/2) × 0 = 2y,
u1(B, (yS + (1 − y)B, B)) = (1/2)(0y + 2(1 − y)) + (1/2) × 2 = 2 − y.

Hence, player 1 is indifferent between strategies S and B if and only if 2y = 2 − y, that is, if and only if y = 2/3. It is now easy to check that the strategy

((2/3)S + (1/3)B, ((2/3)S + (1/3)B, B))

is a BNE. The (expected) payoffs of the players are

u1((2/3)S + (1/3)B, ((2/3)S + (1/3)B, B)) = 4/3,
u2((2/3)S + (1/3)B, ((2/3)S + (1/3)B, B)) = 2.
We look now for another BNE in which player 2b uses a mixed strategy. Note that

u2(xS + (1 − x)B, S|b) = 2(1 − x),   u2(xS + (1 − x)B, B|b) = 4x.

Hence, player 2b is indifferent between the strategies S and B if and only if 2(1 − x) = 4x, that is, if and only if x = 1/3. Now, given this value of x,

u2(xS + (1 − x)B, S|a)|_{x=1/3} = 2x|_{x=1/3} = 2/3,
u2(xS + (1 − x)B, B|a)|_{x=1/3} = 4(1 − x)|_{x=1/3} = 8/3.

Hence,

BR2((1/3)S + (2/3)B | a) = B,
BR2((1/3)S + (2/3)B | b) = {S, B}.

In other words, BR2((1/3)S + (2/3)B) = {BB, BS}. That is, if player 1 follows the strategy (1/3)S + (2/3)B, player 2b is indifferent between S and B and player 2a's best response is B. Thus, we look for a BNE of the form

((1/3)S + (2/3)B, (B, zS + (1 − z)B)).

If player 2 follows the above strategy, the expected payoffs of player 1 are

u1(S, (B, zS + (1 − z)B)) = (1/2) × 0 + (1/2)(4z + 0(1 − z)) = 2z,
u1(B, (B, zS + (1 − z)B)) = (1/2) × 2 + (1/2)(0z + 2(1 − z)) = 2 − z.

Hence, player 1 is indifferent between strategies S and B if and only if 2z = 2 − z, that is, if and only if z = 2/3. It is now easy to check that the strategy

((1/3)S + (2/3)B, (B, (2/3)S + (1/3)B))

is a BNE. The (expected) payoffs of the players are

u1((1/3)S + (2/3)B, (B, (2/3)S + (1/3)B)) = 4/3,
u2((1/3)S + (2/3)B, (B, (2/3)S + (1/3)B)) = 2.

Solution 2: Another way to find this is to note that

u1(S, SS) = (1/2) × 4 + (1/2) × 4 = 4,   u1(B, SS) = (1/2) × 0 + (1/2) × 0 = 0,
u1(S, SB) = (1/2) × 4 + (1/2) × 0 = 2,   u1(B, SB) = (1/2) × 0 + (1/2) × 2 = 1,
u1(S, BS) = (1/2) × 0 + (1/2) × 4 = 2,   u1(B, BS) = (1/2) × 2 + (1/2) × 0 = 1,
u1(S, BB) = (1/2) × 0 + (1/2) × 0 = 0,   u1(B, BB) = (1/2) × 2 + (1/2) × 2 = 2,

and

u2(S, SS) = (1/2) × 2 + (1/2) × 0 = 1,   u2(B, SS) = (1/2) × 0 + (1/2) × 2 = 1,
u2(S, SB) = (1/2) × 2 + (1/2) × 4 = 3,   u2(B, SB) = (1/2) × 0 + (1/2) × 0 = 0,
u2(S, BS) = (1/2) × 0 + (1/2) × 0 = 0,   u2(B, BS) = (1/2) × 4 + (1/2) × 2 = 3,
u2(S, BB) = (1/2) × 0 + (1/2) × 4 = 2,   u2(B, BB) = (1/2) × 4 + (1/2) × 0 = 2.

Now, we construct the table of expected payoffs:

                     P2
            SS      SB      BS      BB
P1   S     4,1     2,3     2,0     0,2
     B     0,1     1,0     1,3     2,2

Hence, the BNE in pure strategies is (S, SB).

We find now the BNE in mixed strategies. Suppose again that player 1 uses the mixed strategy xS + (1 − x)B, player 2a uses the mixed strategy yS + (1 − y)B and player 2b uses the mixed strategy zS + (1 − z)B. Then

p(SS) = yz,   p(SB) = y(1 − z),   p(BS) = (1 − y)z,   p(BB) = (1 − y)(1 − z).

So we get the table of expected payoffs (the last column gives player 1's expected payoff from S and from B, and the last row gives player 2's expected payoff from each of her pure strategies):

                               P2
               yz        y(1−z)     (1−y)z     (1−y)(1−z)
               SS          SB         BS          BB
P1    x   S   4,1         2,3        2,0         0,2        |  2(y + z)
    1−x   B   0,1         1,0        1,3         2,2        |  2 − y − z
               1           3x       3(1−x)        2

Graphically: [figure omitted] player 2's expected payoffs against x, namely 1 (from SS), 3x (from SB), 3(1 − x) (from BS) and 2 (from BB), cross at x = 1/3 and x = 2/3.

We see that

BR2(x) = BS if 0 ≤ x < 1/3;  {BS, BB} if x = 1/3;  BB if 1/3 < x < 2/3;  {SB, BB} if x = 2/3;  SB if x > 2/3.

For 0 < x < 1/3, the best reply of player 2 is BS and BR1(BS) = S. But BR2(S) = SB. Hence, there is no BNE with 0 < x < 1/3.
For x = 1/3, the best reply of player 2, type a, is B and player 2, type b, is indifferent between S and B. Thus, we must have y = 0. Player 1 follows a mixed strategy only if 2(y + z) = 2 − y − z. Since y = 0, this is equivalent to 2z = 2 − z, that is, z = 2/3. We obtain the BNE

((1/3)S + (2/3)B, (B, (2/3)S + (1/3)B)).

For 1/3 < x < 2/3, the best reply of player 2 is BB. Since

u1(S, BB) = (1/2) × 0 + (1/2) × 0 = 0,   u1(B, BB) = (1/2) × 2 + (1/2) × 2 = 2,

we have that BR1(BB) = B. But BR2(B) = BS. Hence, there is no BNE with 1/3 < x < 2/3.
For x = 2/3, the best reply of player 2, type b, is B and player 2, type a, is indifferent between S and B. Thus, we must have z = 0. Player 1 follows a mixed strategy only if 2(y + z) = 2 − y − z. Since z = 0, this is equivalent to 2y = 2 − y, that is, y = 2/3. We obtain the BNE

((2/3)S + (1/3)B, ((2/3)S + (1/3)B, B)).

Finally, for 2/3 < x ≤ 1, the best reply of player 2 is SB. We have that BR1(SB) = S, that is, x = 1. Also, BR2(S) = SB and we recover the BNE (S, SB).
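As a check (not part of the original solution), the expected-payoff table and the pure-strategy BNE can be recomputed directly:

    # Payoff tables A (he prefers her company) and B (he prefers to be alone),
    # indexed by (her action, his action); each state has probability 1/2.
    table_a = {("S", "S"): (4, 2), ("S", "B"): (0, 0), ("B", "S"): (0, 0), ("B", "B"): (2, 4)}
    table_b = {("S", "S"): (4, 0), ("S", "B"): (0, 4), ("B", "S"): (0, 2), ("B", "B"): (2, 0)}

    s1_set = ["S", "B"]
    s2_set = [x + y for x in "SB" for y in "SB"]      # SS, SB, BS, BB

    def expected(s1, s2):
        ua = table_a[(s1, s2[0])]                     # state a: type a plays s2[0]
        ub = table_b[(s1, s2[1])]                     # state b: type b plays s2[1]
        return (0.5 * ua[0] + 0.5 * ub[0], 0.5 * ua[1] + 0.5 * ub[1])

    def is_bne(s1, s2):
        u1, u2 = expected(s1, s2)
        return (all(expected(t1, s2)[0] <= u1 for t1 in s1_set)
                and all(expected(s1, t2)[1] <= u2 for t2 in s2_set))

    print([(s1, s2) for s1 in s1_set for s2 in s2_set if is_bne(s1, s2)])   # [('S', 'SB')]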

Problem 5: Consider the situation in which player 2 knows what game is played (A or B below), but player 1 only knows that A is played with probability p and B is played with probability 1 − p.

Game A:
                Player 2
              S       B
Player 1  S  2,2     1,0
          B  1,5     0,2

Game B:
                Player 2
              S       B
Player 1  S  2,2     0,5
          B  0,0     4,2

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = {c}, T2 = {a, b}, where a represents the situation in which agent 2 knows that the payoffs are those in table A and b represents the situation in which agent 2 knows that the payoffs are those in table B. The sets of strategies are S1 = {S, B}, S2 = {SS, SB, BS, BB}. The beliefs of the players are

p1(a|c) = p,   p1(b|c) = 1 − p,   p2(c|a) = p2(c|b) = 1.

The utilities of the players are given by the above tables. For example, u1(B, B|c, a) = 0, u2(S, B|c, b) = 5.
(b) Find the Bayesian equilibria.
Solution: Note that strategy B is dominated by strategy S for player 2a, and strategy S is dominated by strategy B for player 2b. Hence, the BNE are of the form

(xS + (1 − x)B, SB),   x ∈ [0, 1].

We remark now that

u1(S, SB) = 2p,   u1(B, SB) = 4 − 3p.

Since 2p ≥ 4 − 3p if and only if p ≥ 4/5, we have that

BR1(SB) = S if p > 4/5;  {S, B} if p = 4/5;  B if p < 4/5.

Therefore the BNE are

(S, SB) if p > 4/5;
(xS + (1 − x)B, SB) with x ∈ [0, 1] if p = 4/5;
(B, SB) if p < 4/5.
Problem 6: Consider the situation in which player 1 knows what game is played (A or B below), but player 2 only knows that A is played with probability 1/3 and B is played with probability 2/3.

Game A:
                Player 2
              S       B
Player 1  S  1,1     0,0
          B  0,0     0,0

Game B:
                Player 2
              S       B
Player 1  S  0,0     0,0
          B  0,0     2,2

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = {a, b}, T2 = {c}, where a represents the situation in which agent 1 knows that the payoffs are those in table A and b represents the situation in which agent 1 knows that the payoffs are those in table B. The sets of strategies are S1 = {SS, SB, BS, BB}, S2 = {S, B}. The beliefs of the players are

p1(c|a) = p1(c|b) = 1,   p2(a|c) = 1/3,   p2(b|c) = 2/3.

The utilities of the players are given by the above tables.
(b) Find the Bayesian equilibria.
Solution: Note that

u1(SS, S) = (1/3) × 1 + (2/3) × 0 = 1/3,   u1(SS, B) = (1/3) × 0 + (2/3) × 0 = 0,
u1(SB, S) = (1/3) × 1 + (2/3) × 0 = 1/3,   u1(SB, B) = (1/3) × 0 + (2/3) × 2 = 4/3,
u1(BS, S) = (1/3) × 0 + (2/3) × 0 = 0,     u1(BS, B) = (1/3) × 0 + (2/3) × 0 = 0,
u1(BB, S) = (1/3) × 0 + (2/3) × 0 = 0,     u1(BB, B) = (1/3) × 0 + (2/3) × 2 = 4/3,

and

u2(SS, S) = (1/3) × 1 + (2/3) × 0 = 1/3,   u2(SS, B) = (1/3) × 0 + (2/3) × 0 = 0,
u2(SB, S) = (1/3) × 1 + (2/3) × 0 = 1/3,   u2(SB, B) = (1/3) × 0 + (2/3) × 2 = 4/3,
u2(BS, S) = (1/3) × 0 + (2/3) × 0 = 0,     u2(BS, B) = (1/3) × 0 + (2/3) × 0 = 0,
u2(BB, S) = (1/3) × 0 + (2/3) × 0 = 0,     u2(BB, B) = (1/3) × 0 + (2/3) × 2 = 4/3.

Now, we construct the table of expected payoffs:

               P2
             S          B
     SS   1/3,1/3     0,0
P1   SB   1/3,1/3    4/3,4/3
     BS    0,0        0,0
     BB    0,0       4/3,4/3
We look for a BNE of the form

σ1 = xSS + ySB + zBS + (1 − x − y − z)BB,   σ2 = qS + (1 − q)B.

We get the table (the last column gives player 1's expected payoff from each pure strategy against σ2, and the last row gives player 2's expected payoff from S and from B against σ1):

                            P2
                      q           1 − q
                      S             B
P1          x   SS  1/3,1/3       0,0        |  q/3
            y   SB  1/3,1/3      4/3,4/3     |  4/3 − q
            z   BS   0,0          0,0        |  0
      1−x−y−z   BB   0,0         4/3,4/3     |  4/3 − 4q/3
                    (x+y)/3    (4/3)(1−x−z)

Graphically: [figure omitted] player 1's expected payoffs against q, namely q/3 (from SS), 4/3 − q (from SB) and 4/3 − 4q/3 (from BB); the payoffs from SS and BB cross at q = 4/5, and SB is maximal for all 0 < q < 1.

We see that

BR1(q) = {SB, BB} if q = 0;  {SB} if 0 < q < 1;  {SS, SB} if q = 1.

We may assume z = 0. We obtain the table

                          P2
                    q           1 − q
                    S             B
P1          x   SS  1/3,1/3      0,0        |  q/3
            y   SB  1/3,1/3     4/3,4/3     |  4/3 − q
        1−x−y   BB   0,0        4/3,4/3     |  4/3 − 4q/3
                    (x+y)/3    (4/3)(1−x)

• Is there a BNE with q = 0? In this equilibrium we must have x = 0. We obtain the table

                         P2
                  q = 0       1 − q = 1
                    S             B
P1       y   SB  1/3,1/3      4/3,4/3     |  4/3
       1−y   BB   0,0         4/3,4/3     |  4/3
                   y/3           4/3

We obtain the BNE

(ySB + (1 − y)BB, B),   0 ≤ y ≤ 1,

with payoffs u1 = u2 = 4/3.
• Is there a BNE with 0 < q < 1? In this equilibrium we must have y = 1. But then BR2(SB) = B. That is, player 2's best reply to SB is to choose q = 0. Hence, there is no BNE with 0 < q < 1.
• Is there a BNE with q = 1? In this equilibrium we must have x + y = 1. We obtain the table

                         P2
                  q = 1       1 − q = 0
                    S             B
P1       x   SS  1/3,1/3       0,0        |  1/3
       1−x   SB  1/3,1/3      4/3,4/3     |  1/3
                   1/3       (4/3)(1−x)

We need that

1/3 ≥ (4/3)(1 − x),

that is, x ≥ 3/4. We obtain the following BNE:

(xSS + (1 − x)SB, S),   3/4 ≤ x ≤ 1,

with payoffs u1 = u2 = 1/3.

Problem 7: Two individuals consider donating towards a public good. If either of the agents contributes to the public good, then it is implemented. If agent i = 1, 2 contributes to the public good, his utility is ui = 2 − ci. If agent i = 1, 2 does not contribute to the public good, but agent j ≠ i contributes to the public good, the utility of agent i is ui = 2. It is known that c1 = 1. Only agent 2 knows c2. Agent 1 knows that c2 = 1 with probability p and c2 = 3 with probability 1 − p. That is, player 2 knows which game is played (A or B below), but player 1 only knows that A is played with probability p and B is played with probability 1 − p.

Game A (c2 = 1):
                  Player 2
                C        N
Player 1  C    1,1      1,2
          N    2,1      0,0

Game B (c2 = 3):
                  Player 2
                C        N
Player 1  C    1,−1     1,2
          N    2,−1     0,0

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = {c}, T2 = {a, b}, where a represents the situation in which agent 2 knows that the payoffs are those in table A and b represents the situation in which agent 2 knows that the payoffs are those in table B. The sets of strategies are S1 = {C, N}, S2 = {CC, CN, NC, NN}. The beliefs of the players are

p1(a|c) = p,   p1(b|c) = 1 − p,   p2(c|a) = p2(c|b) = 1.

The utilities of the players are given by the above tables. For example, u1(C, N|c, a) = 1, u2(N, C|c, b) = −1.

(b) Find the Bayesian equilibria.
Solution: Note that strategy C is dominated by strategy N for player 2b. Hence, the BNE are of the form (∗, ∗N). We remark that BR2(C|a) = N and BR2(N|a) = C, so BR2(C) = NN and BR2(N) = CN. On the other hand,

u1(C, NN) = 1,   u1(N, NN) = 0,

so BR1(NN) = C. Hence, (C, NN) is a BNE for any 0 ≤ p ≤ 1. Note also that

u1(C, CN) = 1,   u1(N, CN) = 2p,

so

BR1(CN) = C if p < 1/2;  {C, N} if p = 1/2;  N if p > 1/2,

and we see that if p ≥ 1/2, then (N, CN) is a BNE, with payoffs u1 = 2p and u2 = p.

We look now for a mixed strategy BNE. Suppose player 1 uses the mixed strategy σ1 = xC + (1 − x)N and player 2a uses the mixed strategy σ2 = yC + (1 − y)N (player 2b plays N). Then the expected payoffs are

u1(C, (σ2, N)) = 1,
u1(N, (σ2, N)) = 2yp,

and

u2(σ1, CN|a) = 1,
u2(σ1, NN|a) = 2x.

Indifference requires x = 1/2 and y = 1/(2p). Hence, for p ≥ 1/2, there is a mixed strategy BNE

((1/2)C + (1/2)N, ((1/(2p))C + ((2p − 1)/(2p))N, N))

with payoffs u1 = u2 = 1.
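A quick numerical check of the mixed strategy BNE (not part of the original solution), taking p = 3/4 as an assumed example value with p ≥ 1/2:

    p = 0.75                 # assumed prior probability of type a (c2 = 1)
    x = 0.5                  # P(player 1 plays C)
    y = 1 / (2 * p)          # P(type a of player 2 plays C); type b plays N

    u1_C = 1.0                          # contributing always yields 2 - c1 = 1
    u1_N = p * (2 * y)                  # good provided only if type a contributes
    u2a_C = 1.0                         # type a contributing yields 2 - 1 = 1
    u2a_N = 2 * x                       # free riding pays 2 iff player 1 contributes
    u2b_N = 2 * x                       # type b's payoff from N (C would give -1)

    print(u1_C, u1_N, u2a_C, u2a_N, u2b_N)   # 1.0 1.0 1.0 1.0 1.0 -> indifference holds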

Problem 8: Two individuals consider donating towards a public good. If either of the agents contributes to the public good, then it is implemented. If agent i = 1, 2 contributes to the public good, his utility is ui = 2 − ci. If agent i = 1, 2 does not contribute to the public good, but agent j ≠ i contributes to the public good, the utility of agent i is ui = 2. That is,

                         Player 2
                    C                 N
Player 1   C   2 − c1, 2 − c2     2 − c1, 2
           N   2, 2 − c2          0, 0

where c1 and c2 are distributed uniformly on the interval [1, 3]; each agent knows his own cost but not the other's.

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = T2 = [1, 3]. The sets of strategies are S1 = S2 = {C, N}. The beliefs of the players are

p1(c2 ≤ c|c1) = p2(c1 ≤ c|c2) = 0 if c < 1;  (c − 1)/2 if c ∈ [1, 3];  1 if c > 3.

The utilities of the players are given by the above table.

(b) Show that there is a Bayesian equilibrium of the form (s1(c1), s2(c2)) with

si(ci) = C if ci ≤ a, and si(ci) = N if ci > a,   i = 1, 2,

for some a ∈ [1, 3].
Solution: The expected utility of agent 1 against the proposed strategy of agent 2 is

u1(C, s2(c2)) = (2 − c1) p(c2 ≤ a) + (2 − c1) p(c2 > a) = 2 − c1,
u1(N, s2(c2)) = 2 p(c2 ≤ a) + 0 × p(c2 > a) = 2(a − 1)/2 = a − 1.

Let ε ≥ 0 and let c1 = a − ε. For agent 1, strategy C is a best reply iff 2 − c1 ≥ a − 1, that is, iff 2 − a + ε ≥ a − 1. Hence, we need that

a ≤ 3/2 + ε/2   for any ε ≥ 0.

Now let c1 = a + ε. For agent 1, strategy N is a best reply iff 2 − c1 ≤ a − 1, that is, iff 2 − a − ε ≤ a − 1. Hence, we need that

3/2 − ε/2 ≤ a   for any ε ≥ 0.

We see that if a = 3/2, then the strategy

s1(c1) = C if c1 ≤ 3/2, and s1(c1) = N if c1 > 3/2,

is a best reply to s2(c2). A similar argument shows that s2(c2) is a best reply for agent 2 to s1(c1). The payoffs in this equilibrium are

ui(s1(c1), s2(c2); ci) = 2 − ci if ci ≤ 3/2, and 1/2 if ci > 3/2,   i = 1, 2.
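As a numerical check (not part of the original solution), the cutoff a can be found by bisection on the indifference condition of the cutoff type, 2 − a = 2·P(cj ≤ a) = a − 1:

    def indifference(a):
        u_C = 2.0 - a                  # payoff of the cutoff type c_i = a from contributing
        u_N = 2.0 * (a - 1.0) / 2.0    # 2 * P(c_j <= a), with costs uniform on [1, 3]
        return u_C - u_N

    lo, hi = 1.0, 3.0                  # bisection on [1, 3]
    for _ in range(60):
        mid = (lo + hi) / 2
        if indifference(mid) > 0:
            lo = mid
        else:
            hi = mid
    print((lo + hi) / 2)               # 1.5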

Problem 9:

Two risk averse individuals with utility function u(x) = √x, where x represents money, face a first price auction. Agent i = 1, 2 values the good at vi monetary units. This valuation is private information, but it is known that the vi's are random variables independently and uniformly distributed on the interval [0, 1].

(a) Describe the situation as a Bayesian game.


Solution: There are two players, N = {1, 2}. Their types are T1 = T2 = [0, 1]. The sets of strategies are Si(vi) = [0, vi], i = 1, 2. The beliefs of the players are

p1(v2 ≤ c|v1) = p2(v1 ≤ c|v2) = 0 if c < 0;  c if c ∈ [0, 1];  1 if c > 1.

The utilities of the players are

ui(b1, b2; vi) = 0 if bi < bj;  √(vi − bi)/2 if bi = bj;  √(vi − bi) if bi > bj,   i = 1, 2, i ≠ j.

(b) Find a Bayesian Nash equilibrium of the form bi(vi) = αi vi. What is the utility of each individual in equilibrium?
Solution: Suppose player 2 follows the strategy b2(v2) = α2 v2. If player 1 chooses to bid b1, his expected utility is

u1(b1|v1) = p(b1 > b2(v2)) √(v1 − b1) + p(b1 = b2(v2)) (1/2)√(v1 − b1) + 0 × p(b1 < b2(v2))
          = √(v1 − b1) p(b1 > b2(v2)),

because p(b1 = b2(v2)) = 0. Thus,

u1(b1|v1) = p(b1 > α2 v2) √(v1 − b1) = p(v2 < b1/α2) √(v1 − b1) = (b1/α2) √(v1 − b1).

The best reply of player 1 is given by the solution to

max_{b1} b1 √(v1 − b1).

The first order condition is

√(v1 − b1) = b1 / (2√(v1 − b1)),

whose solution is

b1(v1) = (2/3) v1.

Similarly, if player 1 follows the strategy b1(v1) = α1 v1, the best reply for player 2 is

b2(v2) = (2/3) v2.

Hence,

bi(vi) = (2/3) vi,   i = 1, 2,

is a BNE. The expected payoff of agent i = 1, 2 is

ui(b1(v1), b2(v2); vi) = [(2/3) vi √(vi − (2/3) vi)] / (2/3) = (1/√3) vi^(3/2).