mcnotes61
"Proof" for the case n = 1: Show first that A must be a closed interval [a, b]. Define
g(x) ≡ f(x) − x. We have g(a) ≥ 0 and g(b) ≤ 0 (since f maps [a, b] into itself). Since g is a
continuous function, by the intermediate value theorem there must exist a point c ∈ [a, b] at
which g(c) = 0. It is a fixed point of f(x).
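A quick numerical sketch of this n = 1 argument (illustration only, not from the notes): bisect on the sign change of g(x) = f(x) − x. The map f(x) = cos(x) on [0, 1] is a hypothetical example of a continuous self-map.

```python
# Bisect on g(x) = f(x) - x, whose sign changes over [a, b] because
# f maps [a, b] into itself (g(a) >= 0, g(b) <= 0).
import math

def fixed_point_bisect(f, a, b, tol=1e-12):
    g = lambda x: f(x) - x
    while b - a > tol:
        m = 0.5 * (a + b)
        if g(m) >= 0:
            a = m            # fixed point lies in [m, b]
        else:
            b = m            # fixed point lies in [a, m]
    return 0.5 * (a + b)

c = fixed_point_bisect(math.cos, 0.0, 1.0)
print(round(c, 6))  # approximately 0.739085, where cos(c) = c
```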
Remarks:
(a) A correspondence F : X → Y is upper hemicontinuous (uhc) at x ∈ X if F(x) is
non-empty and if for every sequence {x_n} in X, every sequence {y_n} with y_n ∈ F(x_n)
and every y ∈ Y, we have that x_n → x and y_n → y implies y ∈ F(x).
(b) A correspondence F : X → Y is lower hemicontinuous (lhc) at x if F(x) is
non-empty and if for every y ∈ F(x) and every sequence {x_n} s.t. x_n → x, there exists
{x_{n_k}}, a subsequence of {x_n}, and y_k ∈ F(x_{n_k}) such that y_k → y.
(c) A correspondence is continuous if it is both lower and upper hemicontinuous.
* Intuitively: uhc means that the set F(x) can "expand with a jump" at x but cannot
"shrink with a jump" (draw an example). Lhc is the opposite.
Theorem 62 (Kakutani's fixed point theorem)
Let S be a non-empty, compact, convex subset of R^n. Let F be an upper hemicontin-
uous correspondence from S into 2^S (2^S is called the power set of S and denotes
the set of all subsets of S) such that ∀x ∈ S the set F(x) is non-empty, closed and
convex. Then there exists x* ∈ S s.t. x* ∈ F(x*), a fixed point of F.
Counterexample: take

    f(x) = { 0.6          if 0 ≤ x < 0.5
           { {0.6, 0.4}   if x = 0.5
           { 0.4          if 0.5 < x ≤ 1

which satisfies all conditions apart from f(x) being convex at x = 0.5.
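A brute-force check (illustrative code, not part of the notes) confirms that this correspondence has no fixed point:

```python
# Scan a grid of [0, 1] for a point with x in F(x); none exists because the
# correspondence jumps over the 45-degree line at x = 0.5.
def F(x):
    if x < 0.5:
        return {0.6}
    if x == 0.5:
        return {0.6, 0.4}
    return {0.4}

has_fixed_point = any(x in F(x) for x in (k / 1000 for k in range(1001)))
print(has_fixed_point)  # False
```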
In the above definition Tx denotes the image of x after applying the mapping T.
Example:
Take S = [a, b] ⊂ R with ρ(x, y) = |x − y|. Then T : S → S is a CM if for some β ∈ (0, 1)

    |Tx − Ty| / |x − y| ≤ β < 1   for all x, y ∈ S, x ≠ y,

i.e. for example if T is a function with slope uniformly less than 1. A point x satisfying
Tx = x is called a fixed point of T.
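For instance (a made-up map, not from the notes), T x = 0.5x + 1 on S = [0, 4] has slope 0.5 everywhere, so the ratio above never exceeds 0.5:

```python
# Verify the contraction ratio |Tx - Ty| / |x - y| <= 0.5 on a grid of pairs,
# for the hypothetical affine map T x = 0.5 x + 1 on S = [0, 4].
T = lambda x: 0.5 * x + 1.0

pts = [k / 10 for k in range(41)]            # grid on [0, 4]
ratios = [abs(T(x) - T(y)) / abs(x - y)
          for x in pts for y in pts if x != y]
print(max(ratios) <= 0.5 + 1e-12)  # True: modulus 0.5
# The fixed point solves x = 0.5 x + 1, i.e. x = 2.
```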
Theorem 63 (Contraction mapping theorem)
If (S, ρ) is a complete metric space (i.e. every Cauchy sequence in it converges to a
point in it) and T : S → S is a contraction mapping with modulus β, then:
(a) T has a unique fixed point in S, denoted v
(b) For any v0 ∈ S, ρ(T^n v0, v) ≤ β^n ρ(v0, v), n = 1, 2, ..., where T^n x means
applying T n times on x, i.e. T^{n+1} x = T(T^n x).
Note that (b) bounds the distance between the n-th approximation and the fixed point.
However, if v is not known this bound is unknown too and thus useless. Instead we can show
that for any v0 ∈ S

    ρ(T^n v0, v) ≤ [1/(1 − β)] ρ(T^n v0, T^{n+1} v0).
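To see the computable bound at work, take a toy contraction (T x = 0.5x + 1 with fixed point v = 2; an assumed example, not from the notes):

```python
# Compare the true error rho(T^n v0, v) with the computable bound
# (1/(1-beta)) * rho(T^n v0, T^{n+1} v0) along the iteration.
beta = 0.5
T = lambda x: beta * x + 1.0
v = 2.0                       # the known fixed point: 2 = 0.5*2 + 1
x = 10.0                      # starting guess v0
ok = True
for n in range(1, 21):
    x = T(x)                  # x is now T^n v0
    bound = abs(T(x) - x) / (1.0 - beta)
    ok = ok and abs(x - v) <= bound + 1e-12
print(ok)  # True: the bound holds at every step
```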
Corollary 1
Let (S, ρ) be a complete metric space and T : S → S be a CM with a fixed point
v ∈ S. If S′ is a closed subset of S and T(S′) ⊆ S′ then v ∈ S′.
The above statement tells us that if T, applied repeatedly, shrinks the set S, the fixed
point stays inside the resulting set. This result is very useful for some game-theoretic
applications.
Corollary 2 (N-stage contraction theorem)
Let (S, ρ) be a complete metric space, T : S → S, and suppose that for some integer
N ≥ 1, T^N : S → S is a CM with modulus β, where T^N means applying T N times.
Then:
(a) T has exactly one fixed point in S, v
(b) For any v0 ∈ S, ρ(T^{kN} v0, v) ≤ β^k ρ(v0, v), k = 0, 1, ...
The above theorems do not tell us, however, how we can find those contraction mappings,
i.e. how to check whether an operator T is a CM. The following result provides sufficient conditions.
* Example: exp(−x) is not a CM on R₊ (its |slope| = exp(−x) has supremum 1 at x = 0, so no
modulus β < 1 works) but exp(−exp(−x)) is. Why? |d/dx exp(−exp(−x))| =
|exp(−x − exp(−x))| ≤ e^{−1} < 1, since x + exp(−x) ≥ 1 for all x ≥ 0.
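Iterating the composed map illustrates the convergence the contraction mapping theorem predicts (illustrative code):

```python
# T x = exp(-exp(-x)) is a contraction on R_+ with modulus e^{-1}, so
# iteration from any starting point converges to its unique fixed point.
import math

T = lambda x: math.exp(-math.exp(-x))
x = 5.0
for _ in range(100):
    x = T(x)
print(abs(T(x) - x) < 1e-12)  # True: x is (numerically) the fixed point
```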
Theorem 64 (Blackwell's conditions)
Let X ⊆ R^l and B(X) be a space of bounded functions f : X → R with the sup
norm (i.e. ||f|| = sup_{t∈X} |f(t)|). Let T : B(X) → B(X) be an operator satisfying:
(i) monotonicity: if f, g ∈ B(X) and f(x) ≤ g(x) ∀x ∈ X then (Tf)(x) ≤
(Tg)(x), ∀x ∈ X.
(ii) discounting: ∃β ∈ (0, 1) s.t. (T(f + a))(x) ≤ (Tf)(x) + βa, ∀f ∈ B(X),
∀a ≥ 0, ∀x ∈ X, where (f + a)(x) is the function defined by (f + a)(x) = f(x) + a.
Then T is a contraction mapping with modulus β.
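As a sanity check, here is a hypothetical Bellman-type operator on a finite grid together with numerical tests of (i) and (ii). The operator, the sqrt utility, and the grid size are my own example choices, not from the notes:

```python
# (T f)(x) = max_{0 <= y <= x} [ sqrt(x - y) + beta * f(y) ] on X = {0,...,N},
# with numerical checks of Blackwell's monotonicity and discounting conditions.
import math, random

random.seed(1)
N, beta = 10, 0.9
X = range(N + 1)

def T(f):
    return [max(math.sqrt(x - y) + beta * f[y] for y in range(x + 1)) for x in X]

f = [random.uniform(0, 5) for _ in X]
g = [fi + random.uniform(0, 5) for fi in f]      # g >= f pointwise
a = 2.0

# (i) monotonicity: f <= g implies T f <= T g
mono = all(tf <= tg for tf, tg in zip(T(f), T(g)))
# (ii) discounting: T(f + a) <= T f + beta * a
disc = all(t1 <= t2 + beta * a + 1e-12
           for t1, t2 in zip(T([fi + a for fi in f]), T(f)))
print(mono and disc)  # True
```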
1.2 Dynamic programming
1.2.1 Preliminaries
We will be interested in problems of the type (these arise all the time in macroeconomics):

    sup_{{x_{t+1}}_{t=0}^∞}  Σ_{t=0}^∞ β^t F(x_t, x_{t+1})        (SP)
...etc.
Note that continuing in this recursive way the subscripts become irrelevant, as all we care about
is what the current-period value x_t is. That is, the maximization problem is exactly the same
at any period (e.g., see Example below: still a cake has to be eaten over infinitely many remaining
periods by the exact same consumer); the only thing that changes is the initial value. Hence,
intuitively we can re-write the original (SP) problem as a "generic" problem with current value
x in which we search for the function v that satisfies:

    v(x) = sup_{y∈Γ(x)} {F(x, y) + βv(y)}        (FE)
The above is an equation in the unknown function v and is called the Functional Equation
(FE) while v which satis es it is called the value function. Dynamic programming deals with
solving dynamic optimization problems using FEs.
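For the cake-eating example mentioned above, the (FE) can be solved numerically by iterating the Bellman operator on a grid. A minimal sketch, assuming log utility and β = 0.9 (my choices, not specified in the notes):

```python
# Value function iteration for v(x) = max_{y in Gamma(x)} [ log(x - y) + beta v(y) ],
# with Gamma(x) the cake sizes below x: eat x - y today, keep y for tomorrow.
import math

beta = 0.9
grid = [0.01 * k for k in range(1, 101)]      # cake sizes 0.01, ..., 1.00

def bellman(v):
    new = []
    for i, x in enumerate(grid):
        if i == 0:
            new.append(math.log(x))            # smallest cake: eat it all (grid approximation)
        else:
            new.append(max(math.log(x - grid[j]) + beta * v[j] for j in range(i)))
    return new

v = [0.0] * len(grid)
for _ in range(400):
    v = bellman(v)                             # converges by the CM theorem

w = bellman(v)
print(max(abs(a - b) for a, b in zip(w, v)) < 1e-8)  # True: v has converged
```

The iteration converges at geometric rate β, exactly as Theorem 63(b) predicts for the Bellman operator.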
We will study the relationship between the solutions to the (SP) and the (FE) and develop
methods to analyze the latter. Let us start with some important de nitions.
Definition (Graph)
Let Γ : X → Y be a correspondence and define the set

    A = {(x, y) : y ∈ Γ(x)}.

The set A is called the graph of Γ.
* Example: take x ∈ [0, 1] and let Γ(x) = [0, x]. Then the graph of Γ is the area below the
45-degree line between x = 0 and x = 1 in a two-dimensional graph with x on the horizontal
axis and values from Γ(x) on the vertical axis.
Notice that the above theorem provides conditions under which a value function would be
continuous and under which the set G(x) of maximizers of f will be non-empty and compact-
valued. The following results are related to the theorem of the maximum.
Corollary
These ideas, first stated by Bellman, are known as the Principle of Optimality. In the
discussion below we will study the conditions under which the Principle of Optimality holds.
Let us start by defining the notation and terminology which will be used in this section:
Notation:
- X is the set of all possible values for the so-called state variable x and is called the
state space.
- Γ : X → X is a correspondence describing the feasibility constraints or the feasible
set.
- A = {(x, y) ∈ X × X : y ∈ Γ(x)} is the graph of Γ.
- F : A → R is called the return function.
- β ≥ 0 is called the discount factor.
Thus X, Γ, F and β are the givens in our problem. First, we need to establish conditions
under which (SP) is well-defined, i.e. the feasible set is non-empty and the objective function
is well defined for all points in the feasible set. Call a sequence {x_t}_{t=0}^∞ in X a plan. Given
x0 ∈ X let

    Π(x0) = {{x_t}_{t=0}^∞ : x_{t+1} ∈ Γ(x_t), t = 0, 1, ...}
Next, we need to ensure that the objective function is well de ned. We make the following:
Assumption A2: For all x0 ∈ X and all x̃ ∈ Π(x0), lim_{n→∞} Σ_{t=0}^n β^t F(x̃_t, x̃_{t+1}) exists.
There are many ways to satisfy A2; the simplest is to assume that F is bounded (for this
it may help to assume/know that X is bounded) and β ∈ (0, 1). For each n = 0, 1, ... define
u_n : Π(x0) → R as:

    u_n(x̃) ≡ Σ_{t=0}^n β^t F(x̃_t, x̃_{t+1})
that is, u_n is the partial sum from a feasible plan x̃. Using A2 we can then also define:

    u(x̃) ≡ lim_{n→∞} u_n(x̃)
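Under boundedness (|F| ≤ B on A) and β ∈ (0, 1), the limit in A2 indeed exists, by comparison with a geometric series:

```latex
% Absolute convergence of the partial sums u_n under |F| \le B, \beta \in (0,1):
\left| u_n(\tilde{x}) \right|
  \le \sum_{t=0}^{n} \beta^{t} \left| F(\tilde{x}_t, \tilde{x}_{t+1}) \right|
  \le B \sum_{t=0}^{\infty} \beta^{t}
  = \frac{B}{1-\beta}
```

so the series converges absolutely and u(x̃) = lim_{n→∞} u_n(x̃) is well defined.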
Since under A1-A2 the objective function is well-defined and the set of feasible plans is non-
empty, we can define the supremum function v* : X → R by:

    v*(x0) = sup_{x̃∈Π(x0)} u(x̃)
Definition ("satisfies the FE")
Suppose |v*(x0)| < ∞. We say that v* "satisfies the (FE)" if the following conditions hold:
(i) ∀y ∈ Γ(x0),

    v*(x0) ≥ F(x0, y) + βv*(y)

(ii) ∀ε > 0, ∃y ∈ Γ(x0) so that

    v*(x0) ≤ F(x0, y) + βv*(y) + ε.
We are now ready for our first result on the relationship between v* in (SP) as defined above
and the function v in (FE).