Note2c MathDynamicProgramming
Mathematical Preliminaries
where $r$ is the period return function (such as the utility function) and $\Gamma$ is the constraint set. Note that for the neoclassical growth model $x = k$, $y = k'$, $F(k, k') = U(f(k) - k')$ and $\Gamma(k) = \{k' \in \mathbb{R} : 0 \leq k' \leq f(k)\}$.
In order to do so we define the following operator $T$. This operator $T$ takes the function $v$ as input and spits out a new function $Tv$. In this sense $T$ is like a regular function, but it takes as inputs not scalars $z \in \mathbb{R}$ or vectors $z \in \mathbb{R}^n$, but functions $v$ from some subset of possible functions. A solution to the functional equation is then a fixed point of this operator, i.e. a function $v^*$ such that
$$v^* = Tv^*$$
We want to find out under what conditions the operator $T$ has a fixed point (existence), under what conditions it is unique, and under what conditions we can start from an arbitrary function $v$ and converge, by applying the operator $T$ repeatedly, to $v^*$. More precisely, by defining the sequence of functions $\{v_n\}_{n=0}^{\infty}$ recursively by $v_0 = v$ and $v_{n+1} = Tv_n$, we want to ask under what conditions $\lim_{n\to\infty} v_n = v^*$.
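The iteration just described can be sketched in a few lines of code. As an illustration (my own toy example, not from the notes), take the scalar contraction $T(v) = 1 + \beta v$ with $\beta = 0.5$, whose unique fixed point solves $v^* = 1 + \beta v^*$, i.e. $v^* = 2$:

```python
# Toy illustration: iterate v_{n+1} = T v_n for the scalar contraction
# T(v) = 1 + beta * v with beta = 0.5. The unique fixed point solves
# v* = 1 + beta * v*, i.e. v* = 1 / (1 - beta) = 2.
beta = 0.5

def T(v):
    return 1.0 + beta * v

v = 100.0  # arbitrary ("as crazy as can be") initial guess v_0
for n in range(60):
    v = T(v)

print(abs(v - 2.0) < 1e-12)  # True: the iterates converge to v* = 2
```

The error shrinks like $\beta^n$, which previews the geometric convergence rate asserted by the Contraction Mapping Theorem below.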
In order to make these questions (and the answers to them) precise we have to define the domain and range of the operator $T$ and we have to define what we mean by $\lim$. This requires a discussion of complete metric spaces. In the next subsection I will first define what a metric space is and then what makes a metric space complete.
Then I will state and prove the Contraction Mapping Theorem. This theorem states that an operator $T$, defined on a complete metric space, has a unique fixed point if $T$ is a contraction.
A metric space is a set $S$ together with a function $d : S \times S \to \mathbb{R}$ such that for all $x, y, z \in S$:
1. $d(x,y) \geq 0$
2. $d(x,y) = 0$ if and only if $x = y$
3. $d(x,y) = d(y,x)$
4. $d(x,z) \leq d(x,y) + d(y,z)$
The function $d$ is called a metric and is used to measure the distance between two elements in $S$. The third property is usually referred to as symmetry, the fourth as the triangle inequality (because of its geometric interpretation in $\mathbb{R}^2$). Examples of metric spaces $(S,d)$ include
Example 17 $S = \mathbb{R}$ with metric $d(x,y) = 1$ if $x \neq y$ and $d(x,y) = 0$ otherwise.
Example 18 $S = l_\infty = \{x = \{x_t\}_{t=0}^{\infty} : x_t \in \mathbb{R} \text{ for all } t \geq 0 \text{ and } \sup_t |x_t| < \infty\}$ with metric $d(x,y) = \sup_t |x_t - y_t|$.
Claim 20 $S = \mathbb{R}$ with metric $d(x,y) = 1$ if $x \neq y$ and $d(x,y) = 0$ otherwise is a metric space.
Proof. We have to show that the function $d$ satisfies all four properties in the definition. The first three properties are obvious. For the fourth property: if $x = z$, the result follows immediately. So suppose $x \neq z$. Then $d(x,z) = 1$. But then either $y \neq x$ or $y \neq z$ (or both), so that $d(x,y) + d(y,z) \geq 1$.
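The triangle inequality of the discrete metric can also be checked mechanically. A small sketch (my own illustration, not from the notes) that verifies it over all triples from a sample of points:

```python
# Numerical sanity check of the triangle inequality
# d(x, z) <= d(x, y) + d(y, z) for the discrete metric on R.
import itertools

def d(x, y):
    # the discrete metric: distance 1 between distinct points, 0 otherwise
    return 0.0 if x == y else 1.0

points = [-1.5, 0.0, 0.0, 2.0, 3.7]
for x, y, z in itertools.product(points, repeat=3):
    assert d(x, z) <= d(x, y) + d(y, z)
print("triangle inequality holds on the sample")
```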
Since this is true for all $t$, we can apply the sup to both sides to obtain the result (note that the sup on both sides is finite).
Proof. Take arbitrary $f, g \in C(X)$. $f = g$ means that $f(x) = g(x)$ for all $x \in X$. Since $f, g$ are bounded, $\sup_{x \in X} |f(x)| < \infty$ and $\sup_{x \in X} |g(x)| < \infty$, so $\sup_{x \in X} |f(x) - g(x)| < \infty$. Properties 1 through 3 are obvious, and for property 4 we use the same argument as before, including the fact that $f, g \in C(X)$ implies that $\sup_{x \in X} |f(x) - g(x)| < \infty$.
For easy examples of sequences it is no problem to guess the limit. Note that the limit of a sequence, if it exists, is always unique (you should prove this for yourself). For not so easy examples this may not work. There is an alternative criterion of convergence, due to Cauchy.
So it turns out that the sequence in the last example both converges and is a
Cauchy sequence. This is not an accident. In fact, one can prove the following
Theorem 27 Suppose that $(S,d)$ is a metric space and that the sequence $\{x_n\}_{n=0}^{\infty}$ converges to $x \in S$. Then the sequence $\{x_n\}_{n=0}^{\infty}$ is a Cauchy sequence.
Proof. Since $\{x_n\}_{n=0}^{\infty}$ converges to $x$, there exists $M_{\frac{\varepsilon}{2}}$ such that $d(x_n, x) < \frac{\varepsilon}{2}$ for all $n \geq M_{\frac{\varepsilon}{2}}$. Then for all $n, m \geq M_{\frac{\varepsilon}{2}}$ the triangle inequality implies $d(x_n, x_m) \leq d(x_n, x) + d(x, x_m) < \frac{\varepsilon}{2} + \frac{\varepsilon}{2} = \varepsilon$, so the sequence is Cauchy.
Example 28 Take $S = \mathbb{R}$ with $d(x,y) = 1$ if $x \neq y$ and $d(x,y) = 0$ otherwise. Define $\{x_n\}_{n=0}^{\infty}$ by $x_n = \frac{1}{n}$. Obviously $d(x_n, x_m) = 1$ for all $n \neq m$. Therefore the sequence is not a Cauchy sequence. It then follows from the preceding theorem (by taking the contrapositive) that the sequence cannot converge. This example shows that, whenever discussing a metric space, it is absolutely crucial to specify the metric.
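A quick numerical illustration of the contrast (my own sketch, not part of the notes): the same sequence converges under the usual metric but its terms stay a fixed distance apart under the discrete metric.

```python
# The sequence x_n = 1/n converges to 0 under the usual metric |x - y|,
# but under the discrete metric every pair of distinct terms is at distance 1,
# so the sequence is not Cauchy and therefore cannot converge.
def d_discrete(x, y):
    return 0.0 if x == y else 1.0

xs = [1.0 / n for n in range(1, 101)]

# usual metric: the terms approach 0
print(abs(xs[-1] - 0.0))            # 0.01, shrinking toward 0 as n grows

# discrete metric: distances between distinct terms never shrink
print(d_discrete(xs[50], xs[99]))   # 1.0
```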
This theorem tells us that every convergent sequence is a Cauchy sequence.
The reverse does not always hold, but it is such an important property that
when it holds, it is given a particular name.
Definition 29 A metric space $(S,d)$ is complete if every Cauchy sequence $\{x_n\}_{n=0}^{\infty}$ with $x_n \in S$ for all $n$ converges to some $x \in S$.
Note that the definition requires that the limit $x$ lie within $S$. We are interested in complete metric spaces since the Contraction Mapping Theorem deals with operators $T : S \to S$, where $(S,d)$ is required to be a complete metric space. Also note that there are important examples of complete metric spaces, but also examples of metric spaces that are not complete (and for which the Contraction Mapping Theorem does not apply).
Example 30 Let $S$ be the set of all continuous, strictly decreasing functions on $[1,2]$ and let the metric on $S$ be defined as $d(f,g) = \sup_{x \in [1,2]} |f(x) - g(x)|$. I claim that $(S,d)$ is not a complete metric space. This can be proved by an example of a sequence of functions $\{f_n\}_{n=0}^{\infty}$ that is a Cauchy sequence, but does not converge within $S$. Define $f_n : [1,2] \to \mathbb{R}$ by $f_n(x) = \frac{1}{nx}$. Obviously all $f_n$ are continuous and strictly decreasing on $[1,2]$; hence $f_n \in S$ for all $n$. Let us first prove that this sequence is a Cauchy sequence. Fix $\varepsilon > 0$ and take $N_\varepsilon = \frac{2}{\varepsilon}$. Suppose that $m, n \geq N_\varepsilon$ and without loss of generality assume that $m > n$. Then
$$d(f_n, f_m) = \sup_{x \in [1,2]} \left| \frac{1}{nx} - \frac{1}{mx} \right| = \sup_{x \in [1,2]} \frac{m-n}{mnx} = \frac{m-n}{mn} = \frac{1}{n}\left(1 - \frac{n}{m}\right) \leq \frac{1}{n} \leq \frac{1}{N_\varepsilon} = \frac{\varepsilon}{2} < \varepsilon$$
Hence the sequence is a Cauchy sequence. But since for all $x \in [1,2]$, $\lim_{n\to\infty} f_n(x) = 0$, the sequence converges to the function $f$ defined as $f(x) = 0$ for all $x \in [1,2]$. But obviously, since $f$ is not strictly decreasing, $f \notin S$. Hence $(S,d)$ is not a complete metric space. Note that if we choose $S$ to be the set of all continuous and (weakly) decreasing (or increasing) functions on $[1,2]$, then $S$, together with the sup-metric, is a complete metric space.
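The example can be illustrated numerically. The following sketch (my own, approximating the sup over $[1,2]$ by a finite grid) shows the distances $d(f_n, f_m)$ shrinking while the limit function is the constant $0$:

```python
# Grid-based illustration of Example 30: f_n(x) = 1/(n x) on [1,2] is Cauchy
# in the sup-metric, but its limit is the constant function 0, which is not
# strictly decreasing. (The sup over a finite grid is an approximation.)
def sup_dist(f, g, grid):
    return max(abs(f(x) - g(x)) for x in grid)

grid = [1.0 + i / 1000.0 for i in range(1001)]  # grid on [1, 2]
f = lambda n: (lambda x: 1.0 / (n * x))

# Cauchy: d(f_n, f_m) = 1/n - 1/m, small for large n, m
print(sup_dist(f(100), f(200), grid))   # 0.005, i.e. 1/100 - 1/200

# distance to the pointwise limit f(x) = 0 vanishes as n grows
zero = lambda x: 0.0
print(sup_dist(f(1000), zero, grid))    # 0.001, i.e. 1/1000
```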
Example 31 Let $S = \mathbb{R}^L$ and $d(x,y) = \left( \sum_{l=1}^{L} |x_l - y_l|^L \right)^{1/L}$. $(S,d)$ is a complete metric space. This is easily proved by proving the following three lemmata (which is left to the reader).
Example 32 This last example is very important for the applications we are interested in. Let $X \subseteq \mathbb{R}^L$ and let $C(X)$ be the set of all bounded continuous functions $f : X \to \mathbb{R}$, with $d$ being the sup-metric. Then $(C(X), d)$ is a complete metric space.
Proof. (This follows SLP, p. 48.) We already proved that $(C(X), d)$ is a metric space. Now we want to prove that this space is complete. Let $\{f_n\}_{n=0}^{\infty}$ be an arbitrary sequence of functions in $C(X)$ which is Cauchy. We need to establish the existence of a function $f \in C(X)$ such that for all $\varepsilon > 0$ there exists $N_\varepsilon$ satisfying $\sup_{x \in X} |f_n(x) - f(x)| < \varepsilon$ for all $n \geq N_\varepsilon$.
We will proceed in three steps: a) find a candidate for $f$; b) establish that the sequence $\{f_n\}_{n=0}^{\infty}$ converges to $f$ in the sup-metric; and c) show that $f \in C(X)$.
1. Since $\{f_n\}_{n=0}^{\infty}$ is Cauchy, for each $\varepsilon > 0$ there exists $M_\varepsilon$ such that $\sup_{x \in X} |f_n(x) - f_m(x)| < \varepsilon$ for all $n, m \geq M_\varepsilon$. Now fix a particular $x \in X$. Then $\{f_n(x)\}_{n=0}^{\infty}$ is just a sequence of real numbers, and it is Cauchy; since $\mathbb{R}$ is complete it converges, and we take $f(x) = \lim_{n\to\infty} f_n(x)$ as our candidate.
But since $\{f_n\}_{n=0}^{\infty}$ converges to $f$, there exists $N_\varepsilon$ such that $\sup_{x \in X} |f(x) - f_n(x)| < \varepsilon$ for all $n \geq N_\varepsilon$. Fix an $\varepsilon$ and take $K = K_{N_\varepsilon} + 2\varepsilon$. It is obvious that $\sup_{x \in X} |f(x)| \leq K$. Hence $f$ is bounded. Finally we prove continuity of $f$. Let us choose the metric on $\mathbb{R}^L$ to be $\|x - y\| = \left( \sum_{l=1}^{L} |x_l - y_l|^L \right)^{1/L}$. We need to show that for every $\varepsilon > 0$ and every $x \in X$ there exists a $\delta(\varepsilon, x) > 0$ such that if $\|x - y\| < \delta(\varepsilon, x)$ then $|f(x) - f(y)| < \varepsilon$, for all $y \in X$. Fix $\varepsilon$ and $x$. Pick a $k$ large enough so that $d(f_k, f) < \frac{\varepsilon}{3}$ (which is possible as $\{f_n\}_{n=0}^{\infty}$ converges to $f$). Choose $\delta(\varepsilon, x) > 0$ such that $\|x - y\| < \delta(\varepsilon, x)$ implies $|f_k(x) - f_k(y)| < \frac{\varepsilon}{3}$. Since all $f_n \in C(X)$, $f_k$ is continuous and hence such a $\delta(\varepsilon, x) > 0$ exists. Now
$$|f(x) - f(y)| \leq |f(x) - f_k(x)| + |f_k(x) - f_k(y)| + |f_k(y) - f(y)| \leq d(f, f_k) + |f_k(x) - f_k(y)| + d(f_k, f) < \frac{\varepsilon}{3} + \frac{\varepsilon}{3} + \frac{\varepsilon}{3} = \varepsilon$$
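A concrete illustration of completeness (my own example, with $X = [0,1]$): the partial sums of the exponential series are Cauchy in the sup-metric, and their limit $e^x$ is again bounded and continuous, consistent with $(C(X), d)$ being complete.

```python
# Partial sums f_n(x) = sum_{k=0}^n x^k / k! form a Cauchy sequence in
# C([0,1]) under the sup-metric; their limit exp(x) is bounded and continuous.
# (Sup over a finite grid is an approximation to the true sup-metric.)
import math

def f(n, x):
    return sum(x**k / math.factorial(k) for k in range(n + 1))

grid = [i / 100.0 for i in range(101)]  # grid on [0, 1]

def sup_dist(n, m):
    return max(abs(f(n, x) - f(m, x)) for x in grid)

print(sup_dist(10, 20) < 1e-7)   # True: the tail of the series is tiny (Cauchy)
print(max(abs(f(20, x) - math.exp(x)) for x in grid) < 1e-12)  # True: limit is exp
```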
We now can state and prove the Contraction Mapping Theorem. Let $v_n = T^n v_0 \in S$ denote the element in $S$ that is obtained by applying the operator $T$ $n$ times to $v_0$, i.e. the $n$-th element in the sequence starting with an arbitrary $v_0$ and defined recursively by $v_n = T v_{n-1} = T(T v_{n-2}) = \cdots = T^n v_0$. Then we have
A few remarks before the proof. Part a) of the theorem tells us that there is a $v^* \in S$ satisfying $v^* = T v^*$ and that there is only one such $v^* \in S$. Part b) asserts that from any starting guess $v_0$, the sequence $\{v_n\}_{n=0}^{\infty}$ as defined recursively above converges to $v^*$ at a geometric rate of $\beta$. This last part is important for computational purposes, as it makes sure that by repeatedly applying $T$ to any (as crazy as can be) initial guess $v_0 \in S$ we will eventually converge to the unique fixed point, and it gives us a lower bound on the speed of convergence. But now to the proof.
Proof. First we prove part a). Start with an arbitrary $v_0$. As our candidate for a fixed point we take $v^* = \lim_{n\to\infty} v_n$. We first have to establish that the sequence $\{v_n\}_{n=0}^{\infty}$ in fact converges to a function $v^*$. We then have to show that this $v^*$ satisfies $v^* = T v^*$, and finally that there is no other $\hat{v}$ that also satisfies $\hat{v} = T\hat{v}$.
where we used the way the sequence $\{v_n\}_{n=0}^{\infty}$ was constructed, i.e. the fact that $v_{n+1} = T v_n$. For any $m > n$ it then follows from the triangle inequality that
Note that the fact that $T(\lim_{n\to\infty} v_n) = \lim_{n\to\infty} T(v_n)$ follows from the continuity of $T$.
Now we want to prove that the fixed point of $T$ is unique. Suppose there exists another $\hat{v} \in S$ such that $\hat{v} = T\hat{v}$ and $\hat{v} \neq v^*$. Then there exists $a > 0$ such that $d(\hat{v}, v^*) = a$. But
$$0 < a = d(\hat{v}, v^*) = d(T\hat{v}, T v^*) \leq \beta d(\hat{v}, v^*) = \beta a < a$$
a contradiction. Here the second equality follows from the fact that we assumed that both $\hat{v}, v^*$ are fixed points of $T$, and the inequality follows from the fact that $T$ is a contraction.
We prove part b) by induction. For $n = 0$ (using the convention that $T^0 v_0 = v_0$) the claim automatically holds. Now suppose that
$$d(T^k v_0, v^*) \leq \beta^k d(v_0, v^*)$$
(Footnote: $d(v_n, v^*) < \delta(\varepsilon)$ implies $d(T(v_n), T(v^*)) < \varepsilon$. Hence the sequence $\{T(v_n)\}_{n=0}^{\infty}$ converges and $\lim_{n\to\infty} T(v_n)$ is well-defined. We showed that $\lim_{n\to\infty} v_n = v^*$. Hence both $\lim_{n\to\infty} T(v_n)$ and $\lim_{n\to\infty} v_n$ are well-defined. Then obviously $\lim_{n\to\infty} T(v_n) = T(v^*) = T(\lim_{n\to\infty} v_n)$.)
But
$$d(T^{k+1} v_0, v^*) = d(T(T^k v_0), T v^*) \leq \beta d(T^k v_0, v^*) \leq \beta^{k+1} d(v_0, v^*)$$
where the first inequality follows from the fact that $T$ is a contraction and the second follows from the induction hypothesis.
The following corollary, which I will state without proof, will be very useful in establishing properties (such as continuity, monotonicity, concavity) of the unique fixed point $v^*$ and the associated policy correspondence.
$$[T(f + a)](x) \leq [Tf](x) + \beta a$$
Proof. In terms of notation, if $f, g \in B(X)$ are such that $f(x) \leq g(x)$ for all $x \in X$, then we write $f \leq g$. We want to show that if the operator $T$ satisfies conditions 1. and 2. then there exists $\beta \in (0,1)$ such that for all $f, g \in B(X)$ we have that $d(Tf, Tg) \leq \beta d(f,g)$.
Fix $x \in X$. Then $f(x) - g(x) \leq \sup_{y \in X} |f(y) - g(y)|$. But this is true for all $x \in X$. So using our notation we have that $f \leq g + d(f,g)$ (which means that for any value of $x \in X$, adding the constant $d(f,g)$ to $g(x)$ gives something bigger than $f(x)$).
$$Tf \leq T[g + d(f,g)] \leq Tg + \beta d(f,g)$$
so that $Tf - Tg \leq \beta d(f,g)$. Reversing the roles of $f$ and $g$ yields $-(Tf - Tg) \leq \beta d(g,f) = \beta d(f,g)$. Therefore
$$\sup_{x \in X} |(Tf)(x) - (Tg)(x)| = d(Tf, Tg) \leq \beta d(f,g)$$
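The conclusion $d(Tf, Tg) \leq \beta d(f,g)$ can be checked numerically for a concrete monotone operator with discounting. The following sketch uses a toy Bellman-type operator on a finite grid (my own construction, for illustration only):

```python
# Numerical illustration of Blackwell's argument: a Bellman-type operator on
# functions over a finite grid is monotone and satisfies discounting, hence
# d(T f, T g) <= beta * d(f, g) in the sup-metric.
import random

beta = 0.9
grid = list(range(10))  # state space {0, ..., 9}

def T(f):
    # Tf(x) = max over feasible y <= x of { return 0.1*(x - y) + beta * f(y) }
    return [max(0.1 * (x - y) + beta * f[y] for y in range(x + 1)) for x in grid]

def d(f, g):
    # sup-metric on the grid
    return max(abs(a - b) for a, b in zip(f, g))

random.seed(0)
f = [random.uniform(-5, 5) for _ in grid]
g = [random.uniform(-5, 5) for _ in grid]
print(d(T(f), T(g)) <= beta * d(f, g))  # True: T contracts with modulus beta
```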
$$[T(f + a)](x) \leq [Tf](x) + \beta a$$
Define as our metric space $(B[0,\infty), d)$ the space of bounded functions on $[0,\infty)$ with $d$ being the sup-metric. We want to argue that this operator has a unique fixed point, and we want to apply Blackwell's theorem and the CMT. So let us verify that all the hypotheses of Blackwell's theorem are satisfied.
1. First we have to verify that the operator $T$ maps $B[0,\infty)$ into itself (this is very often forgotten). So if we take $v$ to be bounded, since we assumed that $U$ is bounded, then $Tv$ is bounded. Note that you may be in big trouble here if $U$ is not bounded.
$$= Tw(k)$$
Even applying the policy $g_v(k)$ (which need not be optimal for the situation in which the value function is $w$) gives a value at least as large as $Tv(k)$; choosing the policy for $w$ optimally can only improve the value $(Tw)(k)$. Hence $T$ is monotone. For discounting,
$$[T(v + a)](k) = \max_{0 \leq k' \leq f(k)} \{U(f(k) - k') + \beta (v(k') + a)\} = Tv(k) + \beta a$$
(Footnote: Somewhat surprisingly, in many applications the problem is that $u$ is not bounded below.)
Hence the neoclassical growth model with bounded utility satisfies the sufficient conditions for a contraction, and there is a unique fixed point to the functional equation that can be computed from any starting guess $v_0$ by repeated application of the $T$-operator.
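This suggests a direct computational procedure: discretize the state space and apply $T$ repeatedly. The following sketch implements value function iteration for the neoclassical growth model, assuming (my choice, for illustration) log utility, Cobb-Douglas production with full depreciation, and $\beta = 0.95$:

```python
# Value function iteration for the neoclassical growth model on a grid --
# a minimal sketch, assuming U(c) = ln(c), f(k) = k**alpha (full depreciation)
# and beta = 0.95. These functional forms are illustrative assumptions.
import math

alpha, beta = 0.3, 0.95
grid = [0.05 + 0.01 * i for i in range(30)]   # capital grid

def T(v):
    """Bellman operator: (Tv)(k) = max_{0 <= k' <= f(k)} U(f(k)-k') + beta v(k')."""
    Tv = []
    for k in grid:
        y = k ** alpha
        Tv.append(max(math.log(y - kp) + beta * v[j]
                      for j, kp in enumerate(grid) if kp < y))
    return Tv

v = [0.0] * len(grid)          # arbitrary starting guess v_0
for _ in range(1000):          # repeated application of T
    v_new = T(v)
    diff = max(abs(a - b) for a, b in zip(v_new, v))
    v = v_new
    if diff < 1e-10:           # stop once successive iterates are (numerically) equal
        break
print(diff < 1e-10)            # True: the iterates have converged on the grid
```

The geometric rate from the theorem guarantees that the stopping criterion is reached in a number of iterations of order $\log(1/\text{tol})/\log(1/\beta)$.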
One can also prove some theoretical properties of the Howard improvement algorithm using the Contraction Mapping Theorem and Blackwell's conditions. Even though we could state the results in more generality, we will confine our discussion to the neoclassical growth model. Remember that the Howard improvement algorithm iterates on feasible policies [TBC]
The function $h$ gives the value of the maximization problem, conditional on the state $x$. We define
Hence $G$ is the set of all choices $y$ that attain the maximum of $f$, given the state $x$; i.e. $G(x)$ is the set of arg maxes. Note that $G(x)$ need not be single-valued.
In the example that we study, the function $f$ will consist of the sum of the current return function $r$ and the continuation value $v$, and the constraint set $\Gamma$ describes the resource constraint. The Theorem of the Maximum is also widely used in microeconomics. There, most frequently $x$ consists of prices and income, $f$ is the (static) utility function, the function $h$ is the indirect utility function, $\Gamma$ is the budget set and $G$ is the set of consumption bundles that maximize utility at $x = (p, m)$.
Before stating the theorem we need a few definitions. Let $X, Y$ be arbitrary sets (in what follows we will mostly be concerned with situations in which $X$ and $Y$ are subsets of Euclidean spaces). A correspondence $\Gamma : X \rightrightarrows Y$ maps each element $x \in X$ into a subset $\Gamma(x)$ of $Y$. Hence the image of the point $x$ under $\Gamma$ may consist of more than one point (in contrast to a function, for which the image of $x$ always consists of a single point).
The proof is somewhat tedious and omitted here (you probably have done
it in micro anyway).
Chapter 5
Dynamic Programming
has a unique solution which is approached from any initial guess $v_0$ at geometric speed. What we were really interested in, however, was a problem of sequential form $(SP)$:
$$w(x_0) = \sup_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t F(x_t, x_{t+1})$$
$$\text{s.t. } x_{t+1} \in \Gamma(x_t)$$
$$x_0 \in X \text{ given}$$
Note that I replaced max with sup since we have not made any assumptions so far that would guarantee that the maximum in either the functional equation or the sequential problem exists. In this section we want to find out under what conditions the functions $v$ and $w$ are equal, and under what conditions optimal sequential policies $\{x_{t+1}\}_{t=0}^{\infty}$ are equivalent to optimal policies $y = g(x)$ from the recursive problem, i.e. under what conditions the principle of optimality holds. It turns out that these conditions are very mild.
In this section I will try to state the main results and make clear what they mean; I will not prove the results. The interested reader is invited to consult Stokey and Lucas or Bertsekas. Unfortunately, to make our results precise additional notation is needed. Let $X$ be the set of possible values that the state $x$ can take. $X$ may be a subset of a Euclidean space, a set of functions, or something else; we need not be more specific at this point. The correspondence $\Gamma : X \rightrightarrows X$ describes the feasible set of next period's states $y$, given that today's state is $x$. Define
$$A = \{(x,y) \in X \times X : y \in \Gamma(x)\}$$
The period return function $F : A \to \mathbb{R}$ maps the set of all feasible combinations of today's and tomorrow's state into the reals. So the fundamentals of our analysis are $(X, F, \beta, \Gamma)$. For the neoclassical growth model $F$ and $\beta$ describe preferences, and $X$, $\Gamma$ describe the technology.
We call any sequence of states $\{x_t\}_{t=0}^{\infty}$ a plan. For a given initial condition $x_0$, the set of feasible plans $\Pi(x_0)$ from $x_0$ is defined as
$$\Pi(x_0) = \{\{x_t\}_{t=1}^{\infty} : x_{t+1} \in \Gamma(x_t)\}$$
Hence $\Pi(x_0)$ is the set of sequences that, for a given initial condition, satisfy all the feasibility constraints of the economy. We will denote by $\bar{x}$ a generic element of $\Pi(x_0)$. The two assumptions that we need for the principle of optimality are basically that for any initial condition $x_0$ the social planner (or whoever solves the problem) has at least one feasible plan and that the total return (the total utility, say) from all feasible plans can be evaluated. That's it. More precisely we have
Assumption 1: $\Gamma(x)$ is nonempty for all $x \in X$.
Assumption 2: For all initial conditions $x_0$ and all feasible plans $\bar{x} \in \Pi(x_0)$,
$$\lim_{n\to\infty} \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1})$$
exists (although it may be $+\infty$ or $-\infty$).
2. Define $F^+(x,y) = \max\{0, F(x,y)\}$ and $F^-(x,y) = \max\{0, -F(x,y)\}$.
or both. For example, if $\beta \in (0,1)$ and $F$ is bounded above, then the first condition is satisfied; if $\beta \in (0,1)$ and $F$ is bounded below, then the second condition is satisfied.
For each feasible plan $u_n$ gives the total discounted return (utility) up until period $n$. If assumption 2 is satisfied, then the function $u : \Pi(x_0) \to \bar{\mathbb{R}}$
$$u(\bar{x}) = \lim_{n\to\infty} \sum_{t=0}^{n} \beta^t F(x_t, x_{t+1})$$
is also well-defined, since under assumption 2 the limit exists. The range of $u$ is $\bar{\mathbb{R}}$, the extended real line, i.e. $\bar{\mathbb{R}} = \mathbb{R} \cup \{-\infty, +\infty\}$, since we allowed the limit to be plus or minus infinity. From the definition of $u$ it follows that under assumption 2
$$w(x_0) = \sup_{\bar{x} \in \Pi(x_0)} u(\bar{x})$$
then $v = w$.
I will skip the proof, but try to provide some intuition. The first result states that the supremum function from the sequential problem (which is well-defined under assumptions 1 and 2) solves the functional equation. This result, although nice, is not particularly useful for us. We are interested in solving the sequential problem, and in the last section we made progress in solving the functional equation (not the other way around).
But result 2 is really key. It states a condition under which a solution to the functional equation (which we know how to compute) is a solution to the sequential problem (the solution of which we desire). Note that the functional equation $(FE)$ may (or may not) have several solutions. We haven't made enough assumptions to use the CMT to argue uniqueness. However, only one of these potentially several solutions can satisfy $(5.1)$, since if it does, the theorem tells us that it has to equal the supremum function $w$ (which is necessarily unique). The condition $(5.1)$ is somewhat hard to interpret (and SLP don't even try), but think about the following. We saw in the first lecture that for infinite-dimensional optimization problems like the one in $(SP)$ a transversality condition was often necessary and (even more often) sufficient (jointly with the Euler equation). The transversality condition rules out as suboptimal plans that postpone too much utility into the distant future. There is no equivalent condition for the recursive formulation (as this formulation is basically a two-period formulation: today vs. everything from tomorrow onwards). Condition $(5.1)$ basically requires that the continuation utility from date $n$ onwards, discounted to period $0$, should vanish in the time limit. In other words, this puts an upper limit on the growth rate of continuation utility, which seems to substitute for the TVC. It is not clear to me how to make this intuition more rigorous, though.
A simple but quite famous example shows that the condition $(5.1)$ has some bite. Consider the following consumption problem of an infinitely lived household. The household has initial wealth $x_0 \in X = \mathbb{R}$. He can borrow or lend at a gross interest rate $1 + r = \frac{1}{\beta} > 1$. So the price of a bond that pays off one unit of consumption is $q = \beta$. There are no borrowing constraints, so the constraints are
$$c_t + \beta x_{t+1} \leq x_t, \qquad 0 \leq c_t, \qquad x_0 \text{ given}$$
Since there are no borrowing constraints, the consumer can assure herself infinite utility by borrowing an infinite amount in period 0 and then rolling over the debt by borrowing even more in the future. Such a strategy is called a Ponzi scheme (see the hand-out). Hence the supremum function equals $w(x_0) = +\infty$ for all $x_0 \in X$. Now consider the recursive formulation (we denote by $x$ current period wealth $x_t$, by $y$ next period's wealth, and substitute out for consumption $c_t = x_t - \beta x_{t+1}$, which is OK given monotonicity of preferences).
Obviously the function $w(x) = +\infty$ satisfies this functional equation (just plug in $w$ on the right side; since for all $x$ it is optimal to let $y$ tend to $-\infty$, we obtain $v(x) = +\infty$). This should be the case from the first part of the previous theorem. But the function $v(x) = x$ satisfies the functional equation, too. Using it on the right hand side gives, for an arbitrary $x \in X$,
Note, however, that the second part of the preceding theorem does not apply to $v$, since the sequence $\{x_n\}$ defined by $x_n = \frac{x_0}{\beta^n}$ is a feasible plan from $x_0 > 0$ and
$$\lim_{n\to\infty} \beta^n v(x_n) = \lim_{n\to\infty} \beta^n x_n = x_0 > 0$$
Note however that the second part of the theorem gives only a sufficient condition for a solution $v$ to the functional equation being equal to the supremum function from $(SP)$, not a necessary condition. Also $w$ itself does not satisfy the condition, but is evidently equal to the supremum function. So whenever we can use the CMT (or something equivalent) we still have to be aware of the fact that there may be several solutions to the functional equation, but at most one of them is the function that we look for.
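The two claims about the Ponzi example are easy to verify numerically. The sketch below (my own check, not from the notes) confirms that $v(x) = x$ satisfies the functional equation for any choice of $y$, while $\beta^n v(x_n) \to x_0 > 0$ along the feasible plan $x_n = x_0/\beta^n$, so $(5.1)$ fails:

```python
# Check that v(x) = x is a fixed point of the Ponzi-example functional
# equation: with c = x - beta*y, the right side x - beta*y + beta*v(y)
# equals x for every choice of next-period wealth y.
beta = 0.95
v = lambda x: x

x = 3.0
for y in [-100.0, 0.0, 7.5, 1e6]:
    rhs = (x - beta * y) + beta * v(y)   # current consumption + discounted value
    assert abs(rhs - v(x)) < 1e-9        # RHS = x regardless of y

# Along the feasible plan x_n = x_0 / beta**n (zero consumption every period),
# beta**n * v(x_n) does not vanish: it equals x_0 > 0.
x0, n = 3.0, 50
x_n = x0 / beta**n
print(abs(beta**n * v(x_n) - x0) < 1e-9)  # True: the limit is x0 > 0, not 0
```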
Now we want to establish a similar equivalence between the sequential problem and the recursive problem with respect to the optimal policies/plans. The first observation: solving the functional equation gives us optimal policies $y = g(x)$ (note that $g$ need not be a function, but could be a correspondence). Such an optimal policy induces a feasible plan $\{\hat{x}_{t+1}\}_{t=0}^{\infty}$ in the following fashion: $x_0 = \hat{x}_0$ is the initial condition, $\hat{x}_1 \in g(\hat{x}_0)$, and recursively $\hat{x}_{t+1} \in g(\hat{x}_t)$. The basic question is how a plan constructed from a solution to the functional equation relates to a plan that solves the sequential problem. We have the following theorem.
1. Let $\bar{x} \in \Pi(x_0)$ be a feasible plan that attains the supremum in the sequential problem. Then for all $t \geq 0$
$$w(\bar{x}_t) = F(\bar{x}_t, \bar{x}_{t+1}) + \beta w(\bar{x}_{t+1})$$
2. Let $\hat{x} \in \Pi(x_0)$ be a feasible plan satisfying, for all $t \geq 0$,
$$w(\hat{x}_t) = F(\hat{x}_t, \hat{x}_{t+1}) + \beta w(\hat{x}_{t+1})$$
and additionally
$$\limsup_{t\to\infty} \beta^t w(\hat{x}_t) \leq 0 \tag{5.2}$$
Then $\hat{x}$ attains the supremum in $(SP)$ for the initial condition $x_0$.
What does this result say? The first part says that any optimal plan in the sequential problem, together with the supremum function $w$ as value function, satisfies the functional equation for all $t$. Loosely speaking, any optimal plan from the sequential problem is an optimal policy for the recursive problem (once the value function is the right one).
Again the second part is more important. It says that, for the "right" fixed point of the functional equation $w$, the corresponding policy $g$ generates a plan $\hat{x}$ that solves the sequential problem if it satisfies the additional limit condition. Again we can give this condition a loose interpretation as standing in for a transversality condition. Note that for any plan $\{\hat{x}_t\}$ generated from a policy $g$ associated with a value function $v$ that satisfies $(5.1)$, condition $(5.2)$ is automatically satisfied. From $(5.1)$ we have
$$\lim_{t\to\infty} \beta^t v(x_t) = 0$$
for any feasible $\{x_t\} \in \Pi(x_0)$, all $x_0$. Also, from Theorem 32, $v = w$. So for any plan $\{\hat{x}_t\}$ generated from a policy $g$ associated with $v = w$ we have
$$w(\hat{x}_t) = F(\hat{x}_t, \hat{x}_{t+1}) + \beta w(\hat{x}_{t+1})$$
and since $\lim_{t\to\infty} \beta^t v(\hat{x}_t)$ exists and equals $0$ (since $v$ satisfies $(5.1)$), we have
$$\limsup_{t\to\infty} \beta^t v(\hat{x}_t) = 0$$
and hence $(5.2)$ is satisfied.
and hence (5:2) is satis…ed. But Theorem 33.2 is obviously not redundant as
there may be situations in which Theorem 32.2 does not apply but 33.2 does.
(Footnote: The limit superior of a bounded sequence $\{x_n\}$ is the infimum of the set $V$ of real numbers $v$ such that only a finite number of elements of the sequence strictly exceed $v$. Hence it is the largest cluster point of the sequence $\{x_n\}$.)
Let us look at the following example, a simple modification of the saving problem from before. Now, however, we impose a borrowing constraint of zero:
$$w(x_0) = \max_{\{x_{t+1}\}_{t=0}^{\infty}} \sum_{t=0}^{\infty} \beta^t (x_t - \beta x_{t+1})$$
$$\text{s.t. } 0 \leq x_{t+1} \leq \frac{x_t}{\beta}, \quad x_0 \text{ given}$$
$$v(x) = \max_{0 \leq x' \leq \frac{x}{\beta}} \{x - \beta x' + \beta v(x')\}$$
and we can conclude by Theorem 33.2 that this plan is optimal for the sequential problem. There are tons of other plans for which we can apply the same logic to show that they are optimal, too (which shows that we obviously can't make any claim about uniqueness). To show that condition $(5.2)$ has some bite, consider the plan defined by $\hat{x}_t = \frac{x_0}{\beta^t}$. Obviously this is a feasible plan satisfying
$$w(\hat{x}_t) = F(\hat{x}_t, \hat{x}_{t+1}) + \beta w(\hat{x}_{t+1})$$
but since $w(x) = x$ here, $\beta^t w(\hat{x}_t) = x_0 > 0$ for all $t$, so condition $(5.2)$ fails. Hence Theorem 33.2 does not apply and we can't conclude that $\{\hat{x}_t\}$ is optimal (as in fact this plan is not optimal).
So basically we have a prescription for what to do once we have solved our functional equation: pick the right fixed point (if there is more than one, check the limit condition to find the right one, if possible) and then construct a plan from the policy corresponding to this fixed point. Check the limit condition to make sure that the plan so constructed is indeed optimal for the sequential problem. Done.
Note, however, that so far we don't know anything about the number (unless the CMT applies) or the shape of the fixed points of the functional equation. This is not quite surprising, given that we have put almost no structure onto our economy. By making further assumptions one obtains sharper characterizations of the fixed point(s) of the functional equation and thus, in the light of the preceding theorems, of the solution of the sequential problem.
We will now assume that $F : X \times X \to \mathbb{R}$ is bounded and $\beta \in (0,1)$. We will make the following two assumptions throughout this section:
Assumption 3: $X$ is a convex subset of $\mathbb{R}^L$ and the correspondence $\Gamma : X \rightrightarrows X$ is nonempty, compact-valued and continuous.
Assumption 4: The function $F : A \to \mathbb{R}$ is continuous and bounded, and $\beta \in (0,1)$.
We immediately get that assumptions 1 and 2 are satisfied, and hence the theorems of the previous section apply. Define the policy correspondence connected to any solution to the functional equation as
Here $C(X)$ is the space of bounded continuous functions on $X$ and we use the sup-metric as metric. Then we have the following
$$\lambda y + (1 - \lambda) y' \in \Gamma(\lambda x + (1 - \lambda) x')$$
Again we find that the properties assumed of $F$ extend to the value function.
Theorem 47 Under Assumptions 3-4 and 7-8 the unique fixed point $v$ of the functional equation is strictly concave and the optimal policy is a single-valued continuous function, call it $g$.
This theorem gives us an easy way to derive Euler equations from the recursive formulation of the neoclassical growth model. Remember the functional equation:
$$v(k) = \max_{0 \leq k' \leq f(k)} \{U(f(k) - k') + \beta v(k')\}$$
Taking first order conditions with respect to $k'$ (and ignoring corner solutions) we get
$$U'(f(k) - k') = \beta v'(k')$$
Denote by $k' = g(k)$ the optimal policy. The problem is that we don't know $v'$. But now we can use Benveniste-Scheinkman to obtain
$$v'(k) = U'(f(k) - g(k)) f'(k)$$
Denoting $k = k_t$, $g(k) = k_{t+1}$ and $g(g(k)) = k_{t+2}$, we obtain our usual Euler equation
$$U'(f(k_t) - k_{t+1}) = \beta f'(k_{t+1}) U'(f(k_{t+1}) - k_{t+2})$$
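For the special case of log utility and Cobb-Douglas production with full depreciation (illustrative assumptions of mine, not stated in this passage) the optimal policy is known in closed form, $g(k) = \alpha\beta k^\alpha$, and the Euler equation can be verified numerically:

```python
# Numerical check of the Euler equation under the assumptions U(c) = ln(c)
# and f(k) = k**alpha with full depreciation; in this special case the
# optimal policy is g(k) = alpha * beta * k**alpha, and the Euler equation
# U'(f(k_t) - k_{t+1}) = beta * f'(k_{t+1}) * U'(f(k_{t+1}) - k_{t+2}) holds.
alpha, beta = 0.3, 0.95

f = lambda k: k ** alpha
fprime = lambda k: alpha * k ** (alpha - 1.0)
Uprime = lambda c: 1.0 / c
g = lambda k: alpha * beta * k ** alpha      # closed-form policy

kt = 0.2
kt1, kt2 = g(kt), g(g(kt))
lhs = Uprime(f(kt) - kt1)
rhs = beta * fprime(kt1) * Uprime(f(kt1) - kt2)
print(abs(lhs - rhs) < 1e-9)  # True: the Euler equation holds along g
```

An analogous residual check (with $g$ replaced by a computed policy) is a standard diagnostic for numerical dynamic programming solutions.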