
Simon Fraser University, Department of Economics

Econ 798 - Introduction to Mathematical Economics


Prof. Alex Karaivanov
Lecture Notes 6

1 Introduction to Economic Dynamics


1.1 Mathematical preliminaries
In this section I introduce some mathematical concepts and results which will be used in the
applications of the theory later.

1.1.1 Fixed point theorems


Often in economics we face the problem of finding an equilibrium (steady state). Mathemat-
ically this frequently translates into finding a solution of a functional equation of the form
Tx = x (i.e., the "image" of x is also x), where T is some "mapping" (a function, or something
more general) from a set X into itself. If such an x exists we call it a fixed point of the mapping T.

Theorem 61 (Brouwer's fixed point theorem)

Let A be a non-empty, compact, convex subset of R^n and f be a continuous function
from A to A. Then f has a fixed point, i.e., ∃x ∈ A s.t. f(x) = x.

"Proof" for the case n = 1: Show first that A must be a closed interval [a, b]. Define
g(x) ≡ f(x) − x. Since f maps [a, b] into [a, b], we have g(a) = f(a) − a ≥ 0 and
g(b) = f(b) − b ≤ 0. Since g is a continuous function, by the intermediate value theorem
there must exist a point c ∈ [a, b] at which g(c) = 0. It is a fixed point of f(x).
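The bisection idea behind the n = 1 "proof" can be turned into a numerical procedure. A minimal sketch, assuming a particular continuous self-map (cos on [0, 1]) purely for illustration:

```python
# Bisection on g(x) = f(x) - x locates a fixed point of f on [a, b],
# because g(a) >= 0 and g(b) <= 0 whenever f maps [a, b] into itself.
import math

def fixed_point_bisect(f, a, b, tol=1e-10):
    assert f(a) - a >= 0 and f(b) - b <= 0, "f must map [a, b] into itself"
    while b - a > tol:
        m = 0.5 * (a + b)
        if f(m) - m >= 0:
            a = m          # the sign change (hence a fixed point) lies in [m, b]
        else:
            b = m          # the sign change lies in [a, m]
    return 0.5 * (a + b)

# Assumed example: f = cos maps [0, 1] into itself (cos(1) ~ 0.54).
x = fixed_point_bisect(math.cos, 0.0, 1.0)
print(abs(math.cos(x) - x) < 1e-8)
```

The same bisection works for any continuous f: [a, b] → [a, b], which is exactly the content of the one-dimensional case.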

Remarks:

* closed is important - e.g., take f(x) = (x + 1)/2 on (−1, 1)
* convex is important - e.g., think of rotating a donut
* continuous is important - obvious (think of a function whose graph does not cross the 45-degree line on [0, 1])
* from A to A is important - e.g., f(x) = x + 2 on [0, 1]
* bounded is important - take A = R and f(x) = x + 1

Unfortunately, the above theorem is applicable only to functions (single-valued mappings).
In order to generalize it to the more general mappings that we will need, we first have to
define some concepts.
A correspondence below refers to a mapping from a set X to a set Y which for any single
argument in X can return a set in Y as output (e.g., map a real number into an interval).

Definition (continuity of a correspondence)

(a) A correspondence F: X → Y is upper hemicontinuous (uhc) at x ∈ X if F(x) is
non-empty and if for every sequence {x_n} in X, every sequence {y_n} with y_n ∈ F(x_n)
and every y ∈ Y, we have that x_n → x and y_n → y implies y ∈ F(x).
(b) A correspondence F: X → Y is lower hemicontinuous (lhc) at x if F(x) is
non-empty and if for every y ∈ F(x) and every sequence {x_n} s.t. x_n → x, there exist
a subsequence {x_{n_k}} of {x_n} and points y_k ∈ F(x_{n_k}) such that y_k → y.
(c) A correspondence is continuous if it is both lower and upper hemicontinuous.

* Intuitively: uhc means that the set F(x) can "expand with a jump" at x but cannot
"shrink with a jump" (draw an example). Lhc is the opposite.
Theorem 62 (Kakutani's fixed point theorem)

Let S be a non-empty, compact, convex subset of R^n. Let F be an upper hemicontinuous
correspondence from S into 2^S (2^S is called the power set of S and denotes
the set of all subsets of S) such that ∀x ∈ S the set F(x) is non-empty, closed and
convex. Then there exists x* ∈ S s.t. x* ∈ F(x*) - a fixed point of F.

Counterexample: take

  f(x) = 0.6 if 0 ≤ x < 0.5;  {0.6, 0.4} if x = 0.5;  0.4 if 0.5 < x ≤ 1

which satisfies all conditions apart from f(x) being convex at x = 0.5.
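One can confirm mechanically that this correspondence has no fixed point. A small check, with the correspondence coded as returning Python sets (an illustrative encoding, not from the notes):

```python
# The step correspondence of the counterexample: the only candidate for a
# fixed point is x = 0.5, but F(0.5) = {0.6, 0.4} does not contain 0.5
# (the image fails to be convex there, so Kakutani does not apply).
def F(x):
    if x < 0.5:
        return {0.6}
    if x == 0.5:
        return {0.6, 0.4}
    return {0.4}

candidates = [i / 1000 for i in range(1001)]   # grid on [0, 1]
assert all(x not in F(x) for x in candidates)
print("no fixed point found on the grid")
```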

1.1.2 The Contraction mapping theorem

The main result derived in this section is a very general fixed point theorem which can be
applied to basically any type of mapping (operator) satisfying certain conditions. Before we
state the result we need to define some concepts.

Definition (Contraction mapping, CM)

Let (S, ρ) be a metric space with metric ρ and let T: S → S map S (some set whose
elements may be numbers, functions, etc.) into itself. T is called a contraction mapping
(with modulus β) if for some β ∈ (0, 1)

  ρ(Tx, Ty) ≤ β ρ(x, y)  for all x, y ∈ S, x ≠ y

In the above definition Tx denotes the image of x after applying the mapping T.

Example:
Take S = [a, b] ⊂ R with ρ(x, y) = |x − y|. Then T: S → S is a CM if for some β ∈ (0, 1)

  |Tx − Ty| / |x − y| ≤ β < 1  for all x, y ∈ S, x ≠ y

i.e., for example, if T is a function whose slope is uniformly less than 1 in absolute value.
A point x satisfying Tx = x is called a fixed point of T.

Theorem 63 (Contraction mapping theorem)

If (S, ρ) is a complete metric space (such that every Cauchy sequence in it converges
to a point in it) and T: S → S is a contraction mapping with modulus β,
then:
(a) T has a unique fixed point in S, denoted v
(b) For any v0 ∈ S, ρ(T^n v0, v) ≤ β^n ρ(v0, v), n = 1, 2, ..., where T^n x means
applying T n times on x, i.e., T^{n+1} x = T(T^n x).

Note that (b) bounds the distance between the n-th approximation and the fixed point.
However, if v is not known this bound is unknown too and thus useless. Instead we can show
that for any v0 ∈ S

  ρ(T^n v0, v) ≤ [1/(1 − β)] ρ(T^n v0, T^{n+1} v0)
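Both the convergence claim and the computable bound are easy to see in action. A minimal sketch, taking T = cos on [0, 1] as an assumed example of a contraction (it maps [0, 1] into [cos 1, 1] and |T'| = |sin x| ≤ sin(1) < 1 there):

```python
# Iterate a contraction and check that at every step the computable bound
# rho(T^n v0, v) <= rho(T^n v0, T^{n+1} v0) / (1 - beta) holds.
import math

beta = math.sin(1.0)          # modulus of T = cos on [0, 1]

vstar = 0.2                   # first find the fixed point by brute iteration
for _ in range(500):
    vstar = math.cos(vstar)   # converges to the unique fixed point of cos

v = 0.2                       # now re-run and verify the bound along the way
for n in range(25):
    v_next = math.cos(v)
    bound = abs(v_next - v) / (1.0 - beta)   # computable without knowing vstar
    assert abs(v - vstar) <= bound
    v = v_next

print(round(vstar, 4))        # the fixed point of cos, ~0.7391
```

The bound uses only the last two iterates, which is what makes it a practical stopping criterion.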
Corollary 1

Let (S, ρ) be a complete metric space and T: S → S be a CM with fixed point
v ∈ S. If S′ is a closed subset of S and T(S′) ⊆ S′, then v ∈ S′.

The above statement tells us that if T maps a closed subset of S into itself, the fixed
point of T must lie in that subset. This result is very useful for some game-theoretic
applications.
Corollary 2 (N-stage contraction theorem)

Let (S, ρ) be a complete metric space, T: S → S, and suppose that for some integer
N ≥ 1, T^N: S → S is a CM with modulus β, where T^N means applying T N times.
Then:
(a) T has exactly one fixed point in S, v
(b) For any v0 ∈ S, ρ(T^{kN} v0, v) ≤ β^k ρ(v0, v), k = 0, 1, ...

The above theorems do not tell us, however, how we can find those contraction mappings,
i.e., how to check whether an operator T is a CM. The following result provides sufficient
conditions.

* Example: exp(−x) is not a CM on R_+ (its slope −exp(−x) has absolute value approaching
1 as x → 0, so no single modulus β < 1 works), but exp(−exp(−x)) is. Why?
|d/dx exp(−exp(−x))| = |exp(−(x + exp(−x)))| ≤ e^{−1} < 1, since x + exp(−x) ≥ 1 for
all x ≥ 0.
Theorem 64 (Blackwell's conditions)

Let X ⊆ R^l and let B(X) be a space of bounded functions f: X → R with the sup
norm (i.e., ||f|| = sup_{t∈X} |f(t)|). Let T: B(X) → B(X) be an operator satisfying:
(i) monotonicity: if f, g ∈ B(X) and f(x) ≤ g(x) ∀x ∈ X, then (Tf)(x) ≤
(Tg)(x), ∀x ∈ X;
(ii) discounting: ∃β ∈ (0, 1) s.t. (T(f + a))(x) ≤ (Tf)(x) + βa, ∀f ∈ B(X),
∀a ≥ 0, ∀x ∈ X, where (f + a) is the function defined by (f + a)(x) = f(x) + a.
Then T is a contraction mapping with modulus β.
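Blackwell's two conditions can be checked numerically for a given operator. The sketch below uses a Bellman-type operator (Tf)(x) = max_{0≤y≤x} [√(x − y) + βf(y)] on a grid; the square-root return and β = 0.9 are illustrative assumptions, not from the notes:

```python
# Numeric check of Blackwell's conditions for an assumed Bellman-type operator
# (T f)(x) = max over y in [0, x] of sqrt(x - y) + beta * f(y), on a grid.
import math

beta = 0.9
grid = [i / 100 for i in range(101)]          # X = {0, 0.01, ..., 1}

def T(f):
    """Apply the operator to f, given as a dict grid-point -> value."""
    return {x: max(math.sqrt(x - y) + beta * f[y] for y in grid if y <= x)
            for x in grid}

f = {x: 0.0 for x in grid}
g = {x: 0.3 for x in grid}                    # f <= g pointwise
a = 0.5
f_plus_a = {x: f[x] + a for x in grid}

Tf, Tg, Tfa = T(f), T(g), T(f_plus_a)
assert all(Tf[x] <= Tg[x] + 1e-12 for x in grid)              # (i) monotonicity
assert all(Tfa[x] <= Tf[x] + beta * a + 1e-12 for x in grid)  # (ii) discounting
print("Blackwell's conditions hold on the grid")
```

For this operator discounting holds with equality, since the constant a simply passes through the max multiplied by β.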

1.2 Dynamic programming
1.2.1 Preliminaries

We will be interested in problems of the following type (these arise all the time in macroeconomics):

  sup_{ {x_{t+1}}_{t=0}^∞ }  Σ_{t=0}^∞ β^t F(x_t, x_{t+1})

  s.t. x_{t+1} ∈ Γ(x_t), t = 0, 1, ...   (SP)

  x_0 ∈ X, given.

We will call the above problem "the Sequence Problem (SP)" as it involves picking a sequence
{x_{t+1}} to maximize the discounted sum Σ_{t=0}^∞ β^t F(x_t, x_{t+1}) subject to the
constraint that x_{t+1} is in some set Γ(x_t) depending on x_t. Here t is usually interpreted
as time.

Notice that we have an infinite number of variables, so we cannot just set up the system
of first order conditions and solve for the optimal x_t. We call Σ_{t=0}^∞ β^t F(x_t, x_{t+1})
the objective function and Γ(x_t) the constraint set.
It turns out that a possible way to solve the above problem is to transform it into a different
one which allows us to apply the theory about contraction mappings. Denote by v(x) the
value corresponding to the supremum (least upper bound) of the objective from time t onwards,
i.e.,

  v(x_0) = sup { F(x_0, x_1) + βF(x_1, x_2) + β²F(x_2, x_3) + ... }  over {x_{t+1}}_{t=0}^∞ with x_{t+1} ∈ Γ(x_t)

  v(x_1) = sup { F(x_1, x_2) + βF(x_2, x_3) + β²F(x_3, x_4) + ... }  over {x_{t+1}}_{t=1}^∞ with x_{t+1} ∈ Γ(x_t)

  ...etc.

Note that, continuing in this recursive way, the time subscripts become irrelevant, as all we
care about is what the current-period value x_t is. That is, the maximization problem is
exactly the same at any period (e.g., see the Example below: a cake still has to be eaten over
infinitely many remaining periods by the exact same consumer) - the only thing that changes
is the initial value. Hence, intuitively, we can re-write the original (SP) problem as a
"generic" problem with current value x in which we search for the function v that satisfies:

  v(x) = sup_{y ∈ Γ(x)} { F(x, y) + βv(y) },  ∀x ∈ X   (FE)

The above is an equation in the unknown function v and is called the Functional Equation
(FE), while a v which satisfies it is called the value function. Dynamic programming deals
with solving dynamic optimization problems using FEs.
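By Blackwell's conditions (Theorem 64), the operator on the right-hand side of the (FE) is a contraction, so iterating it from any initial guess converges to the value function. A minimal sketch for an assumed cake-eating specification F(x, y) = √(x − y), Γ(x) = [0, x], β = 0.9, for which one can verify analytically that v(x) = √(x/(1 − β²)) and the optimal choice is y = β²x:

```python
# Successive approximation of the value function ("value function iteration"):
# repeatedly apply (Tv)(x) = max over y <= x of sqrt(x - y) + beta * v(y).
# The return sqrt(.) and beta = 0.9 are illustrative assumptions.
import math

beta = 0.9
grid = [i / 200 for i in range(201)]          # cake sizes 0, 0.005, ..., 1

def bellman(v):
    """One application of T on the grid; also records the maximizing y."""
    Tv, pol = [], []
    for i, x in enumerate(grid):
        vals = [math.sqrt(x - grid[j]) + beta * v[j] for j in range(i + 1)]
        j = max(range(i + 1), key=vals.__getitem__)
        Tv.append(vals[j])
        pol.append(grid[j])
    return Tv, pol

v = [0.0] * len(grid)                          # any bounded starting guess
for _ in range(200):
    v, policy = bellman(v)

# Theory predicts v(1) = sqrt(1/(1 - 0.81)) ~ 2.294 and policy y = 0.81 at x = 1.
print(round(v[-1], 2), policy[-1])
```

The convergence rate is β per iteration, exactly as part (b) of the contraction mapping theorem promises.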
We will study the relationship between the solutions to the (SP) and the (FE) and develop
methods to analyze the latter. Let us start with some important definitions.

Definition (Graph)

Let Γ: X → Y be a correspondence and define the set

  A = {(x, y) : y ∈ Γ(x)}

A is called the graph of Γ.

* Example: take x ∈ [0, 1] and let Γ(x) = [0, x]. Then the graph of Γ is the area below the
45-degree line between x = 0 and x = 1 in a two-dimensional graph with x on the horizontal
axis and values from Γ(x) on the vertical axis.

Now we are ready to state the main result of the section:

Theorem 65 (Theorem of the maximum)

Let X ⊆ R^l and Y ⊆ R^m, let f: X × Y → R be a continuous function and let
Γ: X → Y be a compact-valued and continuous correspondence. Then the function
h: X → R defined as h(x) = max_{y ∈ Γ(x)} f(x, y) is continuous, and the correspondence
G: X → Y defined as G(x) = {y ∈ Γ(x) : f(x, y) = h(x)} is non-empty,
compact-valued and uhc.

Notice that the above theorem provides conditions under which a value function is
continuous and under which the set G(x) of maximizers of f is non-empty and
compact-valued. The following results are related to the theorem of the maximum.
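A concrete instance makes the theorem easy to check. Assume (purely for illustration) f(x, y) = xy − y² and Γ(x) = [0, x]; then the maximizer is y = x/2, so h(x) = x²/4 is continuous and G(x) = {x/2} is single-valued, as f is strictly concave in y:

```python
# Theorem of the maximum, concretely: f(x, y) = x*y - y^2 over Gamma(x) = [0, x].
# The grid maximization should recover h(x) = x^2/4 and the maximizer y = x/2.
ygrid = [i / 1000 for i in range(1001)]

def h_and_g(x):
    vals = [(x * y - y * y, y) for y in ygrid if y <= x]
    return max(vals)                  # (h(x), maximizing y), compared by value

for x in [0.1, 0.4, 0.8, 1.0]:
    h, g = h_and_g(x)
    assert abs(h - x * x / 4) < 1e-3  # h(x) = x^2/4
    assert abs(g - x / 2) < 1e-3      # G(x) = {x/2}
print("h(x) = x^2/4 and G(x) = {x/2} verified on the grid")
```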

Corollary

Suppose that in addition to the conditions in Theorem 65 we have that Γ is convex-valued
and f is strictly concave in y. Then G is a single-valued and continuous function, g(x).

1.2.2 The Principle of Optimality

Armed with the theory from the previous section, we are finally ready to go back to the
question of what is the relationship between the solutions to the problems (SP) and (FE).
The general idea is that:
(i) the solution to the (FE) evaluated at x_0 gives the value of the supremum of (SP), and
(ii) a sequence {x_{t+1}}_{t=0}^∞ attains the supremum in (SP) if and only if it satisfies:

  v(x_t) = F(x_t, x_{t+1}) + βv(x_{t+1}),  for t = 0, 1, ...

These ideas, first stated by Bellman, are known as the Principle of Optimality. In the
discussion below we will study the conditions under which the Principle of Optimality holds.
Let us start by defining the notation and terminology which will be used in this section:

Notation:

- X is the set of all possible values for the so-called state variable x and is called the
state space.
- Γ: X → X is a correspondence describing the feasibility constraints; Γ(x) is the feasible
set.
- A = {(x, y) ∈ X × X : y ∈ Γ(x)} is the graph of Γ.
- F: A → R is called the return function.
- β ≥ 0 is called the discount factor.

Thus X, Γ, F and β are the givens in our problem. First, we need to establish conditions
under which (SP) is well-defined, i.e., the feasible set is non-empty and the objective function
is well-defined for all points in the feasible set. Call a sequence {x_t}_{t=0}^∞ in X a plan.
Given x_0 ∈ X, let

  Π(x_0) = { {x_t}_{t=0}^∞ : x_{t+1} ∈ Γ(x_t), t = 0, 1, ... }

be the set of feasible plans from x_0. Let x̃ = (x̃_0, x̃_1, ...) denote a typical element of
Π(x_0). The following assumption ensures that Π(x_0) is non-empty ∀x_0 ∈ X.

Assumption A1: Γ(x) is non-empty for all x ∈ X.

Next, we need to ensure that the objective function is well-defined. We make the following:

Assumption A2: For all x_0 ∈ X and all x̃ ∈ Π(x_0), lim_{n→∞} Σ_{t=0}^n β^t F(x̃_t, x̃_{t+1}) exists.

There are many ways to satisfy A2; the simplest is to assume that F is bounded (for this
it may help to assume/know that X is bounded) and β ∈ (0, 1). For each n = 0, 1, ... define
u_n: Π(x_0) → R as:

  u_n(x̃) ≡ Σ_{t=0}^n β^t F(x̃_t, x̃_{t+1})

that is, u_n is the partial sum from a feasible plan x̃. Using A2 we can then also define:

  u(x̃) ≡ lim_{n→∞} u_n(x̃)
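The boundedness route to A2 can be illustrated numerically: if |F| ≤ M, the tail of the discounted sum beyond period n is at most β^{n+1}M/(1 − β), so the partial sums u_n converge. A sketch with an assumed bounded return F(x̃_t, x̃_{t+1}) = cos(t) along some plan:

```python
# Partial sums of a discounted bounded return converge within the geometric
# tail bound beta^(n+1) * M / (1 - beta). The return cos(t) is a hypothetical
# bounded sequence of per-period returns (M = 1), not from the notes.
import math

beta, M = 0.9, 1.0
u = [sum(beta ** t * math.cos(t) for t in range(n + 1)) for n in range(400)]

for n in range(100):
    tail_bound = beta ** (n + 1) * M / (1 - beta)
    # u[399] is effectively the limit u, since beta^400 is negligible
    assert abs(u[399] - u[n]) <= tail_bound + 1e-8
print("u_n converges: tail bounded by beta^(n+1) * M / (1 - beta)")
```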

Since under A1-A2 the objective function is well-defined and the set of feasible plans is
non-empty, we can define the supremum function v*: X → R by:

  v*(x_0) ≡ sup_{x̃ ∈ Π(x_0)} u(x̃)

i.e., v*(x_0) is the value of the supremum in (SP).

We are interested in the connection between the supremum function v* as defined above
and the solutions (which will be called value functions) v to problem (FE). Note that under
assumptions A1-A2 the function v*(x_0) is uniquely defined for any x_0 (think why!) although
v may not be.

Definition ("satisfies the FE")

Suppose |v*(x_0)| < ∞. We say that v* "satisfies the (FE)" if the following conditions hold:
(i) ∀y ∈ Γ(x_0):

  v*(x_0) ≥ F(x_0, y) + βv*(y)

(ii) ∀ε > 0, ∃y ∈ Γ(x_0) so that

  v*(x_0) ≤ F(x_0, y) + βv*(y) + ε

We are now ready for our first result on the relationship between v* in (SP) as defined above
and the function v in (FE).
