First Exam 18 Dec 2012 Text 1

Consider the function
f(x, y, z) = x³ cos y + z³
under the constraints −π ≤ y ≤ π, x² + z² ≤ 1. Consider the problem of finding the global maximum and minimum points of f under such constraints.
(1 pt) a. Check if the CQ are always satisfied.
(1 pt) b. Write the system of necessary conditions.
(2 pts) c. Find the critical points.
(2 pts) d. Find the global extrema.
SOLUTION.
a. The constraint functions are
g₁(x, y, z) = −π − y,
g₂(x, y, z) = y − π,
g₃(x, y, z) = x² + z² − 1,
and the constraints read gᵢ(x, y, z) ≤ 0 for i = 1, 2, 3.
We check if CQ hold.
When only one constraint is active we need to check that its gradient is nonzero. This is always true for the first two constraints, since they have constant nonzero gradients. For the third one the gradient vanishes only at points where x = z = 0, but at such points the third constraint is not active.
When two constraints are active their gradients must be linearly independent (LI). The first two are linearly dependent (LD), but they are never active together. If g₁ and g₃ are active their gradients are LD iff x = z = 0, but at such points the third constraint cannot be active. We argue similarly if g₂ and g₃ are active.
The three constraints cannot be active together.
b. Note that f is C¹ on ℝ³ (a sum of a polynomial and a polynomial times a trigonometric function), so we can write the KT system at every point of the required region. Since ∇f(x, y, z) = (3x² cos y, −x³ sin y, 3z²), the KT system is

3x² cos y = 2λ₃x
−x³ sin y = −λ₁ + λ₂
3z² = 2λ₃z
λ₁(−π − y) = 0
λ₂(y − π) = 0
λ₃(x² + z² − 1) = 0
−π ≤ y ≤ π
x² + z² ≤ 1
(1)
Moreover, the multipliers are nonnegative for maximum points and nonpositive for minimum points.
c. Call D the region delimited by our constraints. We first find the interior critical points, solving the system

3x² cos y = 0
−x³ sin y = 0
3z² = 0
−π < y < π
x² + z² < 1
(2)

From the third equation we get z = 0. From the first we get either x = 0 or y = π/2 or y = −π/2.
If x = 0 the second equation is satisfied, so all points (0, y, 0) with y ∈ (−π, π) are critical; call this set I₁ := {(0, y, 0) : y ∈ (−π, π)}. At these points f(0, y, 0) = 0.
If y = ±π/2, from the second equation we still get x = 0, so we find no other interior critical points.
Now we look for critical points on the boundary, starting with the cases where only one constraint is active. If only g₁ = 0, so y = −π, the system becomes

−3x² = 0
0 = −λ₁
3z² = 0
y = −π
x² + z² < 1
(3)

It is easily seen that the unique solution is the point (0, −π, 0) with λ₁ = 0. Here f(0, −π, 0) = 0, as for the points of I₁. Now we consider the case where only g₂ = 0, so y = π. In this case the system becomes

−3x² = 0
0 = λ₂
3z² = 0
y = π
x² + z² < 1
(4)

It is easily seen that the unique solution is the point (0, π, 0) with λ₂ = 0. Here f(0, π, 0) = 0, as for the points of I₁.
Now we consider the case where only g₃ = 0, so x² + z² = 1. In this case the system becomes

3x² cos y = 2λ₃x
−x³ sin y = 0
3z² = 2λ₃z
−π < y < π
x² + z² = 1
(5)
From the second equation we get either x = 0 or y = 0.
If x = 0 the first equation is also satisfied. From the last equation we get z = ±1, and from the third we then get λ₃ = 3/2 when z = 1 and λ₃ = −3/2 when z = −1. So we get the critical points (0, y, 1) and (0, y, −1) for y ∈ (−π, π). In such points we have
f(0, y, 1) = 1,  f(0, y, −1) = −1.
If y = 0, from the first equation we get either x = 0 or x = 2λ₃/3. The first case is included in what was said above. In the second case we have, from the third equation, either z = 0 or z = 2λ₃/3.
If z = 0 we have, from the last equation, x = ±1, so we get the critical points (1, 0, 0) and (−1, 0, 0). In such points we have
f(1, 0, 0) = 1,  f(−1, 0, 0) = −1.
If x = z = 2λ₃/3, the last equation gives 2(2λ₃/3)² = 1, hence λ₃ = ±3/(2√2), and we get the critical points (1/√2, 0, 1/√2) and (−1/√2, 0, −1/√2). In such points we have
f(1/√2, 0, 1/√2) = 1/√2,  f(−1/√2, 0, −1/√2) = −1/√2.
Summing up, the critical points found with exactly one active constraint are
I₂ = {(0, −π, 0), (0, π, 0), (1, 0, 0), (−1, 0, 0), (1/√2, 0, 1/√2), (−1/√2, 0, −1/√2)} ∪ {(0, y, 1), (0, y, −1) : y ∈ (−π, π)}.
Now we find constrained critical points where two constraints are active. If g₁ and g₃ are active the system becomes

−3x² = 2λ₃x
0 = −λ₁
3z² = 2λ₃z
y = −π
x² + z² = 1
(6)

from which we get, arguing as for system (5), the critical points (0, −π, 1), (0, −π, −1), (1, −π, 0), (−1, −π, 0), (1/√2, −π, −1/√2), (−1/√2, −π, 1/√2). In such points we have
f(0, −π, 1) = 1,  f(0, −π, −1) = −1,
f(1, −π, 0) = −1,  f(−1, −π, 0) = 1,
f(1/√2, −π, −1/√2) = −1/√2,  f(−1/√2, −π, 1/√2) = 1/√2.
If g₂ and g₃ are active the system becomes

−3x² = 2λ₃x
0 = λ₂
3z² = 2λ₃z
y = π
x² + z² = 1
(7)

from which we get, arguing as above, the critical points (0, π, 1), (0, π, −1), (1, π, 0), (−1, π, 0), (1/√2, π, −1/√2), (−1/√2, π, 1/√2). In such points we have
f(0, π, 1) = 1,  f(0, π, −1) = −1,
f(1, π, 0) = −1,  f(−1, π, 0) = 1,
f(1/√2, π, −1/√2) = −1/√2,  f(−1/√2, π, 1/√2) = 1/√2.
Summing up, the critical points with two active constraints are
I₃ = {(0, −π, 1), (0, −π, −1), (1, −π, 0), (−1, −π, 0), (1/√2, −π, −1/√2), (−1/√2, −π, 1/√2), (0, π, 1), (0, π, −1), (1, π, 0), (−1, π, 0), (1/√2, π, −1/√2), (−1/√2, π, 1/√2)}.
d. The region D is compact: closed since it contains its boundary, bounded since it is contained e.g. in a ball of radius 5 centered at the origin. So maxima and minima exist by the Weierstrass theorem, and they must be attained at critical points. Comparing the values of f at all the critical points found above, all points where f = −1 (i.e. (0, y, −1) with y ∈ [−π, π], (−1, 0, 0), and (1, ±π, 0)) are global minimum points, while all points where f = 1 (i.e. (0, y, 1) with y ∈ [−π, π], (1, 0, 0), and (−1, ±π, 0)) are global maximum points.
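As a sanity check (not part of the original solution), the extrema can be verified numerically: sampling the feasible region shows that f never leaves [−1, 1], while the candidate extremum points attain ±1 exactly. A minimal Python sketch:

```python
import numpy as np

# f inferred from the gradient given in part b: f(x, y, z) = x^3 cos(y) + z^3
def f(x, y, z):
    return x**3 * np.cos(y) + z**3

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 200_000)
z = rng.uniform(-1, 1, 200_000)
keep = x**2 + z**2 <= 1            # keep only points of the unit disc
x, z = x[keep], z[keep]
y = rng.uniform(-np.pi, np.pi, x.size)

vals = f(x, y, z)
# No sampled feasible value leaves [-1, 1] ...
print(vals.min() >= -1 and vals.max() <= 1)   # True
# ... and candidate max/min points attain the bounds exactly.
print(f(0.0, 0.5, 1.0), f(-1.0, 0.0, 0.0))    # 1.0 -1.0
```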
Consider the function
F(K, L) = ln(1 + K^α L^(1−α))
over the domain ℝ²₊. Consider the problem of maximizing F over the set
D = {(K, L) ∈ ℝ² : K ≥ 0, L ≥ 0, K + L = w},
for w > 0. Let
V(α, w) = max_{(K,L)∈D} F(K, L).
SOLUTION.
(2 pts) a. In this part we do not have to find the maximum point: we apply the envelope theorem directly. First of all we observe that the maximum must be on the part of D given by
E := {(K, L) ∈ ℝ² : K > 0, L > 0, K + L = w},
since at the extrema of the segment D we have F = 0, while F > 0 on E. Calling
L(K, L, λ; α, w) := F(K, L, α) − λ h(K, L, w)
the Lagrangian, where
h(K, L, w) = K + L − w,
the envelope theorem says that
∂V/∂α (α, w) = ∂L/∂α (K*(α, w), L*(α, w), λ*(α, w); α, w) = ∂F/∂α (K*(α, w), L*(α, w), α).
Now
∂F/∂α (K, L, α) = K^α L^(1−α) ln(K/L) / (1 + K^α L^(1−α)),
and so
∂V/∂α (α, w) = (K*(α, w))^α (L*(α, w))^(1−α) ln(K*(α, w)/L*(α, w)) / (1 + (K*(α, w))^α (L*(α, w))^(1−α)).
Similarly,
∂V/∂w (α, w) = ∂L/∂w (K*(α, w), L*(α, w), λ*(α, w); α, w) = −λ*(α, w) ∂h/∂w (K*(α, w), L*(α, w), w) = λ*(α, w).
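The envelope formula ∂V/∂w = λ* can be checked numerically. The sketch below (using the parameter names α and w adopted above, with a fixed α = 0.3) compares a central finite difference of V in w with the multiplier λ* computed from the first-order condition ∂F/∂K = λ at the optimum:

```python
import numpy as np
from scipy.optimize import minimize_scalar

alpha = 0.3

def solve(w):
    # maximize F(K, L) = ln(1 + K^alpha L^(1-alpha)) over K + L = w, K, L > 0
    obj = lambda K: -np.log(1 + K**alpha * (w - K)**(1 - alpha))
    res = minimize_scalar(obj, bounds=(1e-9, w - 1e-9), method="bounded",
                          options={"xatol": 1e-12})
    return -res.fun, res.x          # V(alpha, w), K*(alpha, w)

w = 2.0
V, K = solve(w)
L = w - K
# lambda* = dF/dK at the optimum (first-order condition)
lam = alpha * K**(alpha - 1) * L**(1 - alpha) / (1 + K**alpha * L**(1 - alpha))
h = 1e-5
dVdw = (solve(w + h)[0] - solve(w - h)[0]) / (2 * h)
print(abs(dVdw - lam) < 1e-5)  # True
```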
(2 pts) b. To check the assumptions of the envelope theorem we need to use the IFT. Since all functions are C² in ℝ²₊₊ and since the maximum point is located in that set, we only need to check that the Hessian (in the variables (K, L, λ)) of the Lagrangian above (sometimes called the bordered Hessian) is invertible.
7. (6 total points) Consider the following controlled dynamical system in continuous time:
x′(t) = f(x(t)) − c(t);  x(0) = x₀ ∈ ℝ,
with f(x) = eˣ − 4x.
SOLUTION.
(4 pts) a. To find the equilibrium points and the phase diagram (for c ≡ 0) we need the graph of the function f(x) = eˣ − 4x. Since f′(x) = eˣ − 4, we have f′(x) > 0 iff x > ln 4, so x = ln 4 is a global minimum point of f. Since f(ln 4) = 4 − 4 ln 4 < 0 and lim_{x→−∞} f(x) = lim_{x→+∞} f(x) = +∞, there are exactly two zeros of f. We call them x₁ and x₂. Since f(0) = 1 > 0, f(1) = e − 4 < 0 and f(3) = e³ − 12 > 0, we have x₁ ∈ (0, 1) and x₂ ∈ (ln 4, 3).
From the phase diagram we can easily see that x₁ is stable while x₂ is unstable, and B(x₁) = (−∞, x₂).
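The two zeros x₁ and x₂ can be located numerically with the same bracketings; a short sketch that also confirms the sign pattern behind the phase diagram:

```python
import numpy as np
from scipy.optimize import brentq

# Equilibria of x'(t) = f(x(t)) for f(x) = exp(x) - 4x, bracketed as above.
f = lambda x: np.exp(x) - 4 * x

x1 = brentq(f, 0, 1)          # stable equilibrium, in (0, 1)
x2 = brentq(f, np.log(4), 3)  # unstable equilibrium, in (ln 4, 3)

# Sign pattern of the phase diagram: f > 0, then f < 0, then f > 0.
print(f(x1 - 0.1) > 0, f((x1 + x2) / 2) < 0, f(x2 + 0.1) > 0)  # True True True
```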
(2 pts) b. If c(·) ≡ 1 then the dynamical system is again an autonomous ODE, with dynamics given by the function g(x) = f(x) − 1 = eˣ − 4x − 1. The phase diagram for this system is similar to the previous one. Again g has a unique global minimum point at x = ln 4 and its value there is negative. So there are two equilibrium points, x̃₁ = 0 < x₁ and x̃₂ > x₂. As above, x̃₁ is stable, x̃₂ is unstable and B(x̃₁) = (−∞, x̃₂).
This fact implies that, if we start from a positive datum x₀, then the solution remains positive (and so admissible) as long as it exists. It is possible to prove that the solutions starting from x₀ ∈ (−∞, x̃₂] are global while the others are not.
Consider the problem of maximizing
∑_{s=0}^{+∞} 2^(−s) ln(2c(s))
subject to
x(t + 1) = 3x(t) − c(t),  x(0) = x₀,
and under the constraints x(t) ≥ 0 for all t ≥ 0 and c(t) ≥ 0 for all t ≥ 0. In particular:
(a) (1.5 points) State carefully the problem as an optimal control problem (in particular, recognize whether the horizon is finite or infinite, and describe the state space, the control space and the set of admissible controls). Write the Bellman equation associated to the problem.
(b) (1.5 points) Write the necessary conditions of the Pontryagin Maximum Principle for this problem.
(c) (2 points) Assume that, for suitable values of the parameters A and B, the function w(x) = A + B ln x is a solution of the Bellman equation. Then find the candidate optimal feedback map and compute the candidate optimal strategy and the associated candidate optimal trajectory.
(d) (1 point) Prove that w is the value function of the optimal control problem and that the state-control couple found in the previous point is optimal.
(e) (1 point) Formulate a continuous time dynamic optimization problem equivalent to the one above. Then for such a problem write the Hamiltonians and the HJB equation. What form of the value function do you expect?
SOLUTION.
(a) The problem is infinite horizon; the state space and control space are X = [0, +∞) and C = (0, +∞) (since the logarithm is not defined at c = 0). Moreover C(0, x) = (0, 3x]. The Bellman equation is
z(x) = sup_{c∈(0,3x]} { ln(2c) + (1/2) z(3x − c) }.
(b) The necessary conditions are the following. If (c*(·), x*(·)) is an optimal couple then there exists a function p(·) : ℕ → ℝ such that, for every t ≥ 0,
p(t) = 3p(t + 1).
Moreover, for every t ≥ 0,
c*(t) ∈ arg max_{c∈(0, 3x*(t)]} { 2^(−t) ln(2c) + p(t + 1)(3x*(t) − c) }.
The transversality conditions are in general not necessary. It is good in any case to indicate them. In this case they are
lim_{t→+∞} p(t) = 0,  or  lim_{t→+∞} x(t)p(t) = 0.
(c) If z(x) is a solution of the Bellman equation, the candidate optimal feedback map is
G(x) = arg max_{c∈(0,3x]} { ln(2c) + (1/2) z(3x − c) }.
With z(x) = A + B ln x, set h(c) := ln(2c) + (1/2)(A + B ln(3x − c)) for c ∈ (0, 3x). Then
h′(c) = 1/c − (1/2) · B/(3x − c),
h″(c) = −1/c² − (1/2) · B/(3x − c)².
Clearly h″(c) < 0 for each c ∈ (0, 3x), so h is strictly concave and every critical point is a maximum point. Moreover
h′(c) = 0  ⟺  1/c = (1/2) · B/(3x − c)  ⟺  Bc = 2(3x − c)  ⟺  c = 6x/(B + 2).
So the candidate optimal feedback map is
G(x) = 6x/(B + 2)
and the closed-loop dynamics is
x(t + 1) = 3x(t) − 6x(t)/(B + 2) = (3B/(B + 2)) x(t).
Hence the candidate optimal trajectory and strategy are
x*(t) = x₀ (3B/(B + 2))^t,
c*(t) = G(x*(t)) = (6/(B + 2)) x₀ (3B/(B + 2))^t.
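The formula G(x) = 6x/(B + 2) can be cross-checked by maximizing h numerically for arbitrary B > 0 and x > 0 (the constant A drops out of the argmax); a minimal sketch:

```python
import numpy as np
from scipy.optimize import minimize_scalar

B, x = 2.0, 5.0   # arbitrary test values with B > 0, x > 0
# h(c) up to the additive constant A/2, which does not affect the maximizer
h = lambda c: -(np.log(2 * c) + 0.5 * B * np.log(3 * x - c))
res = minimize_scalar(h, bounds=(1e-9, 3 * x - 1e-9), method="bounded",
                      options={"xatol": 1e-10})
print(abs(res.x - 6 * x / (B + 2)) < 1e-5)  # True
```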
(d) To prove that w(x) = A + B ln x is the value function it is enough to check, beyond the Bellman equation (which holds by assumption), the boundary condition along the candidate trajectory:
lim_{t→+∞} (1/2^t) w(x*(t)) = lim_{t→+∞} (1/2^t) [A + B ln x₀ + Bt ln(3B/(B + 2))] = 0,
so that w is the value function of the optimal control problem. This implies that the candidate optimal couple found above is optimal.
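For completeness, matching the coefficients of ln x and the constants in the Bellman equation gives B = 2 and A = 2 ln(9/2) (these values are derived here, not stated explicitly above). The fixed-point property can then be verified numerically at a few states:

```python
import numpy as np
from scipy.optimize import minimize_scalar

A, B = 2 * np.log(4.5), 2.0          # w(x) = A + B ln x with B = 2, A = 2 ln(9/2)
w = lambda x: A + B * np.log(x)

for x in (0.5, 1.0, 7.0):
    rhs = lambda c: -(np.log(2 * c) + 0.5 * w(3 * x - c))
    res = minimize_scalar(rhs, bounds=(1e-9, 3 * x - 1e-9), method="bounded",
                          options={"xatol": 1e-12})
    # sup_c { ln(2c) + (1/2) w(3x - c) } should equal w(x)
    assert abs(-res.fun - w(x)) < 1e-8
print("Bellman equation verified at sampled states")
```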
(e) An equivalent problem can be the following. Maximize
J(c(·)) = ∫₀^{+∞} e^(−λs) ln(2c(s)) ds
(with λ = ln 2) subject to
x′(t) = 2x(t) − c(t),  x(0) = x₀,
and under the constraints x(t) ≥ 0 for all t ≥ 0, c(t) ≥ 0 for all t ≥ 0.
Here
H₀^CV(x, p; c) = ln(2c) + p(2x − c),
H₀^MAX(x, p) = sup_{c≥0} { ln(2c) + p(2x − c) } = 2xp + ln 2 + sup_{c≥0} { ln c − pc }.
If p > 0 the sup is attained at c = 1/p, so
H₀^MAX(x, p) = 2xp + ln 2 − ln p − 1.
If p ≤ 0 then
H₀^MAX(x, p) = +∞.
The HJB equation is
(ln 2) v(x) = H₀^MAX(x, v′(x)) = 2x v′(x) + ln 2 − ln(v′(x)) − 1.
We expect a value function of the same form as in the analogous discrete time case, i.e. v(x) = a + b ln x for suitable constants a, b.
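The closed-form expression of the maximized Hamiltonian for p > 0 can be checked numerically at an arbitrary test point:

```python
import numpy as np
from scipy.optimize import minimize_scalar

x, p = 1.5, 0.7   # arbitrary test point with p > 0
H_cv = lambda c: -(np.log(2 * c) + p * (2 * x - c))    # minus H0_CV(x, p; c)
res = minimize_scalar(H_cv, bounds=(1e-9, 50.0), method="bounded")
H_max = 2 * x * p + np.log(2) - np.log(p) - 1          # closed form, max at c = 1/p
print(abs(-res.fun - H_max) < 1e-8)  # True
```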