
EC2203 QUANTITATIVE METHODS IN ECONOMICS II (1997-1998)

Lecture 9. Constrained optimisation

9.1. Introduction
Problems of maximisation or minimisation subject to constraints are central to microeconomics,
and also arise in macroeconomics and econometrics. The general problem can be stated as
follows:

(CO) Choose (x1, . . ., xn) to maximise y = f(x1, . . ., xn) subject to (x1, . . ., xn)∈C.

The set C is a subset of the domain of the function, consisting of all n-dimensional vectors of the
form (x1, . . ., xn) that satisfy one or more constraints. The function f is known as the objective
function, and C is a subset of E^n known as the constraint set. Note that the case where the
objective function is to be minimised (e.g. a cost minimisation problem) is covered by this
formulation, since minimisation of f is equivalent to maximisation of -f.

The form the constraint set takes depends on the problem at hand. For example, if the problem is
one of consumer utility maximisation, then the constraint set would be the set of quantity vectors of
the various goods that are non-negative and satisfy the budget constraint.

As in the case of the unconstrained optimisation problem (U) in Lecture 8, a solution to (CO)
may not exist. For example, consider the problem of maximising f(x) = x subject to x∈C, where
C = {x| x ≥ 10}. There is no value of x < ∞ that solves this problem, as x (and therefore f) can be
made arbitrarily large without leaving the constraint set. There is a pair of conditions that are
jointly sufficient for a solution to (CO) to exist. These are that (i) f is continuous; (ii) C is a
compact (closed and bounded) set. However, these are not necessary for a problem to have a
solution. Many economic problems, when set up in mathematical format, do not satisfy these
conditions, but nevertheless have a solution. It is often quite easy to check existence once the
particular problem is written down. Therefore from now on, we assume that (CO) has a solution.

Given the existence of a solution, the analysis of the problem (CO) proceeds by simplifying the
description of the set C. In this course, you will be examined on the simple case, constrained
optimisation with equality constraints. You will already have met examples of this in micro- and
macroeconomics. You will not be examined on the more general case, constrained optimisation
with inequality constraints, as this topic is usually reserved for postgraduate courses. However, I
will give you some notes on this in due course (probably after Christmas), in case you encounter
problems of this type in your research work next year. Unconstrained optimisation, constrained
optimisation with equality constraints, and constrained optimisation with inequality constraints
are covered in detail in Chapters 9, 11, 12 and 21 of Chiang, Fundamental Methods of
Mathematical Economics. In my opinion, Chiang’s exposition is a pedagogical ‘masterpiece’,
and there is no better introductory treatment of this topic in any other textbook. I strongly
recommend that you read him if you possibly can.

9.2. Constrained optimisation with equality constraints: first-order (necessary) conditions


Here, the idea is that the set of vectors (x1, . . ., xn) in C can be written as the set of vectors
satisfying the implicit relationship g(x1, . . ., xn) = 0, or more formally, C = {(x1, . . ., xn) |
g(x1, . . ., xn) = 0}. Often, g = 0 is called the equality constraint, and g is known as the constraint function.
So the problem (CO) becomes:

(CE) Choose (x1, . . ., xn) to maximise y = f(x1, . . ., xn) subject to g(x1, . . ., xn) = 0.

Examples:
(i). Suppose the problem is to maximise -(x1^2 + x2^2) subject to x1 + x2 = 1. Then f = -(x1^2 + x2^2), and
g = 1 - x1 - x2, or equivalently g = x1 + x2 - 1.
(ii). Let x1 and x2 be a consumer’s consumption levels of goods 1 and 2 respectively, and suppose
that consumer preferences are represented by the utility function u(x1, x2) = β1 loge x1 + β2 loge x2.
Also suppose that the budget constraint is p1x1 + p2x2 = m, where pi is the price of good i, and m
is income. Then the consumer allocation problem is to choose x1 and x2 to maximise utility
subject to the budget constraint. This is an equality constrained optimisation problem, with f =
β1 loge x1 + β2 loge x2, and g = m - p1x1 - p2x2.
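
Before deriving the solution method, it may help to see what a problem like (CE) asks for
computationally. The following is a minimal Python sketch (not part of the original notes) that
solves example (i) numerically with scipy's SLSQP routine; the variable names and the starting
point x0 are my own choices.

import numpy as np
from scipy.optimize import minimize

f = lambda x: -(x[0]**2 + x[1]**2)      # objective of example (i), to be maximised
g = lambda x: 1.0 - x[0] - x[1]         # equality constraint, g(x) = 0

# maximising f is equivalent to minimising -f
res = minimize(lambda x: -f(x), x0=np.array([0.0, 1.0]),
               method="SLSQP", constraints=[{"type": "eq", "fun": g}])
print(res.x)    # approximately [0.5, 0.5], which the Lagrangean method below confirms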

To solve an equality constrained optimisation problem, we begin by defining the Lagrangean
function for the problem (CE) as

L(x1, . . ., xn, λ) = f(x1, . . ., xn) + λg(x1, . . ., xn)

where λ is known as the Lagrange multiplier.

Examples:
In example (i) above, L(x1, x2, λ) = -(x1^2 + x2^2) + λ(1 - x1 - x2). In example (ii) above,
L(x1, x2, λ) = β1 loge x1 + β2 loge x2 + λ(m - p1x1 - p2x2).

The Lagrangean function gives us a neat way of writing down the necessary conditions for a
solution to (CE).

Theorem 1: Given the Lagrangean function L(x1, . . ., xn, λ) = f(x1, . . ., xn) + λg(x1, . . ., xn) for
problem (CE), any solution values x1*, . . ., xn* for the problem must, together with some value
λ* of the multiplier, satisfy the following first-order conditions:
∂L/∂xi = ∂f/∂xi + λ∂g/∂xi = 0, i = 1, . . ., n (1)

∂L/∂λ = 0 (2)

This result gives us a means of finding x1*, . . ., xn*, as well as the value of the Lagrange
multiplier λ* at the optimum, which has special significance (see below).

Examples:
(i). Consider example (i) above again. We had L(x1, x2, λ) = -(x1^2 + x2^2) + λ(1 - x1 - x2), so
conditions (1) and (2) of Theorem 1 become
∂L/∂x1 = -2x1 - λ = 0 (1)
∂L/∂x2 = -2x2 - λ = 0 (2)
∂L/∂λ = 1 - x1 - x2 = 0 (3)
This is a system of three equations in three unknowns (x1, x2, and λ). We can solve the system to
get x1*, x2* and λ* as follows. First, solve equations (1) and (2) for λ to get
-2x1 = λ and -2x2 = λ
Clearly, these equations imply that x1 = x2. Substituting this result into (3) gives
1 - 2x2 = 0 ⇒ x2* = 1/2, and so x1* = 1/2.
Then λ* = -2x1* = -2x2* = -1.
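
If you want to check this algebra, the following sympy sketch (my own, not part of the lecture)
sets up the same Lagrangean and solves the three first-order conditions symbolically.

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam')
L = -(x1**2 + x2**2) + lam*(1 - x1 - x2)       # Lagrangean for example (i)

foc = [sp.diff(L, v) for v in (x1, x2, lam)]   # conditions (1)-(3)
print(sp.solve(foc, [x1, x2, lam], dict=True))
# [{lam: -1, x1: 1/2, x2: 1/2}]  -- agrees with x1* = x2* = 1/2 and lam* = -1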

(ii). Consider example (ii) above. We had L(x1, x2, λ) = β1 loge x1 + β2 loge x2 + λ(m - p1x1 - p2x2),
so conditions (1) and (2) of Theorem 1 become
∂L/∂x1 = β1/x1 - λp1 = 0 (1)
∂L/∂x2 = β2/x2 - λp2 = 0 (2)
∂L/∂λ = m - p1x1 - p2x2 = 0 (3)
Again, this is a system of three equations in three unknowns, which can be solved for x1*, x2*
and λ* as follows. First, rearrange (1) and (2) to get
β1 = λp1x1 and β2 = λp2x2
Adding these together gives us
β1 + β2 = λ(p1x1 + p2x2) = λm (4)
where the second equality in (4) follows from condition (3). Solving for λ* from (4) gives λ* =
(β1 + β2)/m. Substituting this result into (1) gives
x1* = β1m / [(β1 + β2)p1]
and substituting the result into (2) gives
x2* = β2m / [(β1 + β2)p2]
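
The same kind of symbolic check works for example (ii). The sketch below (again my own, with
β1, β2, p1, p2 and m kept as symbols) reproduces the demand functions and λ* just derived.

import sympy as sp

x1, x2, lam = sp.symbols('x1 x2 lam', positive=True)
b1, b2, p1, p2, m = sp.symbols('beta1 beta2 p1 p2 m', positive=True)

L = b1*sp.log(x1) + b2*sp.log(x2) + lam*(m - p1*x1 - p2*x2)
foc = [sp.diff(L, v) for v in (x1, x2, lam)]
sol = sp.solve(foc, [x1, x2, lam], dict=True)[0]
print(sp.simplify(sol[x1]))    # beta1*m/(p1*(beta1 + beta2))
print(sp.simplify(sol[x2]))    # beta2*m/(p2*(beta1 + beta2))
print(sp.simplify(sol[lam]))   # (beta1 + beta2)/m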

9.3. Second-order (sufficient) conditions


As in the case of the unconstrained optimisation problem (U) in Lecture 8, first-order necessary
conditions for a solution to (CE) above may not be sufficient. Sufficiency conditions are usually
stated as sign conditions on the leading principal minors of the bordered Hessian of the
Lagrangean function, evaluated at the point of interest. These conditions become complicated
when there are multiple constraints, and they can be difficult to apply. In practice, mathematical economists
often rely on considerations of concavity, or other considerations, to ensure that any stationary
points they find are global maxima or minima. In what follows, we will focus on problems with
two or three choice variables, and one or two constraints, as these are the ones you will most
often come across. However, I will give you the general rule for problems with n choice
variables and m constraints, and I expect you to know it for your exam (you should give it as
part of your answer to any exam questions on this topic). Chiang (Chapter 12) goes into far
greater detail, and you should refer to him for more complicated cases.

The second-order sufficient conditions for a problem with two choice variables and one
constraint are as follows. (Note again that we only need to discuss the case of maximisation,
since a minimisation problem can be converted to a maximisation problem simply by multiplying
the objective function by -1). Suppose the problem is

max_{x1, x2, λ} L(x1, x2, λ) = f(x1, x2) + λg(x1, x2)

By Theorem 1 above, the first-order conditions for a maximum are


∂L/∂x1 = f1 + λg1 = 0
∂L/∂x2 = f2 + λg2 = 0
∂L/∂λ = g(x1, x2) = 0
Differentiating each of these equations with respect to x1, x2 and λ, we get the second-order
partials, which can be arranged in matrix form as follows:

     [ ∂²L/∂λ²      ∂²L/∂λ∂x1     ∂²L/∂λ∂x2  ]     [ 0     g1            g2         ]
H =  [ ∂²L/∂x1∂λ    ∂²L/∂x1²      ∂²L/∂x1∂x2 ]  =  [ g1    f11 + λg11    f12 + λg12 ]
     [ ∂²L/∂x2∂λ    ∂²L/∂x2∂x1    ∂²L/∂x2²   ]     [ g2    f21 + λg21    f22 + λg22 ]
This matrix of second-order partial derivatives of the Lagrangean function is called the bordered
Hessian. The second-order sufficient condition for a maximum in problems with two choice
variables and one constraint is that the determinant of the above bordered Hessian, | H |, be
strictly positive. As an exercise, you should check the second-order conditions for problem (ii)
studied above. We will work through problem (i) for practice.

Example (i) revisited:


Consider example (i) above again. We had L(x1, x2, λ) = -(x1^2 + x2^2) + λ(1 - x1 - x2), so conditions
(1) and (2) of Theorem 1 were
∂L/∂x1 = -2x1 - λ = 0 (1)
∂L/∂x2 = -2x2 - λ = 0 (2)
∂L/∂λ = 1 - x1 - x2 = 0 (3)
The critical values of x1, x2, and λ for the Lagrangean function are x1* = x2* = 1/2 and λ* = -1.
The bordered Hessian in this case is
     [ ∂²L/∂λ²      ∂²L/∂λ∂x1     ∂²L/∂λ∂x2  ]     [  0    -1    -1 ]
H =  [ ∂²L/∂x1∂λ    ∂²L/∂x1²      ∂²L/∂x1∂x2 ]  =  [ -1    -2     0 ]
     [ ∂²L/∂x2∂λ    ∂²L/∂x2∂x1    ∂²L/∂x2²   ]     [ -1     0    -2 ]
and | H | = [(0)(-2)(-2) + (-1)(0)(-1) + (-1)(0)(-1)] - [(-1)(-2)(-1) + (-1)(-1)(-2) + (0)(0)(0)] = 4 > 0,
so the second-order conditions for a maximum are satisfied.
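
The determinant above is easy to verify numerically; a short numpy check (my own sketch, not
part of the original notes) is:

import numpy as np

H = np.array([[ 0.0, -1.0, -1.0],
              [-1.0, -2.0,  0.0],
              [-1.0,  0.0, -2.0]])    # bordered Hessian at x1* = x2* = 1/2
print(np.linalg.det(H))               # approximately 4.0 > 0, so a maximum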

Now consider a problem with three choice variables and one constraint. The problem can be
expressed in terms of the Lagrangean function as follows:

max_{x1, x2, x3, λ} L(x1, x2, x3, λ) = f(x1, x2, x3) + λg(x1, x2, x3)

By Theorem 1 above, the first-order conditions for a maximum are


∂L/∂x1 = f1 + λg1 = 0
∂L/∂x2 = f2 + λg2 = 0
∂L/∂x3 = f3 + λg3 = 0
∂L/∂λ = g(x1, x2, x3) = 0
Differentiating each of these equations with respect to x1, x2, x3 and λ, and putting the 16 resulting
second-order partials into a matrix, we get the following bordered Hessian:

     [ 0     g1            g2            g3          ]
H =  [ g1    f11 + λg11    f12 + λg12    f13 + λg13  ]
     [ g2    f21 + λg21    f22 + λg22    f23 + λg23  ]
     [ g3    f31 + λg31    f32 + λg32    f33 + λg33  ]
The second-order sufficient condition for a maximum in a problem with three choice variables
and one constraint is that the third principal minor of H be strictly positive, and that the
fourth principal minor (which is just the determinant | H |) be strictly negative.

Example: Find the extreme point of


y = f(x1, x2, x3) = x1^5 x2^10 x3^15
subject to the constraint that
x1 + x2 + x3 = 6
and confirm that it is a maximum. (Hint: the determinant of the bordered Hessian of the
Lagrangean function for this problem is negative).
Solution: The problem can be made easier to solve by taking the natural logarithm of the
objective function. Because the logarithm is a strictly increasing transformation, this does not
change the maximising point. The Lagrangean is
L(x1, x2, x3, λ) = 5logex1 + 10logex2 + 15logex3 + λ(6 - x1 - x2 - x3)
First-order conditions for a maximum are
∂L/∂x1 = 5/x1 - λ = 0 (1)
∂L/∂x2 = 10/x2 - λ = 0 (2)
∂L/∂x3 = 15/x3 - λ = 0 (3)
∂L/∂λ = 6 - x1 - x2 - x3 = 0 (4)
From (1), we get that x1 = 5/λ (5)
From (2) we get that x2 = 10/λ (6)
From (3) we get that x3 = 15/λ (7)
Substituting these results into (4) we get
6 - 5/λ - 10/λ - 15/λ = 0 or
6 - 30/λ = 0 ⇒ λ* = 5
Substituting this result into (5), (6) and (7) gives x1* = 1, x2* = 2, x3* = 3. The value of the
original objective function at this point is (1)^5(2)^10(3)^15 = 14,693,280,768. The bordered Hessian is
 0 −1 −1 −1 
 −1 −1 0 0 
H =  
 −1 0 −1 / 2 0 
 
 −1 0 0 −1 / 3
We are told that | H | < 0, so we only have to confirm that the third principal minor is positive.
The third principal minor is
     [  0    -1     -1  ]
det  [ -1    -5      0  ]  =  [0 + 0 + 0] - [-5 - 5/2] = 15/2 > 0
     [ -1     0    -5/2 ]
Since this is strictly positive, the extreme point is a maximum.
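
The whole example can be checked in a few lines of sympy (a sketch of my own, not required for
the exam): solve the first-order conditions, build the bordered Hessian at the solution, and
evaluate the third and fourth principal minors.

import sympy as sp

x1, x2, x3, lam = sp.symbols('x1 x2 x3 lam', positive=True)
L = 5*sp.log(x1) + 10*sp.log(x2) + 15*sp.log(x3) + lam*(6 - x1 - x2 - x3)

vars_ = (lam, x1, x2, x3)                  # put lam first so the Hessian is bordered
foc = [sp.diff(L, v) for v in vars_]
sol = sp.solve(foc, list(vars_), dict=True)[0]
print(sol)                                 # {lam: 5, x1: 1, x2: 2, x3: 3}

H = sp.hessian(L, vars_).subs(sol)         # bordered Hessian at the optimum
print(H[:3, :3].det())                     # third principal minor: 15/2 > 0
print(H.det())                             # fourth principal minor: -25 < 0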

Finally, consider a general problem with three choice variables and two constraints. Letting g and
h denote the two constraint functions, the problem can be expressed in terms of the Lagrangean
function as follows:

max_{x1, x2, x3, λ1, λ2} L(x1, x2, x3, λ1, λ2) = f(x1, x2, x3) + λ1g(x1, x2, x3) + λ2h(x1, x2, x3)

The first-order conditions for a maximum are


∂L/∂x1 = f1 + λ1g1 + λ2h1 = 0
∂L/∂x2 = f2 + λ1g2 + λ2h2 = 0
∂L/∂x3 = f3 + λ1g3 + λ2h3 = 0
∂L/∂λ1 = g(x1, x2, x3) = 0
∂L/∂λ2 = h(x1, x2, x3) = 0
Differentiating each of these equations with respect to x1, x2, x3, λ1 and λ2, and putting the 25
resulting second-order partials into a matrix, we get the following bordered Hessian:

     [ 0     0     g1     g2     g3  ]
     [ 0     0     h1     h2     h3  ]
H =  [ g1    h1    L11    L12    L13 ]
     [ g2    h2    L21    L22    L23 ]
     [ g3    h3    L31    L32    L33 ]

(For brevity, the notation Lij ≡ ∂²L/∂xi∂xj is used above). The second-order sufficient condition
for a maximum in a problem with three choice variables and two constraints is that the
determinant of H, | H |, be strictly negative. Note that the determinant of H in this case is the
fifth principal minor of H.

GENERAL RULE: PROBLEMS WITH n CHOICE VARIABLES AND m CONSTRAINTS


The general rule, when the maximisation problem involves n choice variables and m
constraints (m < n), is that the leading principal minors of the bordered Hessian of order 2m+1,
2m+2, . . ., m+n must alternate in sign, with the minor of order 2m+1 taking the sign (-1)^(m+1).

You should verify that, in each of the examples discussed above, the second-order sufficient
conditions are in accordance with this general rule.
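
A compact way to carry out this verification is to code the general rule directly. The sketch below
(my own; the function name satisfies_max_conditions is not standard) takes a bordered Hessian
in which the m border rows and columns come first, and checks the required sign pattern.

import numpy as np

def satisfies_max_conditions(H_bar, m):
    """H_bar: (m+n) x (m+n) bordered Hessian with the m border rows/columns first."""
    size = H_bar.shape[0]
    sign = (-1) ** (m + 1)                    # required sign of the minor of order 2m+1
    for k in range(2 * m + 1, size + 1):
        minor = np.linalg.det(H_bar[:k, :k])  # leading principal minor of order k
        if sign * minor <= 0:
            return False
        sign = -sign                          # successive minors must alternate in sign
    return True

# Example (i) again: n = 2 and m = 1, so only the minor of order 3 is checked.
H1 = np.array([[0, -1, -1], [-1, -2, 0], [-1, 0, -2]], dtype=float)
print(satisfies_max_conditions(H1, m=1))      # True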

9.4. Interpretation of Lagrange multipliers


By using a famous result called ‘the Envelope Theorem’ (which we do not have time to study in
this course), it can be shown that each λi* (ie. each Lagrange multiplier evaluated at the optimum)
measures the increase in the maximised value of the objective function f made possible by a small
relaxation of the corresponding constraint. For example, in the case of the constrained utility
maximisation problem faced by the consumer in microeconomic theory, λ* measures the increase
in utility made possible by giving the consumer an additional unit of money (ie. by relaxing the
budget constraint slightly).
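
A quick numerical illustration of this interpretation (my own sketch), using example (ii) with the
assumed values β1 = β2 = 1, p1 = p2 = 1 and m = 10, so that λ* = (β1 + β2)/m = 0.2: the increase
in maximised utility per extra unit of income is approximately λ*.

import numpy as np

beta1, beta2, p1, p2 = 1.0, 1.0, 1.0, 1.0

def max_utility(m):
    # demand functions derived in section 9.2
    x1 = beta1 * m / ((beta1 + beta2) * p1)
    x2 = beta2 * m / ((beta1 + beta2) * p2)
    return beta1 * np.log(x1) + beta2 * np.log(x2)

m, eps = 10.0, 0.01
lam_star = (beta1 + beta2) / m
print((max_utility(m + eps) - max_utility(m)) / eps)   # approximately 0.1999
print(lam_star)                                        # 0.2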

9.5. What you must do before the second in-class exam next week (Thursday, 18th
December)
Please attempt the constrained optimisation problems on the attached assignment sheet for
Lecture 9. Learn the procedure for solving problems of this specific type (ie. take the natural
logarithm of the objective function, form the Lagrangean, use the first-order conditions to solve
for x1, x2, x3 in terms of λ, etc.). You will face a very similar problem in your exam. Since we
will not have time to go over these problems in a seminar, handwritten solutions are attached.

(End of Lecture 9)

Assignment for Lecture 9. Constrained optimisation

Question 1
Find the extreme point of
y = f(x1, x2, x3) = x1^3 x2^2 x3
subject to the constraint that
x1 + x2 + x3 = 2
and confirm that it is a maximum. (Hint: the determinant of the bordered Hessian of the
Lagrangean function for this problem is negative).

Question 2
Find the extreme point of
y = f(x1, x2, x3) = x1^4 x2^8 x3^12
subject to the constraint that
x1 + x2 + x3 = 6
and confirm that it is a maximum. (Hint: the determinant of the bordered Hessian of the
Lagrangean function for this problem is negative).

Question 3
Find the extreme point of
y = f(x1, x2, x3) = x1^9 x2^6 x3^3
subject to the constraint that
x1 + x2 + x3 = 6
and confirm that it is a maximum. (Hint: the determinant of the bordered Hessian of the
Lagrangean function for this problem is negative).
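
If you want to check your hand calculations before the exam, a rough numerical cross-check with
scipy (a sketch of mine, shown here for Question 1; it is not a substitute for the Lagrangean
working you must show) is:

import numpy as np
from scipy.optimize import minimize

def neg_log_f(x):
    # log of x1^3 * x2^2 * x3, negated so that minimisation gives the maximum
    return -(3*np.log(x[0]) + 2*np.log(x[1]) + np.log(x[2]))

budget = {"type": "eq", "fun": lambda x: 2.0 - x[0] - x[1] - x[2]}
res = minimize(neg_log_f, x0=np.array([0.5, 0.5, 1.0]),
               method="SLSQP", constraints=[budget],
               bounds=[(1e-6, None)] * 3)
print(res.x)    # should agree with the x1*, x2*, x3* you obtain by hand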
