
Portfolio Optimization Models:

Formulations and (Robust) Numerical Solutions


CO372, Winter 2025

Henry Wolkowicz

Dept. of Combinatorics and Optimization


University of Waterloo

Last updated: 22:55, Tuesday 7th January, 2025


Contents

Preface/Syllabus
  0.1 References and Related Books
    0.1.1 Finance
    0.1.2 Optimization Background
  0.2 Outline of Material Covered; Marking Scheme; Office Hours
    0.2.1 Outline
    0.2.2 Marking
    0.2.3 Office Hours

1 Introduction
  1.1 The (Pareto) Efficient Frontier Problem
  1.2 Types of (Continuous) Optimization Problems
    1.2.1 Unconstrained Nonlinear Minimization
    1.2.2 Constrained Continuous Minimization
    1.2.3 Special Cases
    1.2.4 ***Tues. Jan. 9; end Lecture 1 *********
    1.2.5 Gradients, Jacobians, Hessians
  1.3 First Target Investment Portfolio Problem
    1.3.1 Estimating Variance
    1.3.2 References

Index

Bibliography

Preface/Syllabus

Euler (1707-1783) wrote:


“nothing at all takes place in the universe in which some rule of maximum or minimum does not appear.”
(see, e.g., [4, p. 68])

Mathematical models play an important role in approximating and solving many real-life problems. This
is particularly true for optimization models in financial decision making, from asset allocation to risk
management. This area continues to advance rapidly in efficiency and robustness.
Suppose that we have n financial instruments and we want to find the amounts xi, i = 1, . . . , n, of each to purchase.
These problems in finance typically have two objectives:

• maximize the return,

• minimize the risk.

There are often additional constraints on these variables. This course covers the modelling and solution
techniques for such decisions.
A short summary of this course, from the UWFlow page: We cover computational optimization methodologies
underlying portfolio problems in finance. We include: background on computational linear algebra;
determining derivatives; quadratic and general nonlinear optimization. We cover the efficient frontier
problem and applications of optimization in finance such as volatility surface determination and global
minimization of value-at-risk.
Throughout the text we include results as exercises. Some of these are straightforward, while others are
hard and often come from theorems in research papers. These notes and the homework assignments (problem sets) are
available via LEARN.

Request:
It is important to attend class and to take notes during class, as the solutions to assignments and many
other details are not included elsewhere. Please refer back to these notes on LEARN, as they are being
changed/updated/expanded regularly. Any and all comments about them are greatly appreciated.

Acknowledgement:
These notes arise from the lectures for the course CO372, Portfolio Optimization Models, at the University of
Waterloo. The material in these notes has benefited greatly from the notes and books of many other lecturers
and colleagues. In particular, I acknowledge the notes of Professors Stephen Vavasis and Thomas Coleman.
Their work is at times included verbatim and at times modified according to this author’s preferences. Other
material has benefited from the references provided in Section 0.1.


0.1 References and Related Books


0.1.1 Finance
1. Portfolio Optimization, Michael J. Best, CRC Press, Mar. 9, 2010, [1].

2. Optimization Methods in Finance (Mathematics, Finance and Risk), 1st Edition by Gerard Cornuejols,
Reha Tütüncü, [3].

3. Portfolio Optimization, online lecture notes by Prof. Anna Nagurney.

4. Multi-Period Trading via Convex Optimization, online paper by Stephen Boyd, Enzo Busseti,
Steven Diamond, Ronald N. Kahn, Kwangmoo Koh, Peter Nystrup, and Jan Speth.

0.1.2 Optimization Background


1. Boyd, Vandenberghe: Convex Optimization [2], online at stanford.edu/~boyd/cvxbook/

2. Peressini, Sullivan, Uhl: The Mathematics of Nonlinear Programming [9]

Some Related Software


1. cvxr.com/cvx/, CVX: Matlab Software for Disciplined Convex Programming

0.2 Outline of Material Covered; Marking Scheme; Office Hours


0.2.1 Outline
Lectures at UofW start Monday Jan. 6, 2025.
W25 CO372 is held Tues. and Thurs. from 1:00PM to 2:20PM in MC4059.

Jan 7-9     Risk and return
            Efficient frontier; opt. models; differentiation
            Target investment portfolio problem; estimating variance
Jan 14-16   Linear algebra review
            Solving linear systems; factorizations (LU, QR, SVD, Cholesky)
            Symmetric positive semidefinite matrices; condition numbers
Jan 21-23   Basic optimization
            Quadratic function/program minimization; convexity
Jan 28-30   Efficient frontier
            Portfolio optimization: minimum variance, maximum expected return, parametric
Feb 4-6     Mean-standard deviation space; Sharpe point of the efficient frontier;
            capital asset pricing market line
Feb 11-13   Inequality constraints
            MIDTERM on Thurs. Feb. 13 (in ?????; see Odyssey)
Feb 17-21   Reading week; NO LECTURES
Feb 25-27   Quadratic programming
Mar 4-6     Quadratic programming
Mar 11-13   Parametric programming
Mar 18-20   Nonlinear optimization
Mar 25-27   Factors
Apr 1-3     Data mining in financial markets
Lectures at UofW end Friday Apr. 4, 2025.

0.2.2 Marking
Assignments: total 25%. There will be FIVE problem sets/homework assignments throughout the term.
The set with the lowest grade will be dropped from the final course grade.
Suggested due dates are 10 days after the handout of each assignment.
We follow universal design principles and so allow extensions on each assignment.
(Late assignments, i.e., after the extended due date, are given a mark of zero.)
A sixth problem set will be handed out, but it is not to be handed in.
Midterm: total 25%. In class, Thurs. Feb. 13, 2025.
The mark for a missed midterm will be added to the weight of the final exam.
One 8.5-by-11 sheet of paper with handwritten notes is allowed.
Final: total 50%. Date TBA.
One 8.5-by-11 sheet of paper with handwritten notes is allowed.
Students are expected to write up problem sets by themselves. They must not hand in homework that represents
somebody else’s ideas entirely. Students should do the coding for programming questions by themselves;
no program code should be shared. Late (past the extended due date) or missed assignments are counted as zero, regardless of the reason.

0.2.3 Office Hours


• Instructor Henry Wolkowicz: Friday 1:00-2:00PM in MC6312

• TAs: Nathan Benedetto Proenca and Woosuk Jung (location ???; office hours ???)
Chapter 1

Introduction

We illustrate the use of portfolio optimization models, tools, and techniques to assist in complex decision-
making. These models also serve as building blocks for many other system-wide models. They can be used
to aid decision-making involving:
• decision-makers in organizations;

• alternative and conflicting criteria;

• constraints on resources;

• impact of decisions;

• results from increasing risk and uncertainty;

• importance of dynamics and responses to evolving events.


Modern finance uses many sophisticated tools from mathematics, and in particular from optimization.
This began with the work of Markowitz on portfolio selection in the early 1950s, see e.g., [5–7]. Markowitz
won the 1990 Nobel prize in Economics.¹ Option pricing formulas were further developed by Black,
Scholes, and Merton in the 1960s and 1970s; Scholes and Merton won the 1997 Nobel prize in Economics.
The main problem of portfolio optimization deals with two conflicting objectives:

• maximizing the return,

• minimizing the risk,

i.e., given a set of n financial instruments, find the amounts xi, with Σ_i xi = 1, of each to buy so as to
optimally balance the rate of return and the risk. That these two diverse objectives can be treated together
was important and revolutionary.
Example 1.0.1. Some optimization problems in finance:
• volatility surface problem

given recent history of the stock, predict the variance (uncertainty)

• computing hedging sensitivities

• value at risk (minimize near worst case losses; enhance: robustness, reliability, stability)

• index tracking

• efficient frontier (balance portfolio risk and return)


¹ This is one of our main themes; see also one of the main references for this course, the textbook [1].


1.1 The (Pareto) Efficient Frontier Problem


Suppose that we have n risky assets and we want to balance the risk and return in our investment. Let
x = (xi) ∈ Rn be the vector for buying and shorting the assets. Our portfolio x therefore satisfies
\[ \sum_{i=1}^{n} x_i = 1. \qquad \text{(budget constraint)} \]

Suppose that the expected returns (random variables) are given in the vector r = (ri ) ∈ Rn . Then, to
obtain an efficient portfolio, for given desired return r̄ or given desired risk R̄, we could solve either of two
problems: to minimize the risk given a fixed minimum expected return, or maximize return given a fixed
maximum risk, respectively,

\[ \min_x \{\mathrm{risk}(x) : E(x) \ge \bar r\}, \qquad \max_x \{E(x) : \mathrm{risk}(x) \le \bar R\}. \tag{1.1.1} \]

We can now plot the values of expected return versus risk. We then get the efficient frontier, EF; see
Figure 1.1.²
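To make the frontier computation concrete, here is a minimal MATLAB sketch (added for illustration; it is not part of the original notes). It traces the frontier by solving the left problem in (1.1.1) with quadprog for a sweep of target returns, assuming risk is the quadratic form x^T Qx of Remark 1.1.1 and that a covariance estimate Q and an expected-return vector rbar are already available.

% Sketch: trace the efficient frontier by solving, for a sweep of targets,
%   min x'*Q*x   s.t.   sum(x) = 1,   rbar'*x >= target.
% Q (n-by-n) and rbar (n-by-1) are assumed given.
n       = numel(rbar);
targets = linspace(min(rbar), max(rbar), 50);
risks   = zeros(size(targets));
opts    = optimoptions('quadprog', 'Display', 'off');
for k = 1:numel(targets)
    % quadprog solves: min 0.5*x'*H*x + f'*x  s.t.  A*x <= b, Aeq*x = beq
    H   = 2*Q;         f   = zeros(n, 1);
    A   = -rbar';      b   = -targets(k);   % rbar'*x >= target
    Aeq = ones(1, n);  beq = 1;             % budget constraint
    x   = quadprog(H, f, A, b, Aeq, beq, [], [], [], opts);
    risks(k) = sqrt(x' * Q * x);            % portfolio standard deviation
end
plot(risks, targets); xlabel('risk (std. dev.)'); ylabel('expected return');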

Remark 1.1.1 (efficient frontier). Below, we will see how a Lagrange multiplier argument can be used to
explain why the two problems in (1.1.1) result in the same figure for the efficient frontier. (You can assume
that risk is given by a positive definite quadratic form (homogeneous quadratic function) xT Hx.)

Figure 1.1: efficient frontier

Question 1.1.2. A list of questions on risk/return . . . follows:

1. how to measure risk (variance, covariance, Gaussian or non-Gaussian); typically we have dependence
among industries and a covariance matrix M ∈ Sn+ , positive semidefinite, also denoted as M ⪰ 0

2. how to measure expected returns (Gaussian or non-Gaussian)

3. how to formulate the optimization models

4. how to solve for the efficient frontier, EF

5. adding additional constraints, e.g.:

• no shorting (x ≥ 0)
• upper bounds (x ≤ u)
• sector requirements, e.g., Σ_{t=1}^{k} x_{i_t} ≤ 3
• which point to choose on the EF
• solution sensitivity to problem data
• multiple time periods

² The constraints in (1.1.1) are inequality constraints, but they hold with equality at the optimal solutions, i.e., they are active set constraints; see Exercise ?? below.

1.2 Types of (Continuous) Optimization Problems


We concentrate mainly on variables that take values on the whole real line, x ∈ Rn, rather than on discrete
optimization problems whose variables take discrete values, e.g., integer values, x ∈ Nn. Though at times we do need
discrete values, e.g., we may need to restrict the number of stocks chosen from a large number of candidates.

1.2.1 Unconstrained Nonlinear Minimization


Let U ⊆ Rn be an open set and let f : U → R be continuously differentiable. The unconstrained minimization
problem is
\[ p^*_{\rm unc} = \min_{x \in U} f(x). \]

The global optimum, if it exists, is a point x∗ ∈ U such that p∗unc = f(x∗). We denote this as
\[ x^* \in \arg\min_{x \in U} f(x). \]

Often we have U = Rn. But sometimes we have functions that are not defined on all of Rn, e.g., functions
involving logarithms. Restricting to the set U then keeps the problem well defined. The set U is
often used to explicitly define the domain of f, dom(f), the points where f is finite:

dom(f) = {x ∈ Rn : −∞ < f(x) < ∞}.

Definition 1.2.1. Let U ⊆ Rn be an open set and let f : U → R. The point x̄ ∈ U is a local minimum of f
over U if there exists a ball Bδ(x̄)³ such that x ∈ U ∩ Bδ(x̄) =⇒ f(x̄) ≤ f(x).

Algorithms typically start with an initial estimate x0 ∈ U and find a sequence of improving points in
U, x1, x2, . . . , xk, . . . , with f(xi+1) ≤ f(xi), i = 1, 2, . . ., such that limk→∞ xk = x∗. We then hope that x∗
is a local minimum or a global minimum. Finding a global minimum is in general an NP-hard problem.⁴
Examples of a local (on the left) and a global (on the right) minimum are shown in Figure 1.2.
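As a minimal illustration of such an iterative scheme (an added sketch, not from the notes), the MATLAB fragment below runs plain gradient descent with a backtracking step size and a gradient-norm stopping test; the function handle f, its gradient gradf, and the starting point x0 are assumed given.

% Gradient-descent sketch: generates improving iterates x1, x2, ...
x = x0;
for k = 1:1000                          % iteration limit (a stopping condition)
    g = gradf(x);
    if norm(g) <= 1e-6                  % stationarity tolerance (another one)
        break
    end
    t = 1;                              % backtracking: shrink t until f decreases
    while f(x - t*g) > f(x) - 1e-4*t*(g'*g) && t > 1e-12
        t = t/2;
    end
    x = x - t*g;                        % improving step: f(x_{k+1}) <= f(x_k)
end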

Question 1.2.2. Algorithms require:


1. efficiency (speed, accuracy)⁵

2. convergence guarantees

3. stopping conditions
³ Bδ(x̄) = {x ∈ Rn : ∥x − x̄∥ ≤ δ}.
⁴ We will not discuss the details of complexity theory such as polynomial time and NP-hard problems. We simply consider this
class of problems hard: they cannot be solved in polynomial time unless P = NP, which is an open question.
⁵ “...in fact, the great watershed in optimization isn’t between linearity and nonlinearity, but convexity and
nonconvexity.” - R. Tyrrell Rockafellar, in SIAM Review, 1993.
(See, e.g., the backgrounder “Linear Programming and ‘The Great Watershed’ - Convex and Conic Optimization”.)
Convex problems can essentially be solved in polynomial time.

Figure 1.2: local/global minima

1.2.2 Constrained Continuous Minimization


As above, let U ⊆ Rn be an open set. Define the continuously differentiable functions6

f : U → R, h : U → R+ , g : U → Rm ,

the t × n matrix A, and the vectors u, ℓ ∈ Rn , b ∈ Rt . An example of a general nonlinear program, NLP is

\[ \text{(NLP)} \qquad \begin{array}{rl} \min & f(x) \\ \text{s.t.} & h(x) = 0 \\ & g(x) \le 0 \\ & Ax = b \\ & \ell \le x \le u, \quad x \in U. \end{array} \]

There are exterior point algorithms, where points are generated outside the feasible set. There are also
interior point algorithms, where the points generated lie inside the feasible set, i.e., they are feasible.

1.2.3 Special Cases


1. Let F : Rn → Rm, m > n,
\[ F(x) = \begin{pmatrix} f_1(x) \\ f_2(x) \\ \vdots \\ f_m(x) \end{pmatrix}, \]
and
\[ f(x) = \frac{1}{2}\,\|F(x)\|^2 = \frac{1}{2} \sum_{i=1}^{m} f_i^2(x). \]

The nonlinear least squares, NLLS (nonlinear regression) problem is

\[ \text{(NLLS)} \qquad \min_x\ f(x). \tag{1.2.1} \]

NLLS is an extremely important and useful problem for modelling in finance and in many other areas.

Exercise 1.2.3. Consider the NLLS problem in (1.2.1). Find the Jacobian of F and use it to find
the gradient and Hessian (defined below in Section 1.2.5) for the function f .
6 In these notes we do not consider nondifferentiable problems. Therefore we assume sufficient smoothness, differentiability.

2. Consider the quadratic function q : Rn → R,
\[ q(x) = c^T x + \tfrac{1}{2}\, x^T H x, \]
where c ∈ Rn and H ∈ Sn, the space of n × n real symmetric matrices. In most applications, H ∈ Sn+,
the cone of symmetric positive semidefinite matrices. Recall that the eigenvalues of a symmetric
matrix are all real, and H ∈ Sn+ if, and only if, all the eigenvalues of H are nonnegative.
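For example (an added sketch, not from the notes), in MATLAB one can test whether a given symmetric matrix H lies in Sn+ either via its smallest eigenvalue or via a Cholesky factorization of a slightly shifted copy:

% Check positive semidefiniteness of a symmetric matrix H (assumed given).
H = (H + H')/2;                              % symmetrize to guard against round-off
isPSD_eig  = ( min(eig(H)) >= -1e-10 );      % PSD iff smallest eigenvalue >= 0 (small tolerance)
[~, p]     = chol(H + 1e-10*eye(size(H,1))); % p == 0 iff the shifted matrix factors
isPSD_chol = ( p == 0 );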

Exercise 1.2.4 (sufficient optimality). Consider the quadratic function minimization p∗ := min_x q(x) =
c^T x + ½ x^T Hx, where x ∈ Rn, c ∈ Rn, H ∈ Sn+, i.e., positive semidefinite. What is the necessary and
sufficient condition for p∗ to be finite? Now, suppose that p∗ is finite. Show that x being a critical
point is a sufficient condition for x being a global minimum. (Use the Taylor expansion. The required
definitions are given below.)

3. If the objective and all constraint functions are linear we obtain the linear programming, LP problem.
The feasible set is unchanged. Recall that a feasible x ∈ F is an extreme point if

x ∈ (y, z), y, z ∈ F =⇒ x = y = z,

where (y, z) is the open interval connecting the points y, z.

Exercise 1.2.5 (LP). For an LP, the geometry of the feasible set and the level curves of the objective
function imply that there is an optimal solution at an extreme point of the feasible set. (The geometry
of the feasible set and the definition of an extreme point are important.)

1.2.4 ***Tues. Jan. 9; end Lecture 1 *********

1.2.5 Gradients, Jacobians, Hessians


Gradients
Let f : Rn → R be continuously differentiable. The gradient of f, ∇f(x), is the column vector of partial
derivatives
\[ \nabla f(x) = \begin{pmatrix} \partial f(x)/\partial x_1 \\ \partial f(x)/\partial x_2 \\ \vdots \\ \partial f(x)/\partial x_n \end{pmatrix} \in \mathbb{R}^n. \tag{1.2.2} \]

Exercise 1.2.6. Note that an alternative (equivalent) definition for the gradient of f at x is that it is the
unique vector v that satisfies the first order Taylor series

f(x + ∆x) = f(x) + ⟨v, ∆x⟩ + o(∥∆x∥).⁷    (1.2.3)

Here we use the dot product, ⟨v, w⟩ = v T w.
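To connect the two definitions computationally (an added sketch, not from the notes), a hand-derived gradient can be checked against forward finite differences built from the Taylor expansion (1.2.3); the function handle f, the claimed gradient gradf, and a test point x are assumed given.

% Finite-difference check of a gradient, based on (1.2.3):
%   f(x + h*e_i) - f(x) is approximately h * (grad f(x))_i for small h.
n    = numel(x);
h    = 1e-6;
g_fd = zeros(n, 1);
for i = 1:n
    e = zeros(n, 1);  e(i) = 1;
    g_fd(i) = (f(x + h*e) - f(x)) / h;   % forward difference in coordinate i
end
fprintf('max abs difference: %.2e\n', max(abs(g_fd - gradf(x))));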

Remark 1.2.7. The following describes hedging properties.

1. hedging methods are usually based on the gradient, ∇f (x);


⁷ Recall that the little-o notation here denotes a quantity that converges to 0 faster than ∥∆x∥ as ∆x → 0.

2. ∇f(x) is the direction of maximum increase at x, i.e., we can use Taylor’s Theorem⁸, see Theorem ??
on page ?? below, and the Cauchy-Schwarz inequality, to show that
\[ \frac{1}{\|\nabla f(x)\|}\,\nabla f(x) = \operatorname*{argmax}_{d} \{\, d^T \nabla f(x) : \|d\| = 1 \,\}; \]
3. x̄ is a local minimum =⇒ ∇f (x̄) = 0, see Theorem ?? below.
Question 1.2.8. How does one compute the Hessian approximately/efficiently? This is considered later in
the course.

Jacobians

Let F : Rn → Rm, F(x) = (f_1(x), f_2(x), . . . , f_m(x))^T, be differentiable. The Jacobian, J(x), is the m × n matrix of partial
derivatives. Equivalently, each row is a gradient transposed:
\[ J(x) = \left[\frac{\partial f_i}{\partial x_j}\right]_{m \times n} = \begin{pmatrix} \nabla f_1(x)^T \\ \nabla f_2(x)^T \\ \vdots \\ \nabla f_m(x)^T \end{pmatrix}, \qquad J(x)\,\Delta x = \begin{pmatrix} \nabla f_1(x)^T \Delta x \\ \nabla f_2(x)^T \Delta x \\ \vdots \\ \nabla f_m(x)^T \Delta x \end{pmatrix}. \tag{1.2.4} \]

Remark 1.2.9. The following comments follow from the definitions.


1. The first order Taylor approximation for f : Rn → R is f (x + ∆x) ≈ f (x) + ⟨∇f (x), ∆x⟩. Here
⟨∇f (x), ∆x⟩ = ∇f (x)T ∆x, i.e., the gradient is a column vector, see (1.2.2), but it acts on ∆x as a
row vector times ∆x. The first order approximation of F : Rn → Rm is F (x + ∆x) ≈ F (x) + J(x)∆x.
Thus we see that the rows of the Jacobian J are gradients transposed.
2. Computing J(x) is important for methods that solve F(x) = 0, m = n, or min f(x) := ½∥F(x)∥₂², m ≠ n.
Note that we can use the chain rule to show
\[ \nabla f(x) = \sum_{i=1}^{m} f_i(x)\,\nabla f_i(x) = J(x)^T F(x), \]
where the latter equality comes from exploiting the fact that a matrix-vector multiplication is a linear
combination of the columns. In addition,
\[ \nabla^2 f(x) = \sum_{i=1}^{m} \nabla f_i(x)\,\nabla f_i(x)^T + \sum_{i=1}^{m} f_i(x)\,\nabla^2 f_i(x) = J(x)^T J(x) + \sum_{i=1}^{m} f_i(x)\,\nabla^2 f_i(x), \]
where ∇2 f denotes the Hessian, the matrix of second partials, see Section 1.2.5 below. Note that to find
the Hessian we apply the product rule to Gi(x) := fi(x)∇fi(x), Gi : Rn → Rn. The first term is
found by treating ∇fi(x) as constant: we then need the Jacobian of fi(x)v, where v = ∇fi(x) is
treated as a constant, and this Jacobian is exactly v∇fi(x)^T. (A short numerical sketch of these identities follows this list.)
3. When m = 1 we get the transpose of the gradient.
4. How does one approximately compute J(x) efficiently? (To be discussed.)
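The identities in item 2 are easy to exercise numerically. The following MATLAB sketch (added here, not from the notes) uses a small, made-up residual map F : R2 → R3 and its Jacobian to form the gradient J(x)^T F(x) and the Gauss-Newton approximation J(x)^T J(x) of the Hessian (the approximation drops the Σ fi(x)∇2 fi(x) term).

% Gradient and Gauss-Newton Hessian approximation for f(x) = 0.5*||F(x)||^2.
% Illustrative residual map F : R^2 -> R^3 and its Jacobian:
F = @(x) [ x(1)^2 - 1 ;  x(2)^2 - 2 ;  x(1)*x(2) - 3 ];
J = @(x) [ 2*x(1)   0      ;
           0        2*x(2) ;
           x(2)     x(1)  ];
x  = [1; 2];
g  = J(x)' * F(x);     % gradient of f at x, i.e., J(x)'*F(x)
GN = J(x)' * J(x);     % Gauss-Newton approximation to the Hessian of f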

Hessians
Let f : Rn → R be twice continuously differentiable, denoted f ∈ C2. The matrix of second derivatives is
the Hessian, ∇2 f(x). Note that ∇2 f(x) ∈ Sn, i.e., it is symmetric.⁹
⁸ Recall (1.2.3): for f : Rn → R, f smooth enough, we have f(x + ∆x) = f(x) + ⟨∇f(x), ∆x⟩ + o(∥∆x∥).
⁹ As the first order Taylor series (1.2.3) provides a local linear approximation for f, the second order Taylor series
provides a local quadratic approximation: f(x + ∆x) ≈ f(x) + ⟨∇f(x), ∆x⟩ + ½⟨∆x, ∇2 f(x)∆x⟩.

Remark 1.2.10. Hessians arise in optimization problems.


1. Computing/approximating the Hessian, H(x), is used in methods to solve min f (x).

2. x∗ a local/global minimum (and f ∈ C2 on the open set U) =⇒ the Hessian ∇2 f(x∗) is in Sn+, the set of positive semidefinite matrices.

1.3 First Target Investment Portfolio Problem


Suppose that we have n securities (stocks, bonds, . . . ), S1 , S2 , . . . , Sn with random returns ri , i = 1, . . . , n,
respectively, to invest for a period of time T. We have the wealth constraint
\[ \sum_{i=1}^{n} x_i = 1, \]

where xi is the investment in security i, i = 1, . . . , n. Each choice of x defines a portfolio. The estimated
expected return for a $1 investment in Si is r̄i (at time T ), e.g., for r̄i = .08, $1.00 → $1.08. (We see below
how to estimate the expected value r̄i of the random variable ri .) The estimated variance is σi2 , and the
correlation coefficients ρij are also assumed known.
The expected return for the total investment vector x is
\[ E[x] = \bar r^T x, \qquad \bar r = \begin{pmatrix} \bar r_1 \\ \vdots \\ \bar r_n \end{pmatrix}. \]
Therefore, one objective is to maximize the expected return. The measure of risk of the portfolio is usually
taken to be the variance
\[ \mathrm{Risk}(x) = \mathrm{Var}[r^T x] = E[(\bar r^T x - r^T x)^2] = \sum_{ij} \rho_{ij}\, \sigma_i \sigma_j\, x_i x_j = x^T Q x, \]

thus defining the covariance matrix Q, with ρii = 1, ∀i.


In addition, one can have sector/subset constraints. Suppose that S1, . . . , St are technology stocks. Then
a sample constraint might be
\[ \sum_{i=1}^{t} x_i \;\;\text{relation}\;\; .25, \qquad \text{where relation is one of } =,\ \le,\ \ge, \]

i.e., (exactly, at most, or at least), respectively, a quarter of the investment goes into technology stocks.
And, suppose that St+1, . . . , St+s are energy stocks. Then another sample constraint might be
\[ \sum_{i=t+1}^{t+s} x_i \;\;\text{relation}\;\; .2, \qquad \text{where relation is one of } =,\ \le,\ \ge, \]

i.e., (exactly, at most, or at least), respectively, a fifth of the investment goes into energy stocks.

1.3.1 Estimating Variance


Suppose that we have m scenarios/samples of returns for each return variable ri, i = 1, . . . , n,
\[ r^i = \begin{pmatrix} r_1^i \\ r_2^i \\ \vdots \\ r_n^i \end{pmatrix} \in \mathbb{R}^n, \qquad \bar r = \frac{1}{m} \sum_{i=1}^{m} r^i \in \mathbb{R}^n. \]

We can put the samples into the m × n matrix R and the mean expected returns into the matrix R̄:
\[ R = \begin{pmatrix} r_1^1 & r_2^1 & \cdots & r_n^1 \\ r_1^2 & r_2^2 & \cdots & r_n^2 \\ \vdots & \vdots & & \vdots \\ r_1^m & r_2^m & \cdots & r_n^m \end{pmatrix}_{m \times n} = \begin{pmatrix} (r^1)^T \\ (r^2)^T \\ \vdots \\ (r^m)^T \end{pmatrix}, \qquad \bar R = \begin{pmatrix} \bar r_1 & \bar r_2 & \cdots & \bar r_n \\ \bar r_1 & \bar r_2 & \cdots & \bar r_n \\ \vdots & \vdots & & \vdots \\ \bar r_1 & \bar r_2 & \cdots & \bar r_n \end{pmatrix}_{m \times n}. \]

Then to estimate the risk/variance we get

\[ \begin{array}{rcl} \mathrm{Var}[x] \;=\; E[(\bar r^T x - r^T x)^2] & \approx & \dfrac{1}{m} \displaystyle\sum_{i=1}^{m} \bigl[(\bar r^T x - (r^i)^T x)^2\bigr] \\[6pt] & = & \dfrac{1}{m} \displaystyle\sum_{i=1}^{m} \bigl[\bigl((\bar r^T - (r^i)^T)\,x\bigr)^2\bigr] \\[6pt] & = & \dfrac{1}{m}\, \|(\bar R - R)\,x\|_2^2, \end{array} \]

where we note that R̄ = er̄T , e is the vector of ones.


We now let
A1 = (R̄ − R).
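As a small computational check (an added sketch, not from the notes), A1, the covariance estimate, and the risk of a given portfolio can be computed in MATLAB directly from a sample return matrix R with m rows (scenarios) and n columns (securities); R and a portfolio vector x are assumed given.

% Estimate expected returns and the risk of a portfolio x from sample returns R.
[m, n]   = size(R);
rbar     = mean(R, 1)';            % rbar = (1/m) * sum of the sample return vectors
Rbar     = ones(m, 1) * rbar';     % Rbar = e*rbar', every row equal to rbar'
A1       = Rbar - R;
Q        = (A1' * A1) / m;         % estimated covariance matrix
risk_est = norm(A1 * x)^2 / m;     % equals x'*Q*x
exp_ret  = rbar' * x;              % estimated expected return of the portfolio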
Recall that the inner-product (or dot product) and Euclidean norm (or ℓ2 norm) in Rn are, respectively,
\[ \langle x, y\rangle = x^T y = \sum_i x_i y_i, \qquad \|x\| = \|x\|_2 = \sqrt{\langle x, x\rangle} = \sqrt{\textstyle\sum_i x_i^2}. \]

A feasible portfolio is efficient if it has the maximal expected return among all portfolios with at most a
given target variance. Alternatively, it is efficient if it has the minimum variance among all portfolios with
at least a given target expected return. The set of efficient portfolios is called the efficient frontier .
Then minimizing the risk subject to a target expected return rp and the sector constraints is
\[ \begin{array}{rll} \min_x & \frac{1}{2}\|A_1 x\|_2^2 & \left(= \frac{1}{2}\|F(x)\|^2 = \frac{1}{2}\, x^T A_1^T A_1 x = \frac{1}{2}\, x^T Q x\right) \\ \text{s.t.} & e^T x = 1 & \\ & \sum_{i=1}^{t} x_i \;=, \le, \ge\; .25 & \\ & \sum_{i=t+1}^{t+s} x_i \;=, \le, \ge\; .2 & \\ & \bar r^T x \;=\ \text{or}\ \ge\; r_p, & \end{array} \tag{1.3.1} \]
where we have emphasized that we could have equality, at-most, or at-least constraints. The above
implicitly defines F and Q.
Exercise 1.3.1. In the case of just equality constraints, rewrite problem (1.3.1) in the form of minimizing
with a single constraint A2 x = b. Define carefully what A2 , b are.
How would the problem change if a no shorting constraint was added?
Exercise 1.3.2. For problem (1.3.1), what difference would you expect between using equality and using the inequality
≥ rp for the estimated expected return constraint? Would you expect a change in the optimal solution? Why?
(BONUS: What if all three expected return constraints were changed from = to ≥?)
Remark 1.3.3. Note that the objective function in (1.3.1) is a quadratic, ∥A1x∥² = x^T A1^T A1 x = x^T Hx,
where H is symmetric positive semidefinite. The above illustrates the importance of linear algebra in mod-
elling and working with portfolio models. Recall that for a quadratic function q(x) = ½ x^T Hx + g^T x, H =
H^T ⪰ 0, x̄ a local minimum implies the second order necessary optimality conditions: ∇q(x̄) = 0, ∇2 q(x̄) =
H ⪰ 0.
Exercise 1.3.4. In fact, the following are equivalent for a quadratic function
\[ q(x) = \tfrac{1}{2}\, x^T H x + g^T x, \qquad g \in \mathbb{R}^n,\ H \in \mathbb{S}^n: \]
2

• x̄ is a local minimum

• x̄ is a global minimum

• q(x) is bounded below

• ∇q(x̄) = 0, ∇2 q(x̄) = H ⪰ 0, g ∈ range(H).

Exercise 1.3.5 (minimizing risk). Suppose that there are 5 stocks where the first two are technology stocks
and the last two are real estate stocks. Scenarios/samples of returns follow:
 
R =
    0.3486   0.4133   0.0251   0.2370   0.6404
    0.0024   0.7346   0.9619   0.4633   0.0074
    0.4755   0.8461   0.2523   0.8453   0.3135
    0.1808   0.1173   0.5731   0.9971   0.3559
    0.9480   0.7666   0.6546   0.7951   0.5124
    0.9597   0.1039   0.5144   0.7837   0.8571
    0.5050   0.4942   0.5087   0.8373   0.8269
    0.4723   0.2503   0.1319   0.4870   0.6043
    0.1234   0.0150   0.1747   0.4092   0.2195
    0.9012   0.8101   0.2859   0.5669   0.2886
    0.6310   0.0293   0.3013   0.5978   0.4783
    0.3699   0.0608   0.7016   0.7624   0.8179
    0.5514   0.0626   0.7349   0.5296   0.7356
    0.4743   0.0660   0.1014   0.3222   0.1447
    0.8676   0.2211   0.1591   0.9794   0.9312
    0.9271   0.1842   0.5918   0.1102   0.4538
    0.6420   0.4931   0.3504   0.4496   0.8862
    0.5628   0.3657   0.1346   0.9531   0.0110
    0.6715   0.3420   0.0465   0.1261   0.2356
    0.4833   0.9535   0.4653   0.1806   0.9052

An investor wants a return of at least .3, while investing at least .2 in technology and at most .3 in real
estate, and also while minimizing risk.
In addition, using the Mosek solver or Gurobi, can you guarantee that at most one of the technology
stocks is invested in and there is no shorting for technology stocks? Can you comment on the results?
Please use CVX or the MATLAB command quadprog to find the optimal portfolio x ∈ R5. CVX
provides many examples in its documentation. Here is a sample program, where n, A1, rbar, rp are previously
defined:
cvx_begin
    variable x(n)
    minimize( norm(A1*x) )      % equivalent to minimizing the variance x'*Q*x
    subject to
        sum(x) == 1;            % budget/wealth constraint
        rbar*x >= rp;           % target expected return
cvx_end
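Alternatively (an added sketch, not from the notes), the same two-constraint sample problem can be passed to quadprog; as in the CVX sample, n, A1, rbar (taken here as a row vector, so that rbar*x is the expected return), and rp are assumed previously defined.

% quadprog alternative to the CVX sample above.
H   = A1' * A1;                 % 0.5*x'*H*x = 0.5*||A1*x||^2
f   = zeros(n, 1);
A   = -rbar;   b   = -rp;       % rbar*x >= rp   <=>   -rbar*x <= -rp
Aeq = ones(1, n);  beq = 1;     % budget/wealth constraint
x   = quadprog(H, f, A, b, Aeq, beq);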
Now redo the problem with a risk-free security added with return .08. Hint: Add a constant column to R.

1.3.2 References
[8]
Index

A1 = (R̄ − R)
M ⪰ 0
M ∈ Sn+
NP-hard
E[x], expected return
Var[x], variance
arg min
dom(f), domain of f
⟨v, w⟩ = v^T w, dot product
⟨x, y⟩
∇2 f(x), Hessian
∇f(x), gradient of f
ρij, correlation coefficients
e, vector of ones
f ∈ C2
p∗unc, optimal value
EF, efficient frontier
LP, linear programming
NLLS, nonlinear least squares
NLP, nonlinear program

active set
correlation coefficients, ρij
covariance matrix
domain of f, dom(f)
dot product, ⟨v, w⟩ = v^T w
efficient
efficient frontier
efficient frontier, EF
expected return, E[x]
extreme point
feasible set
global minimum
global optimum
gradient of f, ∇f(x)
Hessian, ∇2 f(x)
J(x), Jacobian
Jacobian, J(x)
linear programming, LP
local minimum
local minimum of f over U
main problem of portfolio optimization
measure of risk
nonlinear least squares, NLLS
nonlinear program, NLP
open set
optimal value, p∗unc
positive semidefinite
quadratic function
risk-free
unconstrained minimization problem
variance, Var[x]
vector of ones, e
wealth constraint, Σ xi = 1
11
Bibliography

[1] Michael J. Best. Portfolio optimization. Chapman & Hall/CRC Finance Series. CRC Press, Boca Raton,
FL, 2010. With 1 CD-ROM (Windows, Macintosh and UNIX).

[2] S. Boyd and L. Vandenberghe. Convex Optimization. Cambridge University Press, Cambridge, 2004.

[3] G. Cornuejols and R. Tütüncü. Optimization methods in finance. Mathematics, Finance and Risk.
Cambridge University Press, Cambridge, 2007.

[4] J.V. Grabiner. “It’s all for the best”: optimization in the history of science. J. Humanist. Math.,
11(1):54–80, 2021.

[5] H. Markowitz. Portfolio selection*. The Journal of Finance, 7(1):77–91, 1952.

[6] H.M. Markowitz. Mean-variance analysis in portfolio choice and capital markets. Basil Blackwell, Oxford,
1987.

[7] H.M. Markowitz. Harry Markowitz. Selected works, volume 1 of Nobel Laureate Series. World Scientific
Publishing Co. Pte. Ltd., Hackensack, NJ, 2008.

[8] A. Nagurney and Q. Qiang. Fragile Networks: Identifying Vulnerabilities and Synergies in an Uncertain
World. John Wiley, 2009.

[9] A.L. Peressini, F.E. Sullivan, and J.J. Uhl, Jr. The Mathematics of Nonlinear Programming. Springer-
Verlag, New York, 1988.
