
Chapter 6: Method of Successive Quadratic Programming
Professor Shi-Shang Jang
National Tsing-Hua University
Chemical Engineering Department
6-1 Quadratic Programming Problems
- Problem:

$$\min\; c_1 x_1 + c_2 x_2 + \cdots + c_N x_N + h_{11} x_1^2 + h_{12} x_1 x_2 + \cdots + h_{1N} x_1 x_N + h_{22} x_2^2 + \cdots + h_{2N} x_2 x_N + \cdots + h_{NN} x_N^2$$

$$\text{s.t.}\quad a_{11} x_1 + \cdots + a_{1N} x_N \le b_1,\;\; \dots,\;\; a_{M1} x_1 + \cdots + a_{MN} x_N \le b_M, \qquad x \ge 0$$

In compact matrix form:

$$\min\; f(x) = cx + \tfrac{1}{2}\, x' Q x \qquad \text{s.t.}\quad Ax \le b,\;\; x \ge 0$$
Classical Constrained Regression
Problem
$$\min\; e' I e \qquad \text{s.t.}\quad Y = X\beta + e$$

- where β is an m×1 vector of regression coefficients to be estimated, e is an n×1 vector of error variables, Y is an n×1 vector of observations on the dependent variable, and X is an n×m matrix of observations on the independent variables. It is clear that the classical regression problem is a quadratic programming problem.
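To make the mapping explicit (a restatement under the ½x′Qx convention of the compact form above, allowing an equality constraint and free variables): take the decision vector as (β, e); then

$$e' I e = \frac{1}{2}\begin{bmatrix} \beta' & e' \end{bmatrix}\begin{bmatrix} 0 & 0 \\ 0 & 2I \end{bmatrix}\begin{bmatrix} \beta \\ e \end{bmatrix} \qquad \text{s.t.}\quad \begin{bmatrix} X & I \end{bmatrix}\begin{bmatrix} \beta \\ e \end{bmatrix} = Y$$

so Q = blkdiag(0, 2I), c = 0, and the regression constraint is a linear equality.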
Example- A numerical problem
$$\min\; f(x) = -6x_1 + 2x_1^2 - 2x_1x_2 + 2x_2^2$$
$$\text{s.t.}\quad x_1 + x_2 \le 2, \qquad x_1, x_2 \ge 0$$

In compact matrix form:

$$\min\; f(x) = \begin{bmatrix} -6 & 0 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} + \frac{1}{2}\begin{bmatrix} x_1 & x_2 \end{bmatrix}\begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}$$

$$\text{s.t.}\quad \begin{bmatrix} 1 & 1 \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} \le 2$$
Example- A numerical problem
$$c = \begin{bmatrix} -6 & 0 \end{bmatrix}, \qquad Q = \begin{bmatrix} 4 & -2 \\ -2 & 4 \end{bmatrix}, \qquad A = \begin{bmatrix} 1 & 1 \end{bmatrix}, \qquad b = 2$$
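As a quick check, this QP can be solved directly with MATLAB's quadprog (documented later in this chapter); a minimal sketch, where the bounds argument encodes x ≥ 0:

% quadprog minimizes 0.5*x'*H*x + f'*x, so H = Q and f = c' above.
c = [-6; 0];
Q = [4 -2; -2 4];
A = [1 1];  b = 2;
lb = [0; 0];                          % x >= 0
x = quadprog(Q, c, A, b, [], [], lb, [])
% Expected solution: x = (1.5, 0.5)', with x1 + x2 <= 2 active.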
Example-A scientist’s model
- A scientist has observed a certain quantity Q as a function of t. She has good reason to believe that there is a physical law relating t and Q that takes the form:

Q(t) = a sin t + b cos t + c

She wants to have the "best" possible idea of the values of the coefficients a, b, and c using the results of her n experiments:
Example-A scientist’s model
- (t₁,q₁; t₂,q₂; …; tₙ,qₙ). Taking into account that her experiments are not perfect, and also perhaps that the model is not rigorous enough, she does not expect to find a perfect fit. So she defines an error term:

$$e_i = q_i - Q(t_i)$$

Her idea is to minimize:

$$Z = \min \sum_{i=1}^{n} e_i^2$$
Example-A scientist’s model
What if a = 3, b = 2, c = 5 with 15 experimental data points?

$$\min\; f(a, b, c, e_1, \dots, e_n) = \sum_{i=1}^{n} e_i^2$$

$$\text{s.t.}\quad q_i - a \sin t_i - b \cos t_i - c - e_i = 0, \qquad i = 1, \dots, n$$
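This is an equality-constrained QP in the variables (a, b, c, e₁, …, eₙ). A minimal sketch of a numerical solve with quadprog (documented later in this chapter); the sample points t and the noise level are assumptions made for illustration:

% Fit Q(t) = a*sin(t) + b*cos(t) + c via the QP above.
n = 15;
t = linspace(0, 2*pi, n)';                     % assumed sample points
q = 3*sin(t) + 2*cos(t) + 5 + 0.1*randn(n,1);  % a=3, b=2, c=5 plus noise
X = [sin(t) cos(t) ones(n,1)];
H = blkdiag(zeros(3), eye(n));   % quadratic penalty on e only
f = zeros(n+3, 1);
Aeq = [X eye(n)];  beq = q;      % X*[a;b;c] + e = q
sol = quadprog(H, f, [], [], Aeq, beq);
abc = sol(1:3)                   % should be close to (3, 2, 5)'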
Classical Least Square Fitting
- Problem: Given a set of data (x₁,y₁; x₂,y₂; …; xₙ,yₙ), find a set of parameters θ = {a, b, c, …} such that:
- eᵢ = yᵢ − (a xᵢ + b xᵢ² + …)
- and Σᵢ eᵢ² is minimized.
- The solution of the unconstrained case:
- θ = (XᵀX)⁻¹XᵀY = pseudo-inverse(X)·Y
Example: Curve Fitting
>> x=linspace(0,5);
>> for i=1:100
y(i)=3*x(i)^2+2*x(i)+5;        % true model: y = 3x^2 + 2x + 5
y(i)=y(i)+randn(1,1);          % add unit-variance noise
xx(i,1)=x(i)*x(i);xx(i,2)=x(i);xx(i,3)=1;
end
>> theta=inv(xx'*xx)*xx'*y'

theta =

    2.9983
    2.1215
    4.5839

>> for i=1:100
yy(i)=theta(1)*x(i)^2+theta(2)*x(i)+theta(3);
end
>> plot(x,yy)
Example: Curve Fitting-Continued
[Figure: the fitted quadratic yy plotted against x; x axis from 0 to 5, y axis from 0 to 100]
Program QUADPROG
QUADPROG Quadratic programming.
X=QUADPROG(H,f,A,b) solves the quadratic programming problem:

    min 0.5*x'*H*x + f'*x   subject to:  A*x <= b
     x

X=QUADPROG(H,f,A,b,Aeq,beq) solves the problem above while additionally satisfying the equality constraints Aeq*x = beq.

X=QUADPROG(H,f,A,b,Aeq,beq,LB,UB) defines a set of lower and upper bounds on the design variables, X, so that the solution is in the range LB <= X <= UB. Use empty matrices for LB and UB if no bounds exist. Set LB(i) = -Inf if X(i) is unbounded below; set UB(i) = Inf if X(i) is unbounded above.
Curve Fitting Using Quadprog
x=linspace(0,5);
for i=1:100
y(i)=3*x(i)^2+2*x(i)+5;
y(i)=y(i)+randn(1);
xx(i,1)=x(i)*x(i);xx(i,2)=x(i);xx(i,3)=1;
end
% Decision vector is [a; b; c; e1; ...; e100] (103 variables);
% the quadratic term penalizes only the errors e.
a=eye(100);
a1=zeros(1,100);
H=[a1; a1; a1; a];            % 103x100: zero rows for a, b, c
f=zeros(103,1);
HH=[f f f H];                 % 103x103, i.e. blkdiag(zeros(3), eye(100))
Aeq=[xx eye(100)];beq=y';     % xx*[a;b;c] + e = y
A=[];b=[];
X=quadprog(HH,f,A,b,Aeq,beq);
XX=X';
XX(1)
XX(2)
XX(3)
The Solution Method-Complementary
Pivot Problem
- K-T conditions for the QP:

$$c + x^T\!\left(Q + Q^T\right) - u + vA = 0$$
$$Ax \le b, \qquad x \ge 0$$
$$u^T x = 0, \qquad u \ge 0$$
$$v\,(Ax - b) = 0, \qquad v \ge 0$$
The Solution Method-Complementary
Pivot Problem
- Find vectors w and z such that:

$$w - Mz = q$$
$$w \ge 0, \qquad z \ge 0$$
$$w^T z = 0$$
The Solution Method-Complementary
Pivot Problem
- Solving the complementary pivot problem is equivalent to finding a K-T point of the original QP. Moreover, since the QP has a convex (quadratic) objective and its constraints are linear (so a constraint qualification holds), a K-T point is sufficient for optimality.
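As an illustration (a minimal sketch, not the complementary pivot algorithm itself), one can check the complementarity conditions at the solution of the Section 6-1 example. The pairing w = [u; s], z = [x; v] and the stationarity expression below assume the ½x′Qx convention with symmetric Q (so ∇f = c′ + Qx), whereas the (Q + Qᵀ) form on the slide corresponds to an objective without the ½ factor; the sketch is indicative, not the textbook's exact M and q:

% Verify w >= 0, z >= 0, w'*z = 0 at the example's K-T point.
c = [-6 0];  Q = [4 -2; -2 4];  A = [1 1];  b = 2;
x = [1.5; 0.5];  v = 1;        % optimum and multiplier for Ax <= b
u = c' + Q*x + A'*v;           % reduced costs paired with x >= 0
s = b - A*x;                   % slack paired with v >= 0
w = [u; s];  z = [x; v];
fprintf('min(w)=%g  min(z)=%g  w''*z=%g\n', min(w), min(z), w'*z)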
6-2 The Successive QP Approach
- Consider a quadratic approximation:

$$q(x; x^0) = f(x^0) + \nabla f(x^0)(x - x^0) + \frac{1}{2}(x - x^0)^T \nabla^2 f(x^0)(x - x^0)$$

- The objective is to find d = (x − x⁰) such that the following problem is minimized:

$$\min\; \nabla f\big(x^{(t)}\big)^T d + \frac{1}{2}\, d^T \nabla^2 f\big(x^{(t)}\big)\, d$$
$$\text{s.t.}\quad h_k\big(x^{(t)}\big) + \nabla h_k\big(x^{(t)}\big)^T d = 0$$
$$\qquad\;\; g_j\big(x^{(t)}\big) + \nabla g_j\big(x^{(t)}\big)^T d \ge 0$$
Example
$$\min\; f(x) = 6x_1 x_2^{-1} + x_2 x_1^{-2}$$
$$\text{s.t.}\quad h(x) = x_1 x_2 - 2 = 0$$
$$\qquad\;\; g(x) = x_1 + x_2 - 1 \ge 0$$
$$x^0 = (2, 1)^T$$

The corresponding QP can be found:

$$\min\; \begin{bmatrix} \tfrac{23}{4} & -\tfrac{47}{4} \end{bmatrix} d + \frac{1}{2}\, d^T \begin{bmatrix} 3/8 & -25/4 \\ -25/4 & 24 \end{bmatrix} d$$
$$\text{s.t.}\quad \begin{bmatrix} 1 & 2 \end{bmatrix} d = 0$$
$$\qquad\;\; \begin{bmatrix} 1 & 1 \end{bmatrix} d + 2 \ge 0$$
Algorithm
- Step 1: Formulate the QP subproblem.
- Step 2: Solve the QP subproblem for d, and set x(t+1) = x(t) + α·d.
- Step 3: Check convergence.
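A minimal sketch of this loop (exact-Hessian SQP); grad_f, hess_L, linearize_constraints, and line_search are hypothetical problem-specific helpers, and a complete quasi-Newton version appears in Section 6-3:

% Basic SQP iteration: linearize, solve the QP for d, step, repeat.
x = x0;
dx = inf;
while dx > 1e-6
    g = grad_f(x);                               % hypothetical helper
    H = hess_L(x);                               % hypothetical helper
    [Aeq, beq, A, b] = linearize_constraints(x); % hypothetical helper
    d = quadprog(H, g, A, b, Aeq, beq);          % QP subproblem
    alpha = line_search(x, d);                   % hypothetical helper
    x = x + alpha*d;
    dx = norm(alpha*d);
end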
Remarks
- The original SQP approach is subject to two difficulties:
(1) The second derivatives of the objective function are generally difficult to obtain.
(2) There is no guarantee that the Hessian at a given point is positive definite. If the Hessian is not positive definite, the QP subproblem fails (it is nonconvex); indeed, the Hessian in the example above is indefinite.
6-3 Quadratic Approximation of the
Lagrangian Function
$$L(x, u, v) = f(x) - v^T h(x) - u^T g(x)$$

The problem

$$\min\; f(x) \qquad \text{s.t.}\quad h(x) = 0,\;\; g(x) \ge 0$$

can be approximated by:

$$\min\; \nabla f(x)^T d + \frac{1}{2}\, d^T \nabla_x^2 L(x, u, v)\, d$$
$$\text{s.t.}\quad h(x) + \nabla h(x)^T d = 0$$
$$\qquad\;\; g(x) + \nabla g(x)^T d \ge 0$$
Algorithm-The Variable Metric
Method
- Given initial estimates x⁰, u⁰, v⁰, and a symmetric positive definite matrix H⁰.
- Step 1: Solve the problem:

$$\min\; \nabla f\big(x^{(t)}\big)^T d + \frac{1}{2}\, d^T H^{(t)} d$$
$$\text{s.t.}\quad h\big(x^{(t)}\big) + \nabla h\big(x^{(t)}\big)^T d = 0$$
$$\qquad\;\; g\big(x^{(t)}\big) + \nabla g\big(x^{(t)}\big)^T d \ge 0$$

- Step 2: Select the step size α along d(t), and set x(t+1) = x(t) + α·d(t).
Algorithm-The Variable Metric
Method (VMCON)
- Step 3: Check convergence.
- Step 4: Update H(t) using the gradient difference.
- BFGS update formulation:

$$z = x^{(t+1)} - x^{(t)}$$
$$y = \nabla_x L\big(x^{(t+1)}, u^{(t+1)}, v^{(t+1)}\big) - \nabla_x L\big(x^{(t)}, u^{(t)}, v^{(t)}\big)$$

Define

$$\theta = \begin{cases} 1 & \text{if } z^T y \ge 0.2\, z^T H^{(t)} z \\[4pt] \dfrac{0.8\, z^T H^{(t)} z}{z^T H^{(t)} z - z^T y} & \text{otherwise} \end{cases}$$

then

$$w = \theta y + (1 - \theta) H^{(t)} z$$

Finally,

$$H^{(t+1)} = H^{(t)} - \frac{H^{(t)} z z^T H^{(t)}}{z^T H^{(t)} z} + \frac{w w^T}{z^T w}$$
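The same update written as MATLAB code (a minimal sketch; the function name is hypothetical, and the VMCON program below inlines this logic in its main loop):

function H = damped_bfgs_update(H, z, y)
% Powell's damped BFGS update of the Lagrangian Hessian approximation.
% z = x(t+1) - x(t); y = difference of Lagrangian gradients (columns).
zHz = z'*H*z;
if z'*y >= 0.2*zHz
    theta = 1;
else
    theta = 0.8*zHz/(zHz - z'*y);   % damping keeps z'*w > 0
end
w = theta*y + (1 - theta)*(H*z);
H = H - (H*(z*z')*H)/zHz + (w*w')/(z'*w);
end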
Line Search on the SQP direction
- Given d from the QP, instead of using the original objective function, it is more useful to perform the line search on the following penalty function:

$$P(x, R) = f(x) + R\left[\sum_{k=1}^{K} \big|h_k(x)\big| - \sum_{j=1}^{J} \min\big(0,\, g_j(x)\big)\right]$$
$$= f(x) + \sum_{k=1}^{K} \mu_k \big|h_k(x)\big| - \sum_{j=1}^{J} \sigma_j \min\big(0,\, g_j(x)\big)$$

where for the first iteration

$$\mu_k = |v_k|, \qquad \sigma_j = |u_j|$$

and for all subsequent iterations t,

$$\mu_k^{(t)} = \max\left(\big|v_k^{(t)}\big|,\; \frac{1}{2}\Big(\mu_k^{(t-1)} + \big|v_k^{(t)}\big|\Big)\right)$$
$$\sigma_j^{(t)} = \max\left(\big|u_j^{(t)}\big|,\; \frac{1}{2}\Big(\sigma_j^{(t-1)} + \big|u_j^{(t)}\big|\Big)\right)$$
Numerical Example
$$\min\; f(x) = 6x_1 x_2^{-1} + x_2 x_1^{-2}$$
$$\text{s.t.}\quad h(x) = x_1 x_2 - 2 = 0$$
$$\qquad\;\; g(x) = x_1 + x_2 - 1 \ge 0$$

$$x^0 = (2, 1)^T; \quad \nabla f = \left(\tfrac{23}{4},\, -\tfrac{47}{4}\right)^T; \quad \nabla h = (1, 2)^T; \quad \nabla g = (1, 1)^T$$

The first run takes the following form by choosing H⁽¹⁾ = I:

$$\min\; \begin{bmatrix} \tfrac{23}{4} & -\tfrac{47}{4} \end{bmatrix} d + \frac{1}{2}\, d^T \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix} d$$
$$\text{s.t.}\quad \begin{bmatrix} 1 & 2 \end{bmatrix} d = 0$$
$$\qquad\;\; \begin{bmatrix} 1 & 1 \end{bmatrix} d + 2 \ge 0$$

The solution gives:

$$d^{(0)} = (-4, 2)^T$$
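This first subproblem is convex (H = I), so it can be reproduced directly with quadprog as a quick check:

H = eye(2);  f = [23/4; -47/4];
Aeq = [1 2];  beq = 0;     % linearized equality: [1 2]*d = 0
A = [-1 -1];  b = 2;       % [1 1]*d + 2 >= 0 rewritten as -[1 1]*d <= 2
d = quadprog(H, f, A, b, Aeq, beq)   % returns d = (-4, 2)'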
MATLAB Program - VMCON
% DEFINE GLOBAL VARIABLES
global mu
global sigma
global vv
global uu
global dd
global X0
%
% INITIALIZE THE PROBLEM
H=eye(2);X0=[2;1];uu=0;vv=0;
dx=100;mu=0;sigma=0;iter=0;
%
% THE WHILE LOOP
%
while abs(dx)>0.000001
iter=iter+1;
%
% FIND THE GRADIENT df
%
[df,dlgn]=sqp_Lagrangian(X0);
%
% DEFINE Aeq,beq such that Aeq*d=beq
% and A,b such that A*d<=b
MATLAB Program – VMCON-Continued
A=[-1 -1];b=(X0(1)+X0(2)-1);
Aeq=[X0(2) X0(1)];            % gradient of h at X0
beq=-(X0(1)*X0(2)-2);         % linearization: grad_h'*d = -h(X0)
%
% SOLVE THE SUB-QUADRATIC PROBLEM FOR d
%
[ss,FVAL,EXITFLAG,OUTPUT,LAMBDA]=quadprog(H,df,A,b,Aeq,beq);
dd=ss';
%
% GET THE LAGRANGE MULTIPLIERS
%
uu=LAMBDA.ineqlin;
vv=LAMBDA.eqlin;
%
% PREPARE THE LINE SEARCH
%
mu=max(abs(vv),0.5*(mu+abs(vv)));
sigma=max(abs(uu),0.5*(sigma+abs(uu)));
a0=0.1;
%
% LINE SEARCH ALONG dd WITH THE PENALTY FUNCTION AS THE OBJ
%
alopt = fminsearch('sqp_penalty',a0);
MATLAB Program – VMCON-Continued
%
% GET THE NEW POINT
%
X=X0+alopt*dd';
%
% PREPARE TO UPDATE THE APPROXIMATE HESSIAN
%
z=X-X0;
[df,dlgn_1]=sqp_Lagrangian(X0);
[df,dlgn_2]=sqp_Lagrangian(X);
y=dlgn_2-dlgn_1;
zz=z'*H*z;
zzp=0.2*zz;
zy=z'*y;
if(zy>zzp)
theta=1;
else
theta=0.8*z'*H*z/(zz-zy);
end
w=theta*y+(1-theta)*H*z;
%
% UPDATE THE APPROXIMATE HESSIAN
%
H=H-(H*z*z'*H)/zz+w*w'/(z'*w);
MATLAB Program – VMCON-Continued
%
% CALCULATE THE OBJ AND EQUALITY CONSTRAINT AT THE NEW POINT
%
hh=X(1)*X(2)-2;
ddx=X-X0;
obj=sqp_obj(X);
%
% UPDATE THE NEW POINT
%
X0=X;
%
% CHECK THE CONVERGENCE
%
dx=sqrt(ddx(1)^2+ddx(2)^2);
end
%
% OUTPUT THE RESULT
%
iter
X
obj
hh
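The helper files sqp_obj, sqp_Lagrangian, and sqp_penalty are not listed in the slides. Hedged sketches of what they plausibly contain for this example follow (each would live in its own .m file; the sign convention L = f − v·h − u·g and the use of the globals are assumptions inferred from the main program):

function f = sqp_obj(x)
% Objective of the numerical example.
f = 6*x(1)/x(2) + x(2)/x(1)^2;

function [df, dlgn] = sqp_Lagrangian(x)
% Gradient of f and of the Lagrangian L = f - v*h - u*g (assumed sign).
global uu vv
df   = [6/x(2) - 2*x(2)/x(1)^3;  -6*x(1)/x(2)^2 + 1/x(1)^2];
dh   = [x(2); x(1)];             % gradient of h = x1*x2 - 2
dg   = [1; 1];                   % gradient of g = x1 + x2 - 1
dlgn = df - vv*dh - uu*dg;

function P = sqp_penalty(a)
% Exact penalty along the search direction:
% P = f + mu*|h| - sigma*min(0, g), evaluated at X0 + a*dd'.
global mu sigma dd X0
x = X0 + a*dd';
P = 6*x(1)/x(2) + x(2)/x(1)^2 ...
    + mu*abs(x(1)*x(2) - 2) ...
    - sigma*min(0, x(1) + x(2) - 1);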
MATLAB Program Using Fmincon
>> help fmincon

FMINCON Finds a constrained minimum of a function of several variables.
FMINCON solves problems of the form:

    min F(X)  subject to:  A*X <= B, Aeq*X = Beq  (linear constraints)
     X                     C(X) <= 0, Ceq(X) = 0  (nonlinear constraints)
                           LB <= X <= UB
MATLAB Program Using Fmincon-
Continued
X=FMINCON(FUN,X0,A,B,Aeq,Beq,LB,UB,NONLCON) subjects the minimization to the constraints defined in NONLCON. The function NONLCON accepts X and returns the vectors C and Ceq, representing the nonlinear inequalities and equalities respectively. FMINCON minimizes FUN such that C(X)<=0 and Ceq(X)=0.
MATLAB Program Using Fmincon-
Example
X0=[2;1];
A=[-1 -1];
B=-1;                          % g = x1 + x2 - 1 >= 0  ->  -x1 - x2 <= -1
Aeq=[];Beq=[];LB=[];UB=[];
X=fmincon('sqp',X0,A,B,Aeq,Beq,LB,UB,'sqp_nlcon')

function [c,ceq]=sqp_nlcon(x)
% Nonlinear constraints: no inequalities; equality h = x1*x2 - 2 = 0.
c=[];
ceq=x(1)*x(2)-2;
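The objective file 'sqp.m' referenced above is not listed in the slides; presumably it returns the example objective f = 6x₁/x₂ + x₂/x₁², just like the sqp_obj sketch given after the VMCON program.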
