Matlab Tutorial 2
Optimization Toolbox
[X,FVAL]=FMINUNC(FUN,X0,...) returns the value of the objective
function FUN at the solution X.
Examples
FUN can be specified using @:
X = fminunc(@myfun,2)
function F = myfun(x)
F = sin(x) + 3;
If FUN is parameterized, you can use anonymous functions to capture the problem-
dependent parameters. Suppose you want to minimize the objective given in the
function MYFUN, which is parameterized by its second argument A. Here MYFUN is
an M-file function such as
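(The rest of this example is cut off on the slide; the sketch below follows the pattern described above, and the specific names and values are assumptions.)
function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;
To minimize for a particular value of A, assign the value to A, create a one-argument anonymous function that captures it, and pass that to FMINUNC:
a = 3;                                 % hypothetical parameter value
x = fminunc(@(x) myfun(x,a), [1;1])    % starting point chosen for illustration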
Reference page in Help browser
doc fminunc
function fval=myfunc(x)
fval = (x(1)-1)^4+(x(2)-1)^2;
return
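The fminunc call that produces the output below is not shown on the slide; a minimal sketch, assuming a starting point such as x0 = [0;0]:
x0 = [0; 0];                           % assumed initial guess
[x,fval] = fminunc(@myfunc, x0)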
x=
0.9955 1.0000
fval =
4.0847e-010
>>
The optimal solution is x, and the corresponding optimal value is fval.
(Parameters with value [] use the default value for that parameter when
OPTIONS is passed to the optimization function.) It is sufficient to type
only the leading characters that uniquely identify the parameter. Case is
ignored for parameter names.
NOTE: For values that are strings, correct case and the complete string
are required; if an invalid string is provided, the default is used.
MaxFunEvals - Maximum number of function evaluations allowed [ positive integer ]
MaxIter - Maximum number of iterations allowed [ positive scalar ]
TolFun - Termination tolerance on the function value [ positive scalar ]
TolX - Termination tolerance on X [ positive scalar ]
FunValCheck - Check for invalid values, such as NaN or complex, from
user-supplied functions [ {off} | on ]
OutputFcn - Name of installable output function [ function ]
This output function is called by the solver after each
iteration.
Examples
To create options with the default options for fzero
options = optimset('fzero');
To create an options structure with TolFun equal to 1e-3
options = optimset('TolFun',1e-3);
To change the Display value of options to 'iter'
options = optimset(options,'Display','iter');
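The result below was presumably obtained by tightening the termination tolerances before re-running fminunc; a hedged sketch (the exact tolerance values and starting point used on the slide are not shown):
options = optimset('TolFun',1e-15,'TolX',1e-15);   % assumed tolerance values
x0 = [0; 0];                                       % assumed initial guess
[x,fval] = fminunc(@myfunc, x0, options)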
x=
0.9997 1.0000
fval =
5.5791e-015
>>
Because the termination tolerances are much smaller, a more accurate result is obtained (compare with the earlier result, repeated below, which used the default tolerances).
x=
0.9955 1.0000
fval =
4.0847e-010
>>
The result at every iteration is displayed:
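A sketch of the call, assuming the same objective and a starting point of x0 = [0;0] (consistent with the iteration-0 values f(x) = 2 and first-order optimality 4 shown below):
x0 = [0; 0];
options = optimset('Display','iter');
[x,fval] = fminunc(@myfunc, x0, options)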
Iteration      f(x)         Norm of step    First-order optimality    CG-iterations
0 2 4
1 0.197531 1.05409 1.19 1
2 0.0390184 0.222222 0.351 1
3 0.00770735 0.148148 0.104 1
4 0.00152244 0.0987654 0.0308 1
5 0.000300729 0.0658436 0.00913 1
6 5.94032e-005 0.0438957 0.00271 1
7 1.1734e-005 0.0292638 0.000802 1
8 2.31782e-006 0.0195092 0.000238 1
9 4.57842e-007 0.0130061 7.04e-005 1
10 9.04381e-008 0.00867077 2.09e-005 1
Optimization terminated: Relative function value changing by less than
OPTIONS.TolFun.
x=
1.0173 1.0000
fval =
9.0438e-008
>>
2.2 Constrained Optimization
The function covered here is fmincon, which solves nonlinear optimization problems with constraints. It finds a local minimum near the initial point within the region that satisfies the constraints.
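For reference, the general fmincon calling pattern is
x = fmincon(fun,x0,A,b,Aeq,beq,lb,ub,nonlcon,options)
where A,b and Aeq,beq describe linear inequality and equality constraints, lb and ub are lower and upper bounds on x, and nonlcon is a function returning the nonlinear inequality and equality constraints.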
OPTIONS is an argument created with the OPTIMSET function. See OPTIMSET
for details. Used options are Display, TolX, TolFun, TolCon,
DerivativeCheck, Diagnostics, FunValCheck, GradObj, GradConstr,
Hessian, MaxFunEvals, MaxIter, DiffMinChange and DiffMaxChange,
LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG, TypicalX, Hessian,
HessMult, HessPattern. Use the GradObj option to specify that FUN also
returns a second output argument G that is the partial derivatives of
the function df/dX, at the point X. Use the Hessian option to specify
that FUN also returns a third output argument H that is the 2nd
partial derivatives of the function (the Hessian) at the point X. The
Hessian is only used by the large-scale method, not the line-search
method. Use the GradConstr option to specify that NONLCON also returns
third and fourth output arguments GC and GCeq, where GC is the partial
derivatives of the constraint vector of inequalities C, and GCeq is the
partial derivatives of the constraint vector of equalities Ceq. Use
OPTIONS = [] as a place holder if no options are set.
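For example, to indicate that the objective and constraint functions also return their derivatives, the corresponding options are turned on with OPTIMSET:
options = optimset('GradObj','on','GradConstr','on');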
of function evaluations in OUTPUT.funcCount, the algorithm used in
OUTPUT.algorithm, the number of CG iterations (if used) in OUTPUT.cgiterations,
the first-order optimality (if used) in OUTPUT.firstorderopt, and the exit
message in OUTPUT.message.
[X,FVAL,EXITFLAG,OUTPUT,LAMBDA,GRAD,HESSIAN]=FMINCON(FUN,X0,...) returns the
value of the HESSIAN of FUN at the solution X.
Examples
FUN can be specified using @:
X = fmincon(@humps,...)
In this case, F = humps(X) returns the scalar function value F of the HUMPS
function evaluated at X.
If FUN or NONLCON are parameterized, you can use anonymous functions to capture
the problem-dependent parameters. Suppose you want to minimize the objective
given in the function MYFUN, subject to the nonlinear constraint NONLCON, where
these two functions are parameterized by their second argument A and B,
respectively.
Here MYFUN and MYCON are M-file functions such as
function f = myfun(x,a)
f = x(1)^2 + a*x(2)^2;
and
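(The MYCON example is cut off on the slide; the sketch below is a hypothetical constraint function parameterized by its second argument B, following the same pattern as MYFUN.)
function [c,ceq] = mycon(x,b)
c = x(1)^2 + x(2)^2 - b;   % nonlinear inequality c(x) <= 0, parameterized by b
ceq = [];                  % no nonlinear equality constraints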
To optimize for specific values of A and B, first assign the values to these
two parameters. Then create two one-argument anonymous functions that capture
the values of A and B, and call MYFUN and MYCON with two arguments. Finally,
pass these anonymous functions to FMINCON:
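A sketch of that call (the parameter values, starting point, and empty linear-constraint arguments are assumptions):
a = 2; b = 3;                          % hypothetical parameter values
x0 = [1; 1];                           % hypothetical starting point
x = fmincon(@(x) myfun(x,a), x0, [],[],[],[],[],[], @(x) mycon(x,b))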
>>
Example: minimize (x1-1)^4 + (x2-1)^2 (with the gradient supplied) subject to the given constraints.
function [fval, grad] = myfunc(x)
fval = (x(1)-1)^4 + (x(2)-1)^2;
grad = [4*(x(1)-1)^3; 2*(x(2)-1)];   % analytic gradient of fval
return
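The fmincon call that generated the iteration display below is not shown; a sketch with the gradient option enabled (the constraint arguments used on the slide are unknown, so they are left empty here and the starting point is an assumption):
x0 = [0; 0];                                         % hypothetical starting point
options = optimset('GradObj','on','Display','iter');
[x,fval] = fmincon(@myfunc, x0, [],[],[],[],[],[],[], options)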
Iteration      f(x)         Norm of step    First-order optimality    CG-iterations
0 1.4656 1.68
1 0.583368 0.554641 0.391 1
2 0.323028 0.407754 0.0741 1
3 0.2581 0.266541 0.00689 1
4 0.250619 0.147868 0.000721 1
5 0.250253 0.112756 0.00021 1
6 0.250147 0.0996705 5.34e-005 1
7 0.250109 0.0786136 9.17e-006 1
8 0.250101 0.0436172 5.13e-007 1
Optimization terminated: first-order optimality less than OPTIONS.TolFun,
and no negative/zero curvature detected in trust region model.
x=
0.8999 1.5000
fval =
0.2501
>>
minimize (x1-1)^4 + (x2-1)^2
subject to 0.3 <= x1 <= 0.9, 0.5 <= x2 <= 2.9, x1^2 + x2^2 <= 3
The optimization problem above has bound constraints and a nonlinear constraint. To solve it, use the objective function M-file created earlier (name: myfunc.m) and define the nonlinear constraint in the following M-file:
function [c, ceq] = mynonlinear(x)
c = x(1)^2 + x(2)^2 - 3;   % nonlinear inequality: x1^2 + x2^2 <= 3
ceq = [];                  % no nonlinear equality constraints
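A sketch of the corresponding call; the bound values are inferred from the constraints listed in mynonlinear2 further below, and the starting point is an assumption:
lb = [0.3; 0.5];  ub = [0.9; 2.9];     % inferred bounds on x1 and x2
x0 = [0.5; 1];                         % hypothetical starting point
[x,fval] = fmincon(@myfunc, x0, [],[],[],[], lb, ub, @mynonlinear)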
x =
0.9000 1.0000
fval =
1.0000e-004
>>
minimize (x1-1)^4 + (x2-1)^2
subject to the same constraints as above (now written entirely as nonlinear constraints) together with the equality constraint x1 = x2:
function [c, ceq] = mynonlinear2(x)
c(1,1) = x(1) - 0.9;              % x1 <= 0.9  (former upper bound)
c(2,1) = 0.3 - x(1);              % x1 >= 0.3  (former lower bound)
c(3,1) = 0.5 - x(2);              % x2 >= 0.5  (former lower bound)
c(4,1) = x(2) - 2.9;              % x2 <= 2.9  (former upper bound)
c(5,1) = x(1)^2 + x(2)^2 - 3;     % x1^2 + x2^2 <= 3
ceq(1,1) = x(1) - x(2);           % equality: x1 = x2
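A sketch of the corresponding call; since every constraint is now inside mynonlinear2, the linear-constraint and bound arguments are left empty (the starting point is an assumption):
x0 = [0.5; 1];                         % hypothetical starting point
[x,fval] = fmincon(@myfunc, x0, [],[],[],[],[],[], @mynonlinear2)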
Warning: Large-scale (trust region) method does not currently solve this type of problem,
switching to medium-scale (line search).
> In fmincon at 260
x=
0.9000 0.9000
fval =
0.0101
created with the OPTIMSET function. See OPTIMSET for details. Used
options are Display, TolX, TolFun, DerivativeCheck, Diagnostics,
FunValCheck, Jacobian, JacobMult, JacobPattern, LineSearchType,
LevenbergMarquardt, MaxFunEvals, MaxIter, DiffMinChange and
DiffMaxChange, LargeScale, MaxPCGIter, PrecondBandWidth, TolPCG,
TypicalX. Use the Jacobian option to specify that FUN also returns a
second output argument J that is the Jacobian matrix at the point X.
If FUN returns a vector F of m components when X has length n, then J
is an m-by-n matrix where J(i,j) is the partial derivative of F(i)
with respect to x(j). (Note that the Jacobian J is the transpose of the
gradient of F.)
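As an illustration of the Jacobian option, a sketch based on the parameterized system used in the example further below with a = -1 (the function name and values here are assumptions):
function [F,J] = myfunjac(x)
% residuals of the system 2*x1 - x2 = exp(-x1), -x1 + 2*x2 = exp(-x2)
F = [ 2*x(1) - x(2) - exp(-x(1));
     -x(1) + 2*x(2) - exp(-x(2))];
% Jacobian J(i,j) = dF(i)/dx(j)
J = [ 2 + exp(-x(1)),  -1;
      -1,              2 + exp(-x(2))];
With the Jacobian option turned on, FSOLVE uses J instead of finite differences:
x = fsolve(@myfunjac, [-5; -5], optimset('Jacobian','on'))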
[X,FVAL,EXITFLAG,OUTPUT,JACOB]=FSOLVE(FUN,X0,...) returns the
Jacobian of FUN at X.
Examples
FUN can be specified using @:
x = fsolve(@myfun,[2 3 4],optimset('Display','iter'))
function F = myfun(x)
F = sin(x);
If FUN is parameterized, you can use anonymous functions to capture the
problem-dependent parameters. Suppose you want to solve the system of nonlinear
equations given in the function MYFUN, which is parameterized by its second
argument A. Here MYFUN is an M-file function such as
function F = myfun(x,a)
F = [ 2*x(1) - x(2) - exp(a*x(1))
-x(1) + 2*x(2) - exp(a*x(2))];
To solve the system of equations for a specific value of A, first assign the
value to A. Then create a one-argument anonymous function that captures
that value of A and calls MYFUN with two arguments. Finally, pass this anonymous
function to FSOLVE:
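A sketch of that call (the parameter value and starting point are assumptions):
a = -1;                                % hypothetical parameter value
x = fsolve(@(x) myfun(x,a), [-5; -5])  % starting point chosen for illustration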
Reference page in Help browser
doc fsolve
>>
Example: a system of two nonlinear equations in x1 and x2, to be solved with fsolve.
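A generic sketch of how such a system could be set up for fsolve; the equations below are placeholders for illustration only, not the ones from the original slide:
function F = mysystem(x)               % hypothetical M-file
F = [ x(1) + x(2) - 2;                 % placeholder equation 1
      x(1)^2 + x(2)^2 - 4];            % placeholder equation 2
x = fsolve(@mysystem, [1; 1])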