
Optimization Techniques

Optimization Books
Textbooks:

1. G. R. Walsh, Methods of Optimization, John Wiley & Sons.

2. S. S. Rao, Optimization Theory and Applications, Wiley Eastern Limited,
New Delhi.
3. J. C. Spall, Introduction to Stochastic Search and Optimization:
Estimation, Simulation and Control, Wiley, 595 pp., 2003.
4. E. K. P. Chong and S. H. Zak, An Introduction to Optimization, Second
Edition, John Wiley & Sons, New York, 476 pp., 2001.
5. S. S. Rao, Engineering Optimization: Theory and Practice, John Wiley &
Sons, New York, 903 pp., 1996.
6. P. E. Gill, W. Murray and M. H. Wright, Practical Optimization, Elsevier,
401 pp., 2004.
7. D. E. Goldberg, Genetic Algorithms in Search, Optimization and Machine
Learning, Addison-Wesley, Reading, Mass., 1989.
8. S. Boyd and L. Vandenberghe, Convex Optimization, Cambridge
University Press, 2004 (available at
http://www.stanford.edu/~boyd/cvxbook/).
2
COURSE # MATH : 607
COURSE TITLE : OPTIMIZATION TECHNIQUES

(i) INTRODUCTION:
•Definition of optimization problems and techniques.
•Mathematical models.
•Local and global extrema (optima) of functions of one and several
variables, and inflection points.
•Types of optimization techniques.
•Derivation of necessary and sufficient conditions for an extremum of
a function of one and several variables.
•Lagrange's multiplier technique.

3
(ii) UNCONSTRAINED OPTIMIZATION FOR FUNCTIONS:

(a) DESCENT METHODS (LINE SEARCH METHODS):

• Gradient of a function.
• Quadratic forms of a function.
• Hessian matrix.
• Positive and negative definite matrices, Indefinite matrices.
• Steepest-Descent method.
• Newton’s Method.
• Convergence criteria.
• Variable metric method (Davidon-Fletcher-Powell method).

4
(b) DIRECT SEARCH METHODS:
•Unimodal function.
•Simplex Method of Nelder & Mead.
•Method of Hooke & Jeeves.
•Fibonacci method.
•Quadratic interpolation (Powell's method).
(c) UNIVARIATE SEARCH AND POWELL’S METHOD:

(iii) OPTIMIZATION OF FUNCTIONALS:


•Functionals.
•Extrema of a functional.
•Variational Problems.
•Variational Problems in n-dimensions.
•The Euler-Lagrange equation.
•Rayleigh-Ritz Method.

5
1. Introduction
• Optimization is the act of obtaining the best result under given
circumstances.

• Optimization can be defined as the process of finding the conditions
that give the maximum or minimum of a function.

• The optimum seeking methods are also known as mathematical
programming techniques and are generally studied as a part of
operations research.

• Operations research is a branch of mathematics concerned with the
application of scientific methods and techniques to decision making
problems and with establishing the best or optimal solutions.

6
1. Introduction
• Operations research (in the US) or operational research (OR) (in the
UK) is an interdisciplinary branch of mathematics which uses methods
like:

– mathematical modeling
– statistics
– algorithms

to arrive at optimal or good decisions in complex problems which are
concerned with maximizing some objective (profit, faster assembly line,
greater crop yield, higher bandwidth, etc.) or minimizing it (cost, loss,
risk, etc.).

• The ultimate aim of operations research is to obtain, mathematically, the
best possible solution to a problem, one that improves or optimizes the
performance of the system.

7
1. Introduction

8
1. Introduction
Historical development

• Isaac Newton (1642-1727)
(The development of differential calculus methods of optimization)

• Joseph-Louis Lagrange (1736-1813)
(Calculus of variations, minimization of functionals, method of
optimization for constrained problems)

• Augustin-Louis Cauchy (1789-1857)
(Solution by direct substitution, steepest descent method for
unconstrained optimization)

9
1. Introduction
Historical development

• Leonhard Euler (1707-1783)
(Calculus of variations, minimization of functionals)

• Gottfried Wilhelm Leibniz (1646-1716)
(Differential calculus methods of optimization)

10
1. Introduction
Historical development

• George Bernard Dantzig (1914-2005)
(Linear programming and the simplex method (1947))

• Richard Bellman (1920-1984)
(Principle of optimality in dynamic programming problems)

• Harold William Kuhn (1925-2014)
(Necessary and sufficient conditions for the optimal solution of
programming problems, game theory)

11
1. Introduction
Historical development

• Albert William Tucker (1905-1995)
(Necessary and sufficient conditions for the optimal solution of
programming problems, nonlinear programming, game theory; his PhD
student was John Nash)

• John von Neumann (1903-1957)
(Game theory)

12
1. Introduction
• Mathematical optimization problem:

minimize f0 (x)
subject to gi (x) ≤ bi, i = 1, …, m

• f0 : R^n → R: objective function
• x = (x1, …, xn): design variables (unknowns of the problem;
they must be linearly independent)
• gi : R^n → R (i = 1, …, m): inequality constraints

• The problem is a constrained optimization problem
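To make the standard form concrete, here is a minimal numerical sketch (not part of
the original slides). It assumes SciPy is available and uses an illustrative objective
and constraint; note that SciPy's "ineq" constraints follow the convention
fun(x) ≥ 0, so the sign of the constraint is flipped when it is passed to the solver.

import numpy as np
from scipy.optimize import minimize

f0 = lambda x: x[0]**2 + x[1]**2              # objective f0 : R^2 -> R (assumed example)
g1 = lambda x: 1.0 - x[0] - x[1]              # constraint g1(x) <= 0, i.e. x1 + x2 >= 1

# SciPy expects inequality constraints as fun(x) >= 0, hence the sign flip.
cons = [{"type": "ineq", "fun": lambda x: -g1(x)}]
res = minimize(f0, x0=np.array([0.0, 0.0]), constraints=cons)
print(res.x)                                   # approximately [0.5, 0.5]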

13
1. Introduction
• If a point x* corresponds to the minimum value of the function f (x), the
same point also corresponds to the maximum value of the negative of
the function, -f (x). Thus optimization can be taken to mean
minimization since the maximum of a function can be found by seeking
the minimum of the negative of the same function.
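As a small illustration of this point (an assumed example, not from the slides; it uses
SciPy's scalar minimizer), the maximum of f(x) = -(x - 2)^2 + 3 is found by
minimizing -f(x):

from scipy.optimize import minimize_scalar

f = lambda x: -(x - 2)**2 + 3
res = minimize_scalar(lambda x: -f(x))   # minimize the negative of f
print(res.x, -res.fun)                   # x* is about 2, maximum value about 3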

14
1. Introduction
Constraints

• Behaviour constraints: Constraints that represent limitations on the
behaviour or performance of the system are termed behaviour or
functional constraints.

• Side constraints: Constraints that represent physical limitations on
design variables, such as manufacturing limitations.

15
1. Introduction
Constraint Surface
• For illustration purposes, consider an optimization problem with only
inequality constraints gj (X) ≤ 0. The set of values of X that satisfy
the equation gj (X) =0 forms a hypersurface in the design space and
is called a constraint surface.

16
1. Introduction
Constraint Surface
• Note that this is an (n-1)-dimensional subspace, where n is the
number of design variables. The constraint surface divides the
design space into two regions: one in which gj (X) < 0 and the other
in which gj (X) > 0.

17
1. Introduction
Constraint Surface
• Thus the points lying on the hypersurface will satisfy the constraint
gj (X) critically whereas the points lying in the region where gj (X) >0
are infeasible or unacceptable, and the points lying in the region
where gj (X) < 0 are feasible or acceptable.

18
1. Introduction
Constraint Surface
• In the figure below, a hypothetical two-dimensional design space is
depicted, where the infeasible region is indicated by hatched lines. A
design point that lies on one or more constraint surfaces is called a
bound point, and the associated constraint is called an active
constraint.

19
1. Introduction
Constraint Surface
• Design points that do not lie on any constraint surface are known as
free points.

20
1. Introduction
Constraint Surface

Depending on whether a
particular design point belongs to
the acceptable or unacceptable
regions, it can be identified as one
of the following four types:

• Free and acceptable point

• Free and unacceptable point

• Bound and acceptable point

• Bound and unacceptable point
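The four types above can be checked programmatically. The sketch below (an
illustration with assumed constraints, not from the original slides) classifies a design
point against constraints gj (X) ≤ 0, using a small tolerance to decide whether the
point lies on a constraint surface:

import numpy as np

def classify(x, constraints, tol=1e-9):
    g = np.array([gj(x) for gj in constraints])
    bound = np.any(np.abs(g) <= tol)          # point lies on a constraint surface
    acceptable = np.all(g <= tol)             # every constraint is satisfied
    return ("bound" if bound else "free") + " and " + \
           ("acceptable" if acceptable else "unacceptable")

cons = [lambda x: x[0] + x[1] - 2.0,          # g1(X) = x1 + x2 - 2 <= 0 (assumed)
        lambda x: -x[0]]                      # g2(X) = -x1 <= 0 (assumed)
print(classify(np.array([1.0, 1.0]), cons))   # bound and acceptable
print(classify(np.array([0.5, 0.5]), cons))   # free and acceptable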

21
1. Introduction
• The conventional design procedures aim at finding an acceptable or
adequate design which merely satisfies the functional and other
requirements of the problem.

• In general, there will be more than one acceptable design, and the
purpose of optimization is to choose the best one of the many
acceptable designs available.

• Thus a criterion has to be chosen for comparing the different
alternative acceptable designs and for selecting the best one.

• The criterion with respect to which the design is optimized, when
expressed as a function of the design variables, is known as the
objective function.

22
1. Introduction
• In civil engineering, the objective is usually taken as the
minimization of the cost.

• In mechanical engineering, the maximization of the mechanical
efficiency is the obvious choice of an objective function.

• In aerospace structural design problems, the objective function for
minimization is generally taken as weight.

• In some situations, there may be more than one criterion to be
satisfied simultaneously. An optimization problem involving
multiple objective functions is known as a multi-objective
programming problem.
23
1. Introduction
• With multiple objectives there arises a possibility of conflict,
and one simple way to handle the problem is to construct
an overall objective function as a linear combination of the
conflicting multiple objective functions.

• Thus, if f1 (X) and f2 (X) denote two objective functions,
construct a new (overall) objective function for optimization as:

f (X) = α1 f1 (X) + α2 f2 (X)

where α1 and α2 are constants whose values indicate the
relative importance of one objective function to the other.
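A minimal sketch of this weighted-sum approach (assumed example objectives and
weights, using SciPy) follows; the minimizer of the combined function is a
compromise between the two objectives:

from scipy.optimize import minimize

f1 = lambda x: (x[0] - 1)**2 + x[1]**2         # first objective (assumed example)
f2 = lambda x: x[0]**2 + (x[1] - 1)**2         # second objective (assumed example)
alpha1, alpha2 = 0.5, 0.5                      # relative importance weights

f = lambda x: alpha1 * f1(x) + alpha2 * f2(x)  # overall objective
res = minimize(f, x0=[0.0, 0.0])
print(res.x)                                   # compromise solution, about [0.5, 0.5]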

24
1. Introduction
• The locus of all points satisfying f (X) = c = constant forms a
hypersurface in the design space, and for each value of c there
corresponds a different member of a family of surfaces. These surfaces,
called objective function surfaces, are shown in a hypothetical
two-dimensional design space in the figure below.
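Since the original figure is not reproduced here, the following sketch (assuming
Matplotlib and an illustrative objective) draws a few such objective function
surfaces, i.e. the level curves f (X) = c in a two-dimensional design space:

import numpy as np
import matplotlib.pyplot as plt

f = lambda x1, x2: (x1 - 1)**2 + (x2 - 2)**2                  # example objective
x1, x2 = np.meshgrid(np.linspace(-1, 3, 200), np.linspace(0, 4, 200))
cs = plt.contour(x1, x2, f(x1, x2), levels=[0.5, 1, 2, 4])    # one curve per value of c
plt.clabel(cs)
plt.xlabel("x1"); plt.ylabel("x2")
plt.show()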

25
1. Introduction
• Once the objective function surfaces are drawn along with the constraint
surfaces, the optimum point can be determined without much difficulty.
• But the main problem is that as the number of design variables exceeds
two or three, the constraint and objective function surfaces become
complex even for visualization and the problem has to be solved purely
as a mathematical problem.

26
Examples

Design of civil engineering structures:
• variables: width and height of member cross-sections
• constraints: limit stresses, maximum and minimum dimensions
• objective: minimum cost or minimum weight

Analysis of statistical data and building empirical models from
measurements:
• variables: model parameters
• constraints: physical upper and lower bounds for model parameters
• objective: prediction error

27
Classification of optimization problems

Classification based on:

• Constraints
– Constrained optimization problem
– Unconstrained optimization problem

• Nature of the design variables


– Static optimization problems
– Dynamic optimization problems

28
Classification of optimization problems

Classification based on:

• Physical structure of the problem


– Optimal control problems
– Non-optimal control problems

• Nature of the equations involved


– Nonlinear programming problem
– Geometric programming problem
– Quadratic programming problem
– Linear programming problem

29
Classification of optimization problems

Classification based on:

• Permissible values of the design variables


– Integer programming problems
– Real valued programming problems

• Deterministic nature of the variables


– Stochastic programming problem
– Deterministic programming problem

30
Classification of optimization problems

Classification based on:

• Separability of the functions


– Separable programming problems
– Non-separable programming problems

• Number of the objective functions


– Single objective programming problem
– Multiobjective programming problem

31
Multiobjective Programming Problem
• A multiobjective programming problem can be stated as follows:

Find X which minimizes f1 (X), f2 (X), …, fk (X)
subject to gj (X) ≤ 0, j = 1, 2, …, m

where f1 , f2 , …, fk denote the objective functions to be minimized
simultaneously.

32
Review of mathematics
Concepts from linear algebra:
Positive definiteness
• Test 1: A matrix A will be positive definite if all its
eigenvalues are positive; that is, all the values of λ that satisfy
the determinantal equation

|A − λI| = 0

should be positive. Similarly, the matrix A will be negative
definite if all its eigenvalues are negative.

33
Review of mathematics
Positive definiteness
• Test 2: Another test that can be used to find the positive definiteness
of a matrix A of order n involves evaluation of the determinants
A1, A2, …, An of the leading principal submatrices of A (A1 = a11,
A2 = the determinant of the upper-left 2 × 2 submatrix, and so on,
up to An = det A).

• The matrix A will be positive definite if and only if all the values A1,
A2, A3, …, An are positive.
• The matrix A will be negative definite if and only if the sign of Aj is
(−1)^j for j = 1, 2, …, n.
• If some of the Aj are positive and the remaining Aj are zero, the matrix
A will be positive semidefinite.
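Both tests can be carried out numerically. The sketch below (assuming NumPy; the
matrix is an illustrative example, and eigvalsh is used because the example matrix is
symmetric) checks the eigenvalue test and the leading-principal-minor test:

import numpy as np

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

# Test 1: all eigenvalues positive  =>  positive definite.
print(np.all(np.linalg.eigvalsh(A) > 0))                     # True

# Test 2: all leading principal minors A1, A2, ..., An positive.
minors = [np.linalg.det(A[:k, :k]) for k in range(1, A.shape[0] + 1)]
print(minors, all(m > 0 for m in minors))                    # approx. [2, 3, 4], True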
34
Review of mathematics

Negative definiteness

• Equivalently, a matrix is negative definite if all its
eigenvalues are negative.

• It is positive semidefinite if all its eigenvalues are
greater than or equal to zero.

• It is negative semidefinite if all its eigenvalues are
less than or equal to zero.

35
Review of mathematics
Concepts from linear algebra:

Nonsingular matrix: The determinant of the matrix is not zero.

Rank: The rank of a matrix A is the order of the largest
nonsingular square submatrix of A, that is, the largest
submatrix with a determinant other than zero.
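A quick numerical illustration of these two concepts (assuming NumPy; the matrix is
an assumed example with one dependent row):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0],     # second row = 2 x first row, so A is singular
              [1.0, 0.0, 1.0]])

print(np.linalg.det(A))            # approximately 0  ->  singular
print(np.linalg.matrix_rank(A))    # 2  ->  largest nonsingular square submatrix is 2 x 2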

36
Review of mathematics
Solutions of a linear problem
Minimize f (x) = c^T x
Subject to g(x): Ax = b
Side constraints: x ≥ 0

• The existence of a solution to this problem depends on the
rows of A.

• If the rows of A are linearly independent, then there is a unique
solution to the system of equations.

• If det(A) is zero, that is, matrix A is singular, there are either
no solutions or infinitely many solutions.
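This linear problem can be solved numerically. The sketch below (assuming SciPy; the
data c, A, b are an assumed example) uses SciPy's linear programming routine with
equality constraints and the side constraints x ≥ 0:

import numpy as np
from scipy.optimize import linprog

c = np.array([1.0, 2.0, 0.0])                  # assumed cost vector
A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 0.0]])               # assumed equality-constraint matrix
b = np.array([3.0, 1.0])

res = linprog(c, A_eq=A, b_eq=b, bounds=[(0, None)] * 3)
print(res.status, res.x)                       # status 0 means an optimum was found; x is about [1, 0, 2]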
37
Review of mathematics
Suppose

The new matrix A* is called the augmented matrix: the column b is appended to A.
According to the theorems of linear algebra:

• If the augmented matrix A* and the matrix of coefficients A have the same rank
r which is less than the number of design variables n: (r < n), then there are
many solutions.

• If the augmented matrix A* and the matrix of coefficients A do not have the
same rank, a solution does not exist.

• If the augmented matrix A* and the matrix of coefficients A have the same rank
r=n, where the number of constraints is equal to the number of design variables,
then there is a unique solution.
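These three cases can be checked by computing the ranks directly. A minimal sketch
(assuming NumPy; the system shown is an assumed example) follows:

import numpy as np

def classify_system(A, b):
    n = A.shape[1]                                       # number of design variables
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r != r_aug:
        return "no solution"
    return "unique solution" if r == n else "infinitely many solutions"

A = np.array([[1.0,  1.0, 1.0],
              [1.0, -1.0, 2.0]])
b = np.array([2.0, 1.0])
print(classify_system(A, b))                             # infinitely many solutions (r = 2 < n = 3)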
38
Review of mathematics
In the example

The largest square submatrix is a 2 × 2 matrix (since m = 2
and m < n). Taking the submatrix which includes the first
two columns of A, the determinant has a value of 2 and is
therefore nonsingular. Thus the rank of A is 2 (r = 2). The
same columns appear in A*, making its rank also 2. Since
r < n, infinitely many solutions exist.

39
Review of mathematics
In the example

One way to determine the solutions is to assign arbitrary values to (n − r)
of the variables and use them to determine values for the remaining r
variables. The value n − r is often identified as the degree of freedom of
the system of equations.

In this example, the degree of freedom is 1 (i.e., 3 − 2). For instance, x3 can
be assigned a value of 1, in which case x1 = 0.5 and x2 = 1.5.

40
Homework
What is the solution of the system given below?
Hint: Determine the rank of the matrix of the coefficients and
the augmented matrix.

41
2. Classical optimization techniques
Single variable optimization

• Useful in finding the optimum solutions of continuous and differentiable
functions.

• These methods are analytical and make use of the techniques of
differential calculus in locating the optimum points.

• Since some of the practical problems involve objective functions that are
not continuous and/or differentiable, the classical optimization techniques
have limited scope in practical applications.

42
2. Classical optimization techniques
Single variable optimization

• A function of one variable f (x) has a relative or local minimum at
x = x* if f (x*) ≤ f (x* + h) for all sufficiently small positive and
negative values of h.

• A point x* is called a relative or local maximum if f (x*) ≥ f (x* + h)
for all values of h sufficiently close to zero.

(Figure: a curve with several local minima, one of which is the global
minimum.)
43
2. Classical optimization techniques
Single variable optimization
• A function f (x) is said to have a global or absolute minimum
at x* if f (x*) ≤ f (x) for all x, and not just for all x close to
x*, in the domain over which f (x) is defined.

• Similarly, a point x* will be a global maximum of f (x) if
f (x*) ≥ f (x) for all x in the domain.

44
Necessary condition

• If a function f (x) is defined in the interval a ≤ x ≤ b and has a
relative minimum at x = x*, where a < x* < b, and if the derivative
df (x)/dx = f'(x) exists as a finite number at x = x*, then f'(x*) = 0.

• The theorem does not say that the function necessarily will have a
minimum or maximum at every point where the derivative is zero.
For example, f'(x) = 0 at x = 0 for the function shown in the figure,
yet this point is neither a minimum nor a maximum. In general, a
point x* at which f'(x*) = 0 is called a stationary point.
45
Necessary condition

• The theorem does not say what happens if a minimum or a
maximum occurs at a point x* where the derivative fails to exist.
For example, in the figure the difference quotient
[f (x* + h) - f (x*)]/h tends to different limits, m+ or m-,
depending on whether h approaches zero through positive
or negative values, respectively. Unless the numbers m+ and m-
are equal, the derivative f'(x*) does not exist. If f'(x*) does not
exist, the theorem is not applicable.
46
Sufficient condition

• Let f'(x*) = f''(x*) = … = f^(n-1)(x*) = 0, but f^(n)(x*) ≠ 0. Then f (x*) is
– A minimum value of f (x) if f^(n)(x*) > 0 and n is even
– A maximum value of f (x) if f^(n)(x*) < 0 and n is even
– Neither a minimum nor a maximum if n is odd
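This test is easy to automate with symbolic differentiation. The sketch below
(assuming SymPy is available; the function x**4 is just an illustrative stationary-point
example) finds the first nonvanishing derivative at a stationary point x* and applies
the rule above:

import sympy as sp

def classify_stationary_point(f, x, x_star, max_order=10):
    # Find the lowest n >= 2 with f^(n)(x*) != 0 and apply the sufficient condition.
    for n in range(2, max_order + 1):
        d = sp.diff(f, x, n).subs(x, x_star)
        if d != 0:
            if n % 2 == 1:
                return "neither a minimum nor a maximum (n odd)"
            return "relative minimum" if d > 0 else "relative maximum"
    return "inconclusive"

x = sp.symbols('x')
print(classify_stationary_point(x**4, x, 0))   # relative minimum (n = 4, f''''(0) = 24 > 0)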

47
Example
Determine the maximum and minimum values of the function:

f (x) = 12x^5 - 45x^4 + 40x^3 + 5

Solution: Since f'(x) = 60(x^4 - 3x^3 + 2x^2) = 60x^2(x - 1)(x - 2),
f'(x) = 0 at x = 0, x = 1, and x = 2.

The second derivative is:

f''(x) = 60(4x^3 - 9x^2 + 4x)

At x = 1, f''(x) = -60 and hence x = 1 is a relative maximum. Therefore,
fmax = f (x = 1) = 12

At x = 2, f''(x) = 240 and hence x = 2 is a relative minimum. Therefore,
fmin = f (x = 2) = -11
48
Example

Solution cont'd:
At x = 0, f''(x) = 0, so we must investigate the next derivative:

f'''(x) = 60(12x^2 - 18x + 4) = 240 at x = 0.

Since f'''(x) ≠ 0 at x = 0, and n = 3 is odd, x = 0 is neither a maximum nor a
minimum; it is an inflection point.
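For completeness, the same conclusions can be reproduced with SymPy (assumed
available), reusing the classify_stationary_point helper sketched earlier:

import sympy as sp

x = sp.symbols('x')
f = 12*x**5 - 45*x**4 + 40*x**3 + 5
for xs in sp.solve(sp.diff(f, x), x):          # stationary points: 0, 1, 2
    print(xs, classify_stationary_point(f, x, xs), f.subs(x, xs))
# Prints the three stationary points with their classification and function values:
# x = 0 inflection point (f = 5), x = 1 relative maximum (f = 12),
# x = 2 relative minimum (f = -11).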

49
