
ENGINEERING OPTIMIZATION

Lecture note:- Chapter 1 & 2

Tekalign Lemma (Asst. Professor)

NOVEMBER, 2023
CHAPTER ONE

INTRODUCTION TO OPTIMIZATION

What is optimization; historical development; engineering applications of
optimization; optimization problems: vectors; constraints; objective function;
objective function surface.

CHAPTER TWO

CLASSICAL OPTIMIZATION TECHNIQUES

Single variable optimization; multivariable optimization with equality constraints:
solution by direct substitution; solution by the method of constrained variation;
solution by the method of Lagrange multipliers. Multivariable optimization
with inequality constraints: Kuhn-Tucker conditions; constraint qualification.


CHAPTER ONE
INTRODUCTION TO OPTIMIZATION
What is optimization?
Historical Development
Engineering application of optimization
Components of optimization problems:
vectors; constraints; objective function; objective
function surface
What is optimization?
 The process of obtaining the best (optimal) solution
from the feasible region.
 The selection of the best element from the given
alternatives.
 The process of finding the maximum or minimum
value among the given alternatives.
 A tool for making decisions and analyzing physical
systems.
Generally, optimization is the process of maximizing or
minimizing a given function subject to some
constraints.
Example 1: mode of transportation
Out of the given alternatives, which one is the best? The process
of choosing the best among the given alternatives is called
optimization.
Bus ==== less money ----- more time
Taxi ==== more money ----- less time
Train ==== less money ----- relatively less time
(Bus + train + taxi) ==== ???? ------ ?????
So here, what is your objective?
Maximize saving or minimize time
Example 2 Obtaining the global/local maxima or minima
Example 3: Product mix
 If a company makes different types of products, a decision
must be taken on how much of each to produce, in order to
maximize profit or minimize production cost.
Product mix for an apparel manufacturing firm

| Products (T-shirts and pants) | Threads (Meter) | Fabrics (Gram) | Cutting (Min) | Sewing (Min) | Finishing (Min) | Labor (Birr) | Overheads (Birr) | Profit/product (Birr) |
|---|---|---|---|---|---|---|---|---|
| Polo T-shirts | 5 | 24 | 20 | 25 | 25 | 50 | 56 | 42 |
| Basic T-shirts | 13 | 26 | 15 | 10 | 15 | 10 | 30 | 36 |
| Mock Neck T-shirts | 12 | 25 | 21 | 50 | 20 | 25 | 25 | 34 |
| Singlets | 15 | 20 | 10 | 20 | 20 | 39 | 25 | 31 |
| Available resource | 1500 | 2000 | 30000 | 50000 | 80 | 124 | | |

Table 1: Resources needed per unit of product.


 The ultimate goal of all such decisions is either to
minimize the effort required or to maximize the
desired benefit.
 Since the effort required or the benefit desired in any
practical situation can be expressed as a function of
certain decision variables, optimization can be defined
as the process of finding the conditions that give the
maximum or minimum value of that function.
OPTIMIZATION PROBLEM
Basic components of an optimization problem
It consists of three components:-
A. objective function
B. unknowns or variables
C. constraints
A. OBJECTIVE FUNCTION

An objective function expresses the main aim of the
model, which is either to be minimized or maximized.
For example, in a manufacturing process, the aim
may be to maximize the profit or minimize the cost.
B. UNKNOWNS OR DECISION VARIABLES

 A decision variable is an unknown in an optimization problem:
 an unknown quantity that is to be estimated as an output
of the optimization problem;
 decision variables control the value of the objective function.
 Example: in the manufacturing problem, the variables may
include the amounts of different resources used or the
time spent on each activity.
C. CONSTRAINTS

 A constraint is a limitation that affects the objective function:
 a restriction on the decision variables;
 something that limits or controls what you can do.
 Examples: production time, production cost, amounts
of available raw materials (inventory level), total
working hours (labor hours).
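The three components can be made concrete in code. Below is a minimal sketch (the numbers and names are hypothetical, not taken from Table 1) that encodes decision variables, an objective function, and a constraint, and finds the best feasible point by exhaustive search over an integer grid:

```python
# Two products; x1, x2 = units produced (the decision variables).
profit = (40, 30)          # objective coefficients: profit per unit (Birr)
labor = (2, 1)             # labor hours needed per unit (constraint data)
labor_available = 100      # limitation: total labor hours per day

def objective(x1, x2):
    """The quantity to be maximized."""
    return profit[0] * x1 + profit[1] * x2

def feasible(x1, x2):
    """The constraint: labor used must not exceed labor available."""
    return labor[0] * x1 + labor[1] * x2 <= labor_available

# Exhaustive search over a small integer grid: keep the best feasible point.
best = max((objective(x1, x2), x1, x2)
           for x1 in range(101) for x2 in range(101)
           if feasible(x1, x2))
```

For a problem this small, brute force is enough; the later chapters develop analytical methods that avoid enumerating the feasible region.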
……………optimization problems continued

An optimization or a mathematical programming
problem can be stated as follows:

Find X = (x1, x2, ..., xn) which minimizes f(X)
subject to the constraints
gj(X) ≤ 0, j = 1, 2, ..., m
lj(X) = 0, j = 1, 2, ..., p

Where:
:- X is an n-dimensional vector called the design vector,
:- x1, ..., xn are the n decision variables,
:- f(X) is the objective function,
:- gj(X) and lj(X) are known as the inequality and equality
constraints, respectively.
……………optimization problems continued
If p + m = 0 the problem is called an unconstrained
optimization problem.
……………optimization problems continued
 A problem where the objective function is to be
maximized (instead of minimized) can also be handled
with this standard problem statement, since
maximizing a function f(X) is the same as
minimizing the negative of f(X).
 Similarly, a ‘≥’ type of inequality constraint can be
treated by reversing the sign of the constraint function
to form a ‘≤’ type of inequality.
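The equivalence of maximization and minimization of the negative can be demonstrated in a few lines (an illustrative sketch; the function f below is my own example):

```python
# Maximizing f is the same as minimizing -f: both formulations
# pick out the same point x = 3 for this simple concave function.

def f(x):
    return -(x - 3) ** 2 + 10   # maximum value 10, attained at x = 3

xs = [i / 10 for i in range(-100, 101)]   # coarse grid on [-10, 10]

x_max = max(xs, key=f)                     # direct maximization of f
x_min_neg = min(xs, key=lambda x: -f(x))   # minimization of -f
```

Both searches return the same point, which is why optimization texts state all problems in minimization form without loss of generality.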
……………optimization problems continued
Classification of Optimization Problems
Based on the nature of the equations for the objective
function and the constraints, optimization problems can
be classified as linear, nonlinear, geometric, and quadratic
programming problems.
HISTORICAL DEVELOPMENT
1. Traditional Optimization Methods
The existence of optimization methods can be traced to
the days of Newton, Lagrange, and Cauchy.
Calculus methods, Nonlinear programming, Geometric
programming, Quadratic programming, Linear
programming, Dynamic programming, Integer
programming, Multi objective programming, Network
methods: CPM and PERT, Game theory.
Historical development

 Isaac Newton (1642-1727)


(The development of differential calculus
methods of optimization)

 Joseph-Louis Lagrange (1736-1813)


(Calculus of variations, minimization of functionals,
method of optimization for constrained problems)

 Augustin-Louis Cauchy (1789-1857)


(Solution by direct substitution, steepest
descent method for unconstrained optimization)
Historical development…..cont’d

 Leonhard Euler (1707-1783)

(Calculus of variations, minimization of

functionals)

 Gottfried Leibnitz (1646-1716)

(Differential calculus methods

of optimization)
Historical development….cont’d

George Bernard Dantzig (1914-2005)


(Linear programming and Simplex method (1947))

Richard Bellman (1920-1984)


(Principle of optimality in dynamic
programming problems)

Harold William Kuhn (1925-2014)


(Necessary and sufficient conditions for the optimal solution of
programming problems, game theory)
Historical development…….cont’d

Albert William Tucker (1905-1995)


(Necessary and sufficient conditions
for the optimal solution of programming
problems, nonlinear programming, game
theory: his PhD student
was John Nash)

John von Neumann (1903-1957)


(game theory)
Historical development…….cont’d

| Inventors | Years | Contributions |
|---|---|---|
| Isaac Newton | 1642-1727 | Differential calculus methods of optimization |
| Joseph-Louis Lagrange | 1736-1813 | Calculus of variations, minimization of functionals, method of optimization for constrained problems |
| Augustin-Louis Cauchy | 1789-1857 | Solution by direct substitution, steepest descent method for unconstrained optimization |
| Leonhard Euler | 1707-1783 | Calculus of variations, minimization of functionals |
| Gottfried Leibnitz | 1646-1716 | Differential calculus methods of optimization |
| George Bernard Dantzig | 1914-2005 | Linear programming and the simplex method |
| Richard Bellman | 1920-1984 | Principle of optimality in dynamic programming problems |
| Harold William Kuhn | 1925-2014 | Necessary and sufficient conditions for the optimal solution of programming problems, game theory |
| Albert William Tucker | 1905-1995 | Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming, game theory |
| John von Neumann | 1903-1957 | Game theory |
2. Modern Methods of Optimization
 The modern optimization methods, also sometimes
called nontraditional optimization methods, have
emerged as powerful and popular methods for solving
complex engineering optimization problems in recent
years.
 These methods include:- Genetic Algorithms, Simulated
Annealing, Particle Swarm Optimization, Ant Colony
Optimization, Neural Network-Based Optimization, and
Fuzzy Optimization.
Trends of modern optimization methods

[Chart: timeline of modern optimization methods — Genetic Algorithms, Simulated Annealing, Neural Network Methods, Fuzzy Optimization, Ant Colony Optimization, and Particle Swarm Optimization — introduced between 1975 and 1995.]
ENGINEERING APPLICATION OF OPTIMIZATION

 Some typical applications from different engineering
disciplines indicate the wide scope of the subject:
manufacturing, production, inventory control,
transportation, scheduling, networks, finance,
engineering, mechanics, economics, control
engineering, marketing, policy modeling.
Optimization in its broadest sense can be applied to solve any
engineering problem, e.g.:
Design of aircraft for minimum weight;
Optimal (minimum-time) trajectories for space missions;
Minimum-weight design of structures for earthquake loading;
Optimal design of electric networks;
Optimal production planning, resource allocation, and scheduling;
Shortest-route problems;
Design of optimum pipeline networks;
Minimum processing time in production systems;
Optimal control.
Case 1. Application of Optimization
Case 2. Application of Optimization
CHAPTER TWO
CLASSICAL OPTIMIZATION TECHNIQUES
Contents:-
Single variable optimization.
 Multivariable optimization with equality constraints:
solution by direct substitution; solution by the method of
constrained variation; solution by the method of Lagrange
multipliers.
 Multivariable optimization with inequality
constraints: Kuhn-Tucker conditions.
Introduction
 In this chapter, we discuss the classical
optimization techniques, with the necessary and sufficient
conditions for obtaining the optimum solution of:
 unconstrained single-variable
and
 multivariable optimization problems.
We will also discuss constrained multivariable
problems with:
 equality and
 inequality constraints.
What are classical optimization techniques?
 These are optimization techniques used to obtain the optimum
solution of optimization problems with nonlinear, continuous,
and differentiable functions.
 These techniques are analytical in nature and yield the
maximum and minimum points of unconstrained and
constrained continuous objective functions.
2.1 SINGLE VARIABLE OPTIMIZATION
 A single-variable optimization problem is a
mathematical programming problem in which only
one variable is involved.
Let f(x) be a continuous function of a single variable x
defined on the interval [a, b].

(A) A function f(x) is said to have a local (relative) minimum at
x = x0 if f(x0) ≤ f(x0 + h) for all sufficiently small positive and
negative values of h.
(B) A function f(x) is said to have a local (relative) maximum at
x = x0 if f(x0) ≥ f(x0 + h) for all sufficiently small positive and
negative values of h.
[Figure: a curve with several local minima and the global minimum marked.]

(C) A function f(x) is said to have Global (absolute)


maximum at x*, if f(x*) ≥ f(x) for all x in the domain.

(D) A function f(x) is said to have Global (absolute)


minimum at x*, if f(x*) ≤ f(x) for all x in the domain
over which f(x) is defined.
 Fig. 1 and 2 shows the difference between the Global and
local optimum points.

Fig. 1

Fig. 2
NECESSARY AND SUFFICIENT CONDITIONS
FOR OPTIMALITY
1. Necessary Condition
 If a function f(x) is defined in the interval a ≤ x ≤ b
and has a relative minimum at x = x*,
where a ≤ x* ≤ b, and
if the derivative df/dx exists as a finite number at x = x*, then
 f′(x*) = 0
2. Sufficient Condition
Suppose that at a point x* the first derivative is zero, and let
n be the order of the first nonzero higher-order derivative
f^(n)(x*). Then:
if f^(n)(x*) > 0 and n is even, the function f(x) has a local
minimum at the point x*;
if f^(n)(x*) < 0 and n is even, the function f(x) has a local
maximum at the point x*;
but if n is odd, the function has neither a maximum
nor a minimum: x* is an inflection point.
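For polynomials, whose derivatives can be computed exactly, this test can be sketched in code (the helper names are mine, not from the notes): repeatedly differentiate until the first nonzero derivative at the stationary point, then apply the even/odd rule above.

```python
def deriv(coeffs):
    """Differentiate a polynomial given as coefficients [a0, a1, a2, ...]."""
    return [i * c for i, c in enumerate(coeffs)][1:]

def evaluate(coeffs, x):
    return sum(c * x ** i for i, c in enumerate(coeffs))

def classify(coeffs, x0, tol=1e-9):
    """Classify a stationary point x0 by the first nonzero higher derivative."""
    d = deriv(coeffs)
    assert abs(evaluate(d, x0)) < tol, "x0 is not a stationary point"
    n = 1
    while True:
        d = deriv(d)
        n += 1
        if not d:
            return "flat"          # all higher derivatives vanish
        val = evaluate(d, x0)
        if abs(val) > tol:
            break
    if n % 2 == 1:
        return "inflection point"  # n odd: neither maximum nor minimum
    return "local minimum" if val > 0 else "local maximum"
```

For f(x) = x^3 at x = 0, the second derivative vanishes but f‴(0) = 6 ≠ 0, so n = 3 is odd and the test reports an inflection point.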
Example 1: single-variable optimization problem
Find the extreme points of the function
f(x) = 3x^4 - 4x^3 - 24x^2 + 48x + 15

1. Necessary condition
f′(x) = 0
f′(x) = 12x^3 - 12x^2 - 48x + 48
= 12(x^3 - x^2 - 4x + 4)
= 12(x - 1)(x^2 - 4)
= 12(x - 1)(x + 2)(x - 2)
x = 1, x = -2, x = 2 (extreme/stationary points)
2. Sufficient condition
f″(x) = 36x^2 - 24x - 48
Evaluate f″(x) at x = 1, x = -2, x = 2:
At x = 1, f″(1) = -36 (< 0, n = 2 is even)
At x = 2, f″(2) = 48 (> 0, n = 2 is even)
At x = -2, f″(-2) = 144 (> 0, n = 2 is even)
From this we can say that:
 x = 1 is a relative (local) maximum;
 x = 2 and x = -2 are relative (local) minima.
Therefore:
 at x = 1, f(x) has a relative (local) maximum and its
value is 38;
 at x = 2, f(x) has a relative minimum and its value is
31;
 at x = -2, f(x) has a relative minimum and its value is
-97.
Example 2
Determine the maximum and minimum values of the function:-
f(x) = 12x^5 - 45x^4 + 40x^3 + 5

Step 1. Necessary condition
f′(x) = 0
f′(x) = 60x^4 - 180x^3 + 120x^2 = 60x^2(x - 1)(x - 2) = 0
x = 0, x = 1, x = 2
Step 2. Sufficient condition: find f″(x)
f″(x) = 240x^3 - 540x^2 + 240x
Evaluate the second derivative at x = 0, x = 1, x = 2:
At x = 0, f″(0) = 240(0)^3 - 540(0)^2 + 240(0) = 0
At x = 1, f″(1) = 240(1)^3 - 540(1)^2 + 240(1) = -60
At x = 2, f″(2) = 240(2)^3 - 540(2)^2 + 240(2) = 240
At x = 0 the second derivative vanishes, so we check the next
derivative: f‴(x) = 720x^2 - 1080x + 240 gives f‴(0) = 240 ≠ 0,
so n = 3 is odd. Hence:
x = 0 is an inflection point;
x = 1 is a relative maximum (f″ < 0, n even);
x = 2 is a relative minimum (f″ > 0, n even).
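A quick numeric check of the derivative values above (an illustrative sketch; the function names are mine):

```python
def fp(x):   # f'(x) = 60x^4 - 180x^3 + 120x^2
    return 60 * x**4 - 180 * x**3 + 120 * x**2

def fpp(x):  # f''(x) = 240x^3 - 540x^2 + 240x
    return 240 * x**3 - 540 * x**2 + 240 * x

def fppp(x): # f'''(x) = 720x^2 - 1080x + 240
    return 720 * x**2 - 1080 * x + 240

# All three candidates are stationary points of f.
stationary = [x for x in (0, 1, 2) if fp(x) == 0]
# Second-derivative values at each stationary point.
values = {x: fpp(x) for x in stationary}
# At x = 0, f''(0) = 0, so the test falls through to f'''(0) != 0 (n = 3, odd).
```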
Exercise 1
Assume the following relationships for the revenue and cost
functions. Find at what level of output x profit is maximum,
where x is measured in tons per week.

R(x) = 1000x - 2x^2
C(x) = x^3 - 59x^2 + 1315x + 5000
P(x) = R(x) - C(x)
= (1000x - 2x^2) - (x^3 - 59x^2 + 1315x + 5000)
= -x^3 + 57x^2 - 315x - 5000
1. Necessary condition
dP/dx = -3x^2 + 114x - 315 = 0 ………….(1)
Dividing by -3: x^2 - 38x + 105 = 0
(x - 3)(x - 35) = 0, so x = 3 or x = 35
2. Sufficiency condition
d^2P/dx^2 = -6x + 114
At x = 3: -6(3) + 114 = 96 > 0, so P is minimum at x = 3
At x = 35: -6(35) + 114 = -96 < 0, so P is maximum at x = 35
Hence profit is maximum at x = 35 tons/week
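The conclusion can be confirmed by brute force over integer output levels (an illustrative check, not part of the original exercise):

```python
def profit(x):
    """P(x) = R(x) - C(x) = -x^3 + 57x^2 - 315x - 5000."""
    return -x**3 + 57 * x**2 - 315 * x - 5000

# Search output levels from 0 to 59 tons/week for the most profitable one.
best_x = max(range(0, 60), key=profit)
```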
Example 3
Exercise 2
A company has started selling a new type of smart phone at the price
of $110 − 0.05x, where x is the number of smart phones
manufactured per day. The parts for each smart phone cost $50 and
the labor and overhead for running the plant cost $6000 per day.
How many smart phones should the company manufacture and sell
per day to maximize profit?
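One possible setup for this exercise (my own formulation; the notes leave the solution to the reader): with x phones per day, revenue is x(110 − 0.05x), parts cost 50x, and overhead is 6000, so P(x) = 60x − 0.05x^2 − 6000 and P′(x) = 60 − 0.1x = 0 gives x = 600. A grid search agrees:

```python
def profit(x):
    price = 110 - 0.05 * x            # unit price depends on daily volume
    return x * price - 50 * x - 6000  # revenue - parts cost - overhead

# Search daily production levels from 0 to 2000 phones.
best_x = max(range(0, 2001), key=profit)
```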
2.2 MULTIVARIABLE OPTIMIZATION PROBLEM
What is a multivariable optimization problem?
 An optimization problem involving two or more decision variables.
Classification of multivariable optimization problem
Optimization techniques
 Solution by direct substitution
Solution by the method of constrained variation
Solution by method of Lagrange multipliers
 solution by Karush Kuhn Tucker (KKT) condition
Classification of multivariable optimization problem
1. MULTIVARIABLE OPTIMIZATION
PROBLEM
WITH EQUALITY CONSTRAINTS
A. SOLUTION BY DIRECT SUBSTITUTION
 In this method, the equality constraints are used to express some
variables in terms of the others, and these expressions are
substituted into the objective function.
 The problem then reduces to an unconstrained optimization problem
and can be solved by unconstrained optimization methods.

NECESSARY CONDITION
If f(X) has an extreme point (maximum or minimum) at X = X*
and if the first partial derivatives of f(X) exist at X*, then

∂f/∂x1 (X*) = ∂f/∂x2 (X*) = … = ∂f/∂xn (X*) = 0

SUFFICIENT CONDITION
A sufficient condition for a stationary point X* to be an extreme
point is that the matrix of second partial derivatives (Hessian
matrix) of f(X) evaluated at X* is:
1. positive definite when X* is a relative minimum point;
2. negative definite when X* is a relative maximum point.
EXAMPLE 1
Minimize f(x, y, z) = x^2 + y^2 + z^2
subject to x + y + 2z = 12
From the constraint, z = (12 - x - y)/2. Substituting into the
objective gives the new unconstrained objective function
F(x, y) = x^2 + y^2 + (1/4)(12 - x - y)^2
1. Necessary condition
∂F/∂x = 0: 2x = (1/2)(12 - x - y)
∂F/∂y = 0: 2y = (1/2)(12 - x - y)
Hence x = y, and solving gives
(x, y, z) = (2, 2, 4)
2. Sufficiency condition
 Evaluate the second partial derivatives at the extreme point:
∂^2F/∂x^2 = 5/2
∂^2F/∂y^2 = 5/2
∂^2F/∂x∂y = ∂^2F/∂y∂x = 1/2
 Form the Hessian matrix and evaluate it at the point:
H = [5/2 1/2; 1/2 5/2]
Since H is positive definite, (2, 2, 4) is a relative minimum,
with f = 4 + 4 + 16 = 24.
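The reduced unconstrained problem can be checked numerically (an illustrative sketch): after eliminating z, a coarse grid search over F(x, y) lands on the stationary point (2, 2) found above, and z = (12 − 2 − 2)/2 = 4 follows from the constraint.

```python
def F(x, y):
    """Objective after substituting z = (12 - x - y)/2."""
    return x**2 + y**2 + (12 - x - y) ** 2 / 4

# Grid of half-integer points on [-10, 10] x [-10, 10].
grid = [i / 2 for i in range(-20, 21)]
best = min((F(x, y), x, y) for x in grid for y in grid)
```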
EXAMPLE 2
Example 3
B. SOLUTION BY THE METHOD OF LAGRANGE MULTIPLIERS
 In mathematical optimization, the method of Lagrange
multipliers is a strategy for finding the local maxima and
minima of a function subject to equality constraints.

Optimize Z = f(X), X = (x1, x2, …, xn)
s.t. gj(X) = 0, j = 1, 2, …, m (m < n)
X ≥ 0
Now define the Lagrange function:

L(X, λ) = f(X) + Σj λj gj(X) ……Eq. 1

λ1, λ2, …, λm are known as Lagrange’s undetermined multipliers.

1. The necessary conditions for an extremum of L are:

∂L/∂xi = 0, i = 1, 2, …, n ……Eq. 2

∂L/∂λj = gj(X) = 0, j = 1, 2, …, m ……Eq. 3

Solving equations 2 and 3, we obtain the stationary point X*
and the multipliers λ1*, …, λm*.
The sufficient condition for the function to have an extremum at
the point X* involves the values of the roots obtained from a
determinantal (bordered Hessian) equation evaluated at X*.
Generally, to obtain the extreme point and the optimum value,
we can follow these steps: form L, set its partial derivatives
with respect to the xi and the λj to zero, and solve the
resulting system.
Example 1
Use the method of Lagrange multipliers to find the extrema of
the following function subject to the given constraint.
f(x, y) = xy
s.t. x + y = 6

L(x, y, λ) = f(x, y) + λ g(x, y)

L(x, y, λ) = xy + λ(6 - x - y)

∂L/∂x = y - λ = 0

∂L/∂y = x - λ = 0

∂L/∂λ = 6 - x - y = 0, so x* = y* = 3 (with λ = 3), giving f = 9.
Example 2
A factory manufactures HONDA CITY and HONDA CIVIC cars. Determine the
optimal numbers of HONDA CITY and HONDA CIVIC cars produced if the
factory capacity is 90 cars per day and the cost of manufacturing is C(x, y) = 6x^2
+ 12y^2, where x is the number of HONDA CITY cars and y is the number of
HONDA CIVIC cars produced.
Solution
Decision variables: x = number of Honda City cars produced,
y = number of Honda Civic cars produced.
The objective is cost minimization:
Minimize f = 6x^2 + 12y^2
What is the constraint?
The total number of cars produced per day:
x + y = 90
L(x, y, λ) = f(x, y) + λ g(x, y) = 6x^2 + 12y^2 + λ(90 - x - y)

∂L/∂x = 12x - λ = 0

∂L/∂y = 24y - λ = 0

∂L/∂λ = 90 - x - y = 0, giving λ = 720, x = 60, y = 30
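The stationarity system for this example is linear and can be solved directly; a small sketch reproducing the solution:

```python
# From 12x - λ = 0 and 24y - λ = 0: x = λ/12 and y = λ/24.
# Substituting into x + y = 90 gives λ/12 + λ/24 = λ/8 = 90.
lam = 90 * 8                 # λ = 720
x, y = lam / 12, lam / 24    # x = 60, y = 30
cost = 6 * x**2 + 12 * y**2  # minimum manufacturing cost
```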


Exercise 1
Use the method of Lagrange multipliers to find the extrema of the
following function subject to the given constraints.
Minimize f(x) = (1/2)(x1^2 + x2^2 + x3^2)
s.t. g1(x) = x1 - x2 = 0
g2(x) = x1 + x2 + x3 = 0
2. MULTIVARIABLE OPTIMIZATION WITH
INEQUALITY CONSTRAINTS:
Kuhn-Tucker conditions
If the point x* ∈ R^n is a minimizer for the problem below, it
must satisfy the following necessary conditions for some
λ* ∈ R^m.
Consider the following optimization problem:
Minimize f(X)
subject to
gj(X) ≤ 0 for j = 1, 2, …, m
where the decision variable vector is
X = [x1, x2, …, xn]
The Kuhn-Tucker conditions for X* = [x1*, x2*, …, xn*] to be a local
minimum are:

∂f/∂xi + Σj λj ∂gj/∂xi = 0, i = 1, 2, …, n
λj gj(X*) = 0, j = 1, 2, …, m
gj(X*) ≤ 0, j = 1, 2, …, m
λj ≥ 0, j = 1, 2, …, m

 In the case of minimization problems, if the constraints are of the
form gj(X) ≥ 0, then the λj have to be non-positive.
 On the other hand, if the problem is one of maximization with
the constraints in the form gj(X) ≥ 0, then the λj have to be non-
negative.
Example 1
A manufacturing firm producing small refrigerators has entered into
a contract to supply 50 refrigerators at the end of the first month, 50
at the end of the second month, and 50 at the end of the third. The
cost of producing x refrigerators in any month is given by
$(x2+1000). The firm can produce more refrigerators in any month
and carry them to a subsequent month. However, it costs $20 per
unit for any refrigerator carried over from one month to the next.
Assuming that there is no initial inventory, determine the number of
refrigerators to be produced in each month to minimize the total
cost.
Solution:

Let x1, x2, x3 represent the numbers of refrigerators produced in
the first, second and third month, respectively. The total cost to
be minimized is given by:

Total cost = production cost + holding cost

f(x1, x2, x3) = (x1^2 + 1000) + (x2^2 + 1000) + (x3^2 + 1000) + 20(x1 - 50) + 20(x1 + x2 - 100)
= x1^2 + x2^2 + x3^2 + 40x1 + 20x2 (the constant terms cancel)
Solution cont’d:
The constraints (deliveries must be met) can be stated as:

g1(x1, x2, x3) = x1 - 50 ≥ 0
g2(x1, x2, x3) = x1 + x2 - 100 ≥ 0
g3(x1, x2, x3) = x1 + x2 + x3 - 150 ≥ 0

The first Kuhn-Tucker condition is given by:

∂f/∂xi + λ1 ∂g1/∂xi + λ2 ∂g2/∂xi + λ3 ∂g3/∂xi = 0, i = 1, 2, 3

that is,
2x1 + 40 + λ1 + λ2 + λ3 = 0 (E1)
2x2 + 20 + λ2 + λ3 = 0 (E2)
2x3 + λ3 = 0 (E3)
Solution cont’d

The second Kuhn-Tucker condition is given by:

λj gj = 0, j = 1, 2, 3
that is,
λ1(x1 - 50) = 0 (E4)
λ2(x1 + x2 - 100) = 0 (E5)
λ3(x1 + x2 + x3 - 150) = 0 (E6)

The third Kuhn-Tucker condition (feasibility) is given by:

gj ≥ 0, j = 1, 2, 3
that is,
(x1 - 50) ≥ 0 (E7)
(x1 + x2 - 100) ≥ 0 (E8)
(x1 + x2 + x3 - 150) ≥ 0 (E9)
The fourth Kuhn-Tucker condition (the constraints are of the ‘≥’
type in a minimization problem) is given by:
λj ≤ 0, j = 1, 2, 3
that is,
λ1 ≤ 0 (E10)
λ2 ≤ 0 (E11)
λ3 ≤ 0 (E12)
Solution cont’d:
The solution of Eqs. (E1) to (E12) can be found in several ways.
We proceed by first noting that, according to (E4), either
λ1 = 0 or x1 = 50.
Using this information, we investigate the following cases to
identify the optimum solution of the problem:

• Case I: λ1 = 0
• Case II: x1 = 50

Case I: λ1 = 0
Equations (E1) to (E3) give

x3 = -λ3/2
x2 = -10 - λ2/2 - λ3/2 (E13)
x1 = -20 - λ2/2 - λ3/2

Substituting Eqs. (E13) into Eqs. (E5) and (E6) gives:

λ2(-130 - λ2 - λ3) = 0
λ3(-180 - λ2 - (3/2)λ3) = 0 (E14)
2
The four possible solutions of Eqs. (E14) are:
1. λ2 = 0, -180 - λ2 - (3/2)λ3 = 0. These equations, along with Eqs.
(E13), yield the solution:

λ2 = 0, λ3 = -120, x1 = 40, x2 = 50, x3 = 60

This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7)
and (E8), and hence cannot be optimum.
The second possible solution of Eqs. (E14) is:
2. λ3 = 0, -130 - λ2 - λ3 = 0. The solution of these equations leads
to:
λ2 = -130, λ3 = 0, x1 = 45, x2 = 55, x3 = 0
This solution satisfies Eqs. (E10) to (E12) but violates Eqs. (E7)
and (E9), and hence cannot be optimum.
The third possible solution of Eqs. (E14) is:

3. λ2 = 0, λ3 = 0. Equations (E13) give:

x1 = -20, x2 = -10, x3 = 0

This solution satisfies Eqs. (E10) to (E12) but violates the
constraints Eqs. (E7) to (E9), and hence cannot be optimum.

The fourth possible solution of Eqs. (E14) is:

4. -130 - λ2 - λ3 = 0, -180 - λ2 - (3/2)λ3 = 0. The solution of these
equations and Eqs. (E13) gives:

λ2 = -30, λ3 = -100, x1 = 45, x2 = 55, x3 = 50

This solution satisfies Eqs. (E10) to (E12) but violates the
constraint Eq. (E7), and hence cannot be optimum.
Case II: x1 = 50. In this case, Eqs. (E1) to (E3) give:
λ3 = -2x3
λ2 = -20 - 2x2 - λ3 = -20 - 2x2 + 2x3 (E15)
λ1 = -40 - 2x1 - λ2 - λ3 = -120 + 2x2
Substitution of Eqs. (E15) into Eqs.
λ2(x1 + x2 - 100) = 0 (E5)
λ3(x1 + x2 + x3 - 150) = 0 (E6)

gives:
(-20 - 2x2 + 2x3)(x1 + x2 - 100) = 0
(-2x3)(x1 + x2 + x3 - 150) = 0 (E16)
Solution cont’d:
Case II: x1 = 50. Once again, there are four possible solutions to
Eqs. (E16), as indicated below:

1. -20 - 2x2 + 2x3 = 0, x1 + x2 + x3 - 150 = 0: The solution of
these equations yields:

x1 = 50, x2 = 45, x3 = 55

This solution can be seen to violate Eq. (E8), which says:

(x1 + x2 - 100) ≥ 0 (E8)

2. -20 - 2x2 + 2x3 = 0, -2x3 = 0: The solution of these equations
yields:

x1 = 50, x2 = -10, x3 = 0

This solution can be seen to violate Eqs. (E8) and (E9), which say:

(x1 + x2 - 100) ≥ 0 (E8)
(x1 + x2 + x3 - 150) ≥ 0 (E9)
3. x1 + x2 - 100 = 0, -2x3 = 0: The solution of these equations yields:
x1 = 50, x2 = 50, x3 = 0
This solution can be seen to violate Eq. (E9), which says:

(x1 + x2 + x3 - 150) ≥ 0 (E9)
4. x1 + x2 - 100 = 0, x1 + x2 + x3 - 150 = 0: The solution of these
equations yields:
x1 = 50, x2 = 50, x3 = 50
This solution can be seen to satisfy all the constraints, Eqs. (E7) to (E9),
which say:
(x1 - 50) ≥ 0 (E7)
(x1 + x2 - 100) ≥ 0 (E8)
(x1 + x2 + x3 - 150) ≥ 0 (E9)

The values of λ1, λ2, and λ3 corresponding to this solution can be
obtained from
λ3 = -2x3
λ2 = -20 - 2x2 + 2x3 (E15)
λ1 = -120 + 2x2

as λ1 = -20, λ2 = -20, λ3 = -100.

Since these values of λi satisfy the requirements
λ1 ≤ 0 (E10)
λ2 ≤ 0 (E11)
λ3 ≤ 0 (E12)
this solution can be identified as the optimum solution. Thus

x1* = 50, x2* = 50, x3* = 50
