
Plant Design Term Report


Optimization Technique

OPTIMIZATION:
Reactors are often the critical stage in a polymerization process. Increasingly, the design and operation of chemical processes are required to comply with safety, cost, and environmental concerns. All of these necessitate accurate modeling and optimization of reactors and processes. To that end, this chapter discusses optimization.

Introduction:
Historical development:
• Isaac Newton (1642-1727) (Development of differential calculus methods of optimization)
• Joseph-Louis Lagrange (1736-1813) (Calculus of variations, minimization of functionals, method of optimization for constrained problems)
• Augustin-Louis Cauchy (1789-1857) (Solution by direct substitution, steepest descent method for unconstrained optimization)
• Leonhard Euler (1707-1783) (Calculus of variations, minimization of functionals)
• Gottfried Leibniz (1646-1716) (Differential calculus methods of optimization)
• George Bernard Dantzig (1914-2005) (Linear programming and the simplex method (1947))
• Richard Bellman (1920-1984) (Principle of optimality in dynamic programming problems)
• Harold William Kuhn (1925-2014) (Necessary and sufficient conditions for the optimal solution of programming problems, game theory)
• Albert William Tucker (1905-1995) (Necessary and sufficient conditions for the optimal solution of programming problems, nonlinear programming, game theory; his PhD student was John Nash)
• John von Neumann (1903-1957) (Game theory)

How to Define a Model for Optimization?

We need to quantify the various elements of our model -- decision variables, constraints, and the objective function -- and their relationships.


 Decision Variables

Start with the decision variables. They usually measure the amounts of
resources, such as money, to be allocated to some purpose, or the level of some
activity, such as the number of products to be manufactured, the number of
pounds or gallons of a chemical to be blended, etc.

 Objective Function

Once we've defined the decision variables, the next step is to define the objective, which is normally some function of those variables -- for example, profit, production rate, conversion, yield, or various costs. We usually maximize quantities such as profit, production rate, conversion, and yield, and minimize the various costs.

 Constraints

You'd be finished at this point, if the model did not require any constraints. For
example, in a curve-fitting application, the objective is to minimize the sum of
squared differences between each actual data value, or observation, and the
corresponding predicted value. This sum has a minimum value of zero, which
occurs only when the actual and predicted values are all identical. If you asked
a solver to minimize this objective function, you would not need any
constraints.
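The curve-fitting case can be made concrete with a minimal Python sketch (the straight-line model and the data points are illustrative, not from the text): fitting y = m*x + k by minimizing the sum of squared differences, using the closed-form normal equations. Because the data here lie exactly on a line, the objective reaches its minimum value of zero.

```python
# Curve fitting as unconstrained optimization: minimize the sum of squared
# differences between observations and a straight-line prediction m*x + k.
# Closed-form normal equations for the best-fit line (illustrative data).
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # exactly y = 2x + 1, so the minimum is zero

n = len(xs)
sx, sy = sum(xs), sum(ys)
sxx = sum(x * x for x in xs)
sxy = sum(x * y for x, y in zip(xs, ys))
m = (n * sxy - sx * sy) / (n * sxx - sx * sx)   # best-fit slope
k = (sy - m * sx) / n                           # best-fit intercept

sse = sum((y - (m * x + k)) ** 2 for x, y in zip(xs, ys))
print(m, k, sse)  # 2.0 1.0 0.0 -- actual and predicted values coincide
```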

In most models, however, constraints play a key role in determining what values
can be assumed by the decision variables, and what sort of objective value can
be attained.

Constraints reflect real-world limits on production capacity, market demand, available funds, and so on. To define a constraint, you first compute a value based on the decision variables. Then you place a limit (<=, = or >=) on this computed value.

General Constraints. For example, if A1:A5 contains the percentage of funds to be invested in each of 5 stocks, you might use B1 to calculate =SUM(A1:A5), and then define a constraint B1 = 1 to say that the percentages allocated must sum to 100%.
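The same constraint can be mirrored outside a spreadsheet. A small Python sketch (the allocation figures are hypothetical): compute a value from the decision variables, then test it against the limit, just as B1 = 1 tests the value of =SUM(A1:A5).

```python
# Hypothetical portfolio allocation: fractions invested in 5 stocks.
allocation = [0.20, 0.30, 0.10, 0.25, 0.15]

def satisfies_sum_constraint(fractions, tol=1e-9):
    """Analogue of the spreadsheet rule B1 = 1 with B1 = SUM(A1:A5):
    the computed value sum(fractions) must equal 1 within tolerance."""
    return abs(sum(fractions) - 1.0) <= tol

print(satisfies_sum_constraint(allocation))  # True: fractions sum to 100%
```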


Bounds on Variables. Of course, you can also place a limit directly on a decision variable, such as A1 <= 100. Upper and lower bounds on the variables are efficiently handled by most optimizers and are very useful in many problems.

 Policy Constraints

Some constraints are determined by policies that you or your organization may
set. For example, in an investment portfolio optimization, you might have a
limit on the maximum percentage of funds to be invested in any one stock, or
one industry group.

 Physical Constraints

Many constraints are determined by the physical nature of the problem. For
example, if your decision variables measure the number of products of different
types that you plan to manufacture, producing a negative number of products
would make no sense. This type of non-negativity constraint is very common.
Although it may be obvious to you, constraints such as A1 ≥ 0 must be stated
explicitly, because the solver has no other way to know that negative values are
disallowed.

As another example of a physically determined constraint, suppose you are modeling product shipments in and out of a warehouse over time. You'll probably need a balance constraint, which specifies that, in each time period, the beginning inventory plus the products received minus the products shipped out equals the ending inventory -- and hence the beginning inventory for the next period.
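A balance constraint of this kind can be checked with a few lines of Python (the shipment figures are made up for illustration): each period's ending inventory, computed as beginning + received - shipped, becomes the next period's beginning inventory.

```python
# Hypothetical warehouse balance: begin + received - shipped = end,
# and each period's ending inventory is the next period's beginning.
received = [120, 80, 100]
shipped = [90, 110, 70]
begin = 50                      # starting inventory

inventory = [begin]
for r, s in zip(received, shipped):
    inventory.append(inventory[-1] + r - s)   # balance constraint per period

print(inventory)  # [50, 80, 50, 80]
```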

 Integer Constraints

Advanced optimization software also allows you to specify constraints that require decision variables to assume only integer (whole-number) values at the solution. If you are scheduling a fleet of trucks, for example, a solution that called for a fraction of a truck to travel a certain route would not be useful. Integer constraints normally can be applied only to decision variables, not to quantities calculated from them.


A particularly useful type of integer constraint specifies that a variable must have an integer value with a lower bound of 0 and an upper bound of 1. This forces the variable to be either 0 or 1 -- nothing in between -- at the final solution. Hence, it can be used to model "yes/no" decisions. For example, you might use a 0-1 or binary integer variable to represent a decision on whether to lease a new machine. Your model might then calculate a fixed lease cost per month, but also a lower cost per item processed with the machine, if it is used.
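A sketch of such a yes/no model in Python (all costs are invented for illustration): the binary variable switches the fixed lease cost on or off, and the per-item processing cost changes with it.

```python
# Hypothetical 0-1 "lease a machine?" decision. All costs are illustrative.
FIXED_LEASE = 500.0      # fixed lease cost per month
COST_WITH = 2.0          # per-item cost if the machine is leased
COST_WITHOUT = 4.5       # per-item cost without the machine

def monthly_cost(lease, items):
    """lease is a binary decision variable: 1 = lease, 0 = don't."""
    assert lease in (0, 1)
    return lease * (FIXED_LEASE + COST_WITH * items) + (1 - lease) * COST_WITHOUT * items

items = 300
best = min((monthly_cost(y, items), y) for y in (0, 1))
print(best)  # (1100.0, 1) -- leasing beats not leasing (1350.0) at this volume
```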

 Feasible Solution

A solution (set of values for the decision variables) for which all of the
constraints in the model are satisfied is called a feasible solution. Most solution
algorithms first try to find a feasible solution, and then try to improve it by
finding another feasible solution that increases the value of the objective
function (when maximizing, or decreases it when minimizing).

 Optimal Solution

An optimal solution is a feasible solution where the objective function reaches a maximum (or minimum) value.

 Globally Optimal Solution

A globally optimal solution is one where there are no other feasible solutions
with better objective function values.

 Locally Optimal Solution

A locally optimal solution is one where there are no other feasible solutions "in
the vicinity" with better objective function values -- you can picture this as a
point at the top of a "peak" or at the bottom of a "valley" which may be formed
by the objective function and/or the constraints. The Solver is designed to find
optimal solutions -- ideally the global optimum -- but this is not always
possible.
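The peak-and-valley picture can be illustrated numerically. In this Python sketch (the function and step size are chosen purely for illustration), plain gradient descent started from different points settles into different local minima of a non-convex function; only one of them is the global minimum.

```python
# Local vs global optima: gradient descent on the non-convex function
# f(x) = x**4 - 3*x**2 + x stops at whichever of its two minima ("valleys")
# lies in the basin of the starting point.
def f(x):
    return x**4 - 3 * x**2 + x

def df(x):
    return 4 * x**3 - 6 * x + 1   # derivative of f

def descend(x, rate=0.01, steps=5000):
    for _ in range(steps):
        x -= rate * df(x)         # step downhill along the gradient
    return x

left, right = descend(-2.0), descend(2.0)
print(round(left, 3), round(right, 3))   # two different stationary points
print(f(left) < f(right))               # True: the left valley is the global minimum
```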

Whether we can find a globally optimal solution, a locally optimal solution, or merely a good solution depends on the nature of the mathematical relationships between the variables, the objective function, and the constraints (and the solution algorithm used).

What Makes a Model Hard to Solve?

Solver models can be easy or hard to solve. "Hard" models may require a great
deal of CPU time and random-access memory (RAM) to solve -- if they can be
solved at all. The good news is that, with today's very fast PCs and advanced
optimization software from Frontline Systems, a very broad range of models
can be solved.

Three major factors interact to determine how difficult it will be to find an optimal solution to a solver model:

a) The mathematical relationships between the objective and constraints, and the decision variables
b) The size of the model (number of decision variables and constraints) and its sparsity
c) The use of integer variables -- memory and solution time may rise exponentially as you add more integer variables

Mathematical Model:

a) Linear programming problems
b) Smooth nonlinear optimization problems
c) Global optimization problems
d) Nonsmooth optimization problems

The types of mathematical relationships in a model (for example, linear or nonlinear, and especially convex or non-convex) determine how hard it is to solve, and the confidence you can have that the solution is truly optimal. These relationships also have a direct bearing on the maximum size of models that can be realistically solved.

A model that consists of mostly linear relationships but a few nonlinear relationships generally must be solved with more "expensive" nonlinear optimization methods. The same is true of models with mostly linear or smooth nonlinear relationships but a few nonsmooth relationships. Hence, a single IF or CHOOSE function that depends on the variables can turn a simple linear model into an extremely difficult or even unsolvable nonsmooth model.

A few advanced solvers can break down a problem into linear, smooth nonlinear, and nonsmooth parts and apply the most appropriate method to each part -- but in general, you should try to keep the mathematical relationships in a model as simple (i.e. as close to linear) as possible.

Below are some general statements about solution times on modern Windows PCs (with, say, 2GHz CPUs and 512MB of RAM), for problems without integer variables.

Linear Programming Problems -- where all of the relationships are linear, and
hence convex -- can be solved up to hundreds of thousands of variables and
constraints, given enough memory and time. Models with tens of thousands of
variables and constraints can be solved in minutes (sometimes in seconds) on
modern PCs. You can have very high confidence that the solutions obtained are
globally optimal.

Smooth Nonlinear Optimization Problems -- where all of the relationships are smooth functions (i.e. functions whose derivatives are continuous) -- can be solved up to tens of thousands of variables and constraints, given enough memory and time. Models with thousands of variables and constraints can often be solved in minutes on modern PCs.

If the problem is convex, you can have very high confidence that the solutions
obtained are globally optimal. If the problem is non-convex, you can have
reasonable confidence that the solutions obtained are locally optimal, but not
globally optimal.

Global Optimization Problems -- smooth nonlinear, non-convex optimization problems where a globally optimal solution is sought -- can often be solved up to a few hundred variables and constraints, given enough memory and time. Depending on the solution method, you can have reasonably high confidence that the solutions obtained are globally optimal.

Nonsmooth Optimization Problems -- where the relationships may include functions like IF, CHOOSE, LOOKUP and the like -- can be solved up to scores, and occasionally up to hundreds, of variables and constraints, given enough memory and time. You can only have confidence that the solutions obtained are "good" (i.e. better than many alternative solutions) -- they are not guaranteed to be globally or even locally optimal.

Model Size

The size of a solver model is measured by the number of decision variables and
the number of constraints it contains.

Most optimization software algorithms have a practical upper limit on the size
of models they can handle, due to either memory requirements or numerical
stability.

Sparsity

Most large solver models are sparse. This means that, while there may be
thousands of decision variables and thousands of constraints, the typical
constraint depends on only a small subset of the variables. For example, a
linear programming model with 10,000 variables and 10,000 constraints could
have a "coefficient matrix" with 10,000 x 10,000 = 100 million elements, but in
practice, the number of nonzero elements is likely to be closer to 2 million.
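Sparsity is easy to see in code. A toy Python sketch (the coefficient matrix is invented): storing only the nonzero coefficients of each constraint row takes far fewer numbers than the full dense matrix, which is why solvers exploit it.

```python
# Sparsity in practice: store only the nonzero coefficients of each
# constraint instead of a full dense row. Illustrative 4-variable model.
dense_rows = [
    [1.0, 0.0, 0.0, 2.0],
    [0.0, 3.0, 0.0, 0.0],
    [0.0, 0.0, 1.0, 1.0],
]
# Map column index -> coefficient, keeping nonzeros only.
sparse_rows = [{j: a for j, a in enumerate(row) if a != 0.0} for row in dense_rows]

dense_count = sum(len(row) for row in dense_rows)    # 12 stored numbers
sparse_count = sum(len(row) for row in sparse_rows)  # only 5 nonzeros
print(dense_count, sparse_count)  # 12 5
```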

Integer Variables

Integer variables (i.e. decision variables that are constrained to have integer
values at the solution) in a model make that model far more difficult to solve.
Memory and solution time may rise exponentially as you add more integer
variables. Even with highly sophisticated algorithms and modern
supercomputers, there are models of just a few hundred integer variables that
have never been solved to optimality.

This is because many combinations of specific integer values for the variables
must be tested, and each test requires the solution of a "normal" linear or
nonlinear optimization problem. The number of combinations can rise
exponentially with the size of the problem. "Branch and bound" and "branch
and cut" strategies help to cut down on this exponential growth, but even with
these strategies, solutions for even moderately large mixed-integer
programming (MIP) problems can require a great deal of time.
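The combinatorial growth is easy to demonstrate. This Python sketch (a toy knapsack problem with invented data) tests every 0/1 combination of three integer variables; with n such variables there are 2**n combinations, which is exactly the explosion that branch-and-bound strategies try to tame.

```python
from itertools import product

# Why integer variables are hard: a tiny knapsack solved by testing every
# 0/1 combination -- the number of combinations doubles with each variable.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

best_value, best_choice = 0, None
for choice in product((0, 1), repeat=len(values)):      # 2**n combinations
    weight = sum(w * c for w, c in zip(weights, choice))
    value = sum(v * c for v, c in zip(values, choice))
    if weight <= capacity and value > best_value:       # feasible and better
        best_value, best_choice = value, choice

print(best_value, best_choice)  # 220 (0, 1, 1)
```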


Optimization Solution Methodologies:

Solving and obtaining the optimum values is the last phase in design optimization. A number of general methods for solving programmed optimization problems relate the various relations and constraints that describe the process to their effect on the objective function. These methods examine the effect of the variables on the objective function using analytic, graphical, and algorithmic techniques based on the principles of optimization and programming.

Procedure with One Variable:

There are many cases in which the factor being minimized (or maximized) is an analytic function of a single variable. The procedure then becomes very simple. Consider the example in which it is necessary to obtain the insulation thickness that gives the least total cost. The primary variable involved is the thickness of the insulation, and relationships can be developed showing how this variable affects all costs.

Cost data for the purchase and installation of the insulation are available, and the length of service life can be estimated. Therefore, a relationship giving the effect of insulation thickness on fixed charges can be developed. Similarly, a relationship showing the cost of heat lost as a function of insulation thickness can be obtained from data on the thermal properties of steam, properties of the insulation, and heat-transfer considerations. All other costs, such as maintenance and plant expenses, can be assumed to be independent of the insulation thickness.
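Under these assumptions the insulation problem reduces to one variable. A Python sketch (the cost model C(t) = A*t + B/t and both coefficients are hypothetical, not taken from the referenced cost data): fixed charges rise with thickness t while the heat-loss cost falls, and setting dC/dt = 0 gives the optimum thickness.

```python
import math

# One-variable optimization sketch for insulation thickness t (illustrative
# cost model, not real data): fixed charges grow linearly with thickness
# while the cost of lost heat falls off roughly as 1/t, so
#   C(t) = A*t + B/t,  with minimum at t* = sqrt(B/A) where dC/dt = 0.
A = 40.0    # hypothetical fixed-charge coefficient
B = 250.0   # hypothetical heat-loss coefficient

def total_cost(t):
    return A * t + B / t

t_opt = math.sqrt(B / A)          # from dC/dt = A - B/t**2 = 0
print(t_opt, total_cost(t_opt))   # 2.5 200.0
```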

Procedure with Two or More Variables:


When two or more independent variables affect the objective function, the procedure for determining the optimum conditions may become rather tedious; however, the general approach is the same as when only one variable is involved. Consider the case in which the total cost for a given operation is a function of the two independent variables x and y, or

CT = f(x, y)


By analyzing all the costs involved and reducing the resulting relationships to a simple
form, the following function might be found.

CT = ax + b/(xy) + cy + d

where a, b, c, and d are positive constants.

Graphical Procedure: The relationship among CT, x, and y could be shown as a curved surface in a three-dimensional plot, with a minimum value of CT occurring at the optimum values of x and y. However, the use of a three-dimensional plot is not practical for most engineering determinations.

Analytical Procedure:

The optimum value of x is found at the point where (dCT/dx) at y = yi is equal to zero. Similarly, the same result would be obtained if y were used as the abscissa instead of x. If this were done, the optimum value of y (that is, yi) would be found at the point where (dCT/dy) at x = xi is equal to zero. This immediately indicates an analytical procedure for determining optimum values.
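For the cost function CT = ax + b/(xy) + cy + d given above, the analytical procedure can be carried out in closed form: setting both partial derivatives to zero gives x* = (bc/a^2)^(1/3) and y* = (ab/c^2)^(1/3). The Python sketch below (with arbitrarily chosen positive constants) verifies this against a brute-force grid search.

```python
# Analytical optimum of CT = a*x + b/(x*y) + c*y + d for a, b, c, d > 0.
# From dCT/dx = a - b/(x**2 * y) = 0 and dCT/dy = c - b/(x * y**2) = 0:
#   x* = (b*c/a**2)**(1/3),  y* = (a*b/c**2)**(1/3),  CT* = 3*(a*b*c)**(1/3) + d.
a, b, c, d = 2.0, 16.0, 1.0, 5.0   # illustrative constants; any positive values work

def CT(x, y):
    return a * x + b / (x * y) + c * y + d

x_opt = (b * c / a**2) ** (1 / 3)
y_opt = (a * b / c**2) ** (1 / 3)

# Sanity check: both partial derivatives vanish at (x*, y*).
dCT_dx = a - b / (x_opt**2 * y_opt)
dCT_dy = c - b / (x_opt * y_opt**2)
print(round(dCT_dx, 9), round(dCT_dy, 9))   # both ~0

# Brute-force grid search agrees with the analytical minimum.
grid = [0.1 * k for k in range(1, 101)]
best = min(CT(x, y) for x in grid for y in grid)
print(abs(best - CT(x_opt, y_opt)) < 1e-2)  # True
```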

Algorithm Solutions to Optimization Problems:


An algorithm is simply an objective mathematical method for solving a problem and is purely mechanical, so that it can be programmed for a computer. Solution of programming problems generally requires a series of actions that are iterated to a solution, based on a programming method and various numerical calculation methods. The input to the algorithm can be manual, where the relations governing the design behavior are added to the algorithm. It can also be integrated or set to interface with rigorous computer simulations that describe the design.
Use of algorithms thus requires the selection of an appropriate programming method, methods, or combination of methods as the basic principles for the algorithm's function. It also requires the provision of the objective functions and constraints, either as directly provided relations or from computer simulation models. The algorithm then uses the basic programming approach to solve the optimization problem set by the objective function and constraints.


Linear Programming Algorithm Development:

To develop this form of approach for linear programming solutions, the set of linear inequalities which forms the constraints is written in the form of "equal to or less than" equations as

a11 v1 + a12 v2 + ... + a1n vn <= b1
a21 v1 + a22 v2 + ... + a2n vn <= b2
...............................................
am1 v1 + am2 v2 + ... + amn vn <= bm


or in general summation form

sum from j = 1 to n of (aij vj) <= bi        i = 1,2,...,m

for vj >= 0, j = 1,2,...,n
where i refers to rows (or equation number) in the set of inequalities and j refers
to columns (or variable number).
As indicated earlier, these inequalities can be changed to equalities by adding a set of slack variables v(n+1), ..., v(n+m) (here v is used in place of S to simplify the generalized expressions), so that

a11 v1 + a12 v2 + ... + a1n vn + v(n+1) = b1
a21 v1 + a22 v2 + ... + a2n vn + v(n+2) = b2
.......................................................
am1 v1 + am2 v2 + ... + amn vn + v(n+m) = bm

or in general summation form


sum from j = 1 to n of (aij vj) + v(n+i) = bi        i = 1,2,...,m

for vj >= 0, j = 1,2,...,n+m
In addition to the constraining equations, there is an objective function for the linear program, which is expressed in the form

z = maximum (or minimum) of c1 v1 + c2 v2 + ... + cj vj + ... + cn vn

where the variables vj are subject to vj >= 0 (j = 1,2,...,n+m). Note that, in this case, all the variables above vn are slack variables and provide no direct contribution to the value of the objective function.

Simplex Algorithm:

The basis for the simplex method is the generation of extreme-point solutions by starting at any one extreme point for which a feasible solution is known and then proceeding to a neighboring extreme point. Special rules are followed that cause the generation of each new extreme point to be an improvement toward the desired objective function. When the extreme point is reached where no further improvement is possible, this represents the desired optimum feasible solution. Thus, the simplex algorithm is an iterative process that starts at one extreme-point feasible solution, tests this point for optimality, and proceeds toward an improved solution. If an optimal solution exists, this algorithm can be shown to lead ultimately and efficiently to the optimal solution.
The stepwise procedure for the simplex algorithm is as follows (based on the
optimum being a maximum):
1. State the linear programming problem in standard equality form.
2. Establish the initial feasible solution from which further iterations can proceed. A common method to establish this initial solution is to base it on the values of the slack variables, where all other variables are assumed to be zero. With this assumption, the initial matrix for the simplex algorithm can be set up with a column showing those variables that will be involved in the first solution. The coefficient for these variables appearing in the matrix table should be 1, with the rest of the column being 0.
3. Test the initial feasible solution for optimality. The optimality test is accomplished by the addition of rows to the matrix which give a value of Zj for each column, where Zj is defined as the sum of the objective function coefficients for each solution variable.

4. Iteration toward the optimal solution is accomplished as follows. Assuming that the optimality test indicates that the optimal program has not been found, the following iteration procedure can be used:
(a) Find the column in the matrix with the maximum value of cj - Zj and designate this column as k. The incoming variable for the new test will be the variable at the head of this column.
(b) For the matrix applying to the current feasible solution, add a column showing the ratio bi/aik. Find the minimum positive value of this ratio, and designate the variable in the corresponding row as the outgoing variable.
(c) Set up a new matrix with the incoming variable, as determined under (a), substituted for the outgoing variable, as determined under (b). The modification of the table is accomplished by matrix operations so that the entering variable will have a 1 in the row of the departing variable and 0s in the rest of that column. The matrix operations involve row manipulations of multiplying rows by constants and subtracting from or adding to other rows until the necessary 1 and 0 values are reached.
(d) Apply the optimality test to the new matrix.
(e) Continue the iterations until the optimality test indicates that the optimum objective function has been attained.
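The stepwise procedure above can be sketched as a small tableau implementation. This Python version is a teaching sketch, not production code: it assumes the standard form maximize c·x subject to A x <= b with b >= 0 (so the slack variables provide the initial feasible solution), and it omits the degeneracy and artificial-variable handling covered under the special cases.

```python
def simplex_maximize(c, A, b):
    """Tableau simplex for: maximize c.x  s.t.  A x <= b, x >= 0, b >= 0.
    Returns (optimal value, solution). Sketch for well-behaved problems only."""
    m, n = len(A), len(c)
    # Step 1-2: standard equality form [A | I | b]; slacks give the initial basis.
    tab = [list(row) + [1.0 if i == j else 0.0 for j in range(m)] + [b[i]]
           for i, row in enumerate(A)]
    tab.append([-ci for ci in c] + [0.0] * (m + 1))  # objective row holds Zj - cj
    basis = [n + i for i in range(m)]                # slack variables start basic
    while True:
        obj = tab[-1]
        # Step 4(a): entering column k = most negative Zj - cj (max cj - Zj).
        k = min(range(n + m), key=lambda j: obj[j])
        if obj[k] >= -1e-9:
            break                                    # Step 3: optimal, stop
        # Step 4(b): ratio test bi/aik over positive aik picks the leaving row.
        ratios = [(tab[i][-1] / tab[i][k], i) for i in range(m) if tab[i][k] > 1e-9]
        if not ratios:
            raise ValueError("unbounded problem")
        _, r = min(ratios)
        basis[r] = k
        # Step 4(c): pivot so column k has a 1 in row r and 0s elsewhere.
        piv = tab[r][k]
        tab[r] = [v / piv for v in tab[r]]
        for i in range(m + 1):
            if i != r and abs(tab[i][k]) > 1e-12:
                f = tab[i][k]
                tab[i] = [v - f * w for v, w in zip(tab[i], tab[r])]
    x = [0.0] * n
    for i, bv in enumerate(basis):
        if bv < n:
            x[bv] = tab[i][-1]
    return tab[-1][-1], x

# Example LP: maximize 3x + 2y  s.t.  x + y <= 4,  x + 3y <= 6,  x, y >= 0.
z, x = simplex_maximize([3, 2], [[1, 1], [1, 3]], [4, 6])
print(z, x)  # 12.0 [4.0, 0.0]
```

With the data above, the algorithm pivots once (x enters, the first slack leaves) and the optimality test then passes at the vertex (4, 0).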

Special cases:
 If the initial solution obtained by use of the method given in the preceding is not
feasible, a feasible solution can be obtained by adding more artificial variables
which must then be forced out of the final solution.

 Degeneracy may occur in the simplex method when the outgoing variable is selected. If there are two or more minimal values of the same size, the problem is degenerate, and a poor choice of the outgoing variable may result in cycling, although cycling almost never occurs in real problems. It can be eliminated by multiplying each element in the rows in question by the positive coefficients of the kth column and choosing the row for the outgoing variable as the one first containing the smallest algebraic ratio.


 The preceding method for obtaining a maximum as the objective function can
be applied to the case when the objective function is a minimum by recognizing
that maximizing the negative of a function is equivalent to minimizing the
function.

