Plant Design Term Report
OPTIMIZATION:
Reactors are often the critical stage in a polymerization process. Increasingly,
the design and operation of chemical processes are required to comply with
safety, cost, and environmental constraints. All of these demands necessitate
accurate modeling and optimization of reactors and processes. This chapter
therefore discusses the optimization techniques used for that purpose.
Introduction:
Historical development:
• Isaac Newton (1642-1727): the development of differential calculus
methods of optimization
• Joseph-Louis Lagrange (1736-1813): calculus of variations, minimization
of functionals, method of optimization for constrained problems
Optimization Technique
Decision Variables
Start with the decision variables. They usually measure the amounts of
resources, such as money, to be allocated to some purpose, or the level of some
activity, such as the number of products to be manufactured, the number of
pounds or gallons of a chemical to be blended, etc.
Objective Function
Once we've defined the decision variables, the next step is to define the
objective, which is normally some function that depends on the variables: for
example, profit, production rate, conversion, yield, or various costs. We
usually maximize or minimize the objective function; for instance, we maximize
profit, production rate, conversion, or yield, and we minimize costs.
Constraints
You would be finished at this point if the model did not require any constraints. For
example, in a curve-fitting application, the objective is to minimize the sum of
squared differences between each actual data value, or observation, and the
corresponding predicted value. This sum has a minimum value of zero, which
occurs only when the actual and predicted values are all identical. If you asked
a solver to minimize this objective function, you would not need any
constraints.
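As a sketch of this idea, the short Python example below fits a straight line to four made-up data points by minimizing the sum of squared differences through the least-squares normal equations. Because the points lie exactly on a line, the objective reaches its minimum value of zero, and no constraints are involved; the data and variable names are hypothetical.

```python
# Fit y = m*x + k to data by minimizing the sum of squared differences.
# The data here are hypothetical and lie exactly on y = 2x + 1, so the
# minimum value of the objective is zero.
xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]

n = len(xs)
sx = sum(xs); sy = sum(ys)
sxx = sum(x * x for x in xs); sxy = sum(x * y for x, y in zip(xs, ys))

# Normal equations for least squares: solve the 2x2 linear system
#   [sxx sx] [m]   [sxy]
#   [sx   n] [k] = [sy ]
det = sxx * n - sx * sx
m = (sxy * n - sx * sy) / det
k = (sxx * sy - sx * sxy) / det

sse = sum((m * x + k - y) ** 2 for x, y in zip(xs, ys))
print(m, k, sse)  # slope 2.0, intercept 1.0, objective 0.0
```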
In most models, however, constraints play a key role in determining what values
can be assumed by the decision variables, and what sort of objective value can
be attained.
Policy Constraints
Some constraints are determined by policies that you or your organization may
set. For example, in an investment portfolio optimization, you might have a
limit on the maximum percentage of funds to be invested in any one stock, or
one industry group.
Physical Constraints
Many constraints are determined by the physical nature of the problem. For
example, if your decision variables measure the number of products of different
types that you plan to manufacture, producing a negative number of products
would make no sense. This type of non-negativity constraint is very common.
Although it may be obvious to you, a constraint such as x ≥ 0 must be stated
explicitly, because the solver has no other way to know that negative values are
disallowed.
Integer Constraints
Some models require certain decision variables to take only integer
(whole-number) values at the solution, for example, the number of products to
manufacture. Such integer constraints are discussed further under Integer
Variables below.
Feasible Solution
A solution (set of values for the decision variables) for which all of the
constraints in the model are satisfied is called a feasible solution. Most solution
algorithms first try to find a feasible solution, and then try to improve it by
finding another feasible solution that increases the value of the objective
function (when maximizing, or decreases it when minimizing).
Optimal Solution
A globally optimal solution is one where there are no other feasible solutions
with better objective function values.
A locally optimal solution is one where there are no other feasible solutions "in
the vicinity" with better objective function values -- you can picture this as a
point at the top of a "peak" or at the bottom of a "valley" which may be formed
by the objective function and/or the constraints. The Solver is designed to find
optimal solutions -- ideally the global optimum -- but this is not always
possible.
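The peak-and-valley picture can be made concrete with a small, made-up example: a simple gradient-descent search on a one-variable non-convex function finds different answers depending on where it starts, reaching the global minimum from one side but only a local minimum from the other. The function and step sizes below are arbitrary choices for illustration.

```python
# A non-convex function with two "valleys": a global minimum near x = -1.45
# and a shallower local minimum near x = 1.33 (hypothetical example).
def f(x):
    return x ** 4 - 4 * x ** 2 + x

def grad(x):
    return 4 * x ** 3 - 8 * x + 1

def descend(x, lr=0.01, steps=5000):
    # Plain gradient descent: repeatedly step downhill from the start point.
    for _ in range(steps):
        x -= lr * grad(x)
    return x

left = descend(-2.0)   # lands in the deep valley (global optimum)
right = descend(2.0)   # lands in the shallow valley (local optimum only)
print(left, right, f(left) < f(right))  # same solver, different answers
```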
How easy or hard a model is to solve depends on the mathematical relationships
among the variables in the objective function and constraints (and on the
solution algorithm used).
Solver models can be easy or hard to solve. "Hard" models may require a great
deal of CPU time and random-access memory (RAM) to solve -- if they can be
solved at all. The good news is that, with today's very fast PCs and advanced
optimization software from Frontline Systems, a very broad range of models
can be solved.
Mathematical Model:
The form of the mathematical relationships in a model determines how hard it is
to solve: using an IF or CHOOSE function that depends on the variables can turn
a simple linear model into an extremely difficult, or even unsolvable,
non-smooth model.
A few advanced solvers can break down a problem into linear, smooth
nonlinear and non-smooth parts and apply the most appropriate method to each
part -- but in general, you should try to keep the mathematical relationships in a
model as simple (i.e. close to linear) as possible.
Below are some general statements about solution times on modern Windows
PCs (with, say, 2 GHz CPUs and 512 MB of RAM) for problems without integer
variables.
Linear Programming Problems -- where all of the relationships are linear, and
hence convex -- can be solved up to hundreds of thousands of variables and
constraints, given enough memory and time. Models with tens of thousands of
variables and constraints can be solved in minutes (sometimes in seconds) on
modern PCs. You can have very high confidence that the solutions obtained are
globally optimal.
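As a toy illustration of why linear programs are so well behaved, the sketch below solves a hypothetical two-variable LP by checking its corner points; because the feasible region of an LP is a convex polygon, the best vertex is guaranteed to be the global optimum, so enumeration is sufficient at this size.

```python
# Maximize z = 3x + 2y  subject to  x + y <= 4,  x <= 3,  x >= 0,  y >= 0.
# For a 2-variable LP the optimum sits at a vertex of the feasible polygon,
# so enumerating the vertices is enough (hypothetical toy numbers).
def feasible(x, y):
    return x + y <= 4 + 1e-9 and x <= 3 + 1e-9 and x >= -1e-9 and y >= -1e-9

# Vertices formed by intersecting the constraint boundaries pairwise.
candidates = [(0, 0), (3, 0), (0, 4), (3, 1)]
best = max((3 * x + 2 * y, x, y) for x, y in candidates if feasible(x, y))
print(best)  # (11, 3, 1): z = 11 at x = 3, y = 1
```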
For smooth nonlinear problems, if the problem is convex you can have very high
confidence that the solutions obtained are globally optimal; if the problem is
non-convex, you can have reasonable confidence that the solutions obtained are
locally optimal, but not globally optimal.
Non-smooth problems can generally be solved only at much smaller sizes, given
enough memory and time. You can only have confidence that the solutions
obtained are "good" (i.e. better than many alternative solutions); they are not
guaranteed to be globally or even locally optimal.
Model Size
The size of a solver model is measured by the number of decision variables and
the number of constraints it contains.
Most optimization software algorithms have a practical upper limit on the size
of models they can handle, due to either memory requirements or numerical
stability.
Sparsity
Most large solver models are sparse. This means that, while there may be
thousands of decision variables and thousands of constraints, the typical
constraint depends on only a small subset of the variables. For example, a
linear programming model with 10,000 variables and 10,000 constraints could
have a "coefficient matrix" with 10,000 x 10,000 = 100 million elements, but in
practice, the number of nonzero elements is likely to be closer to 2 million.
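A minimal sketch of sparse storage, assuming a hypothetical banded model in which each constraint touches only three variables: only the nonzero coefficients are kept, in a dictionary keyed by (row, column), instead of the full dense matrix.

```python
# Store only the nonzero coefficients of a large constraint matrix in a
# dictionary keyed by (row, column) — the "dictionary of keys" sparse
# layout. Sizes and coefficients here are hypothetical.
n_rows, n_cols = 10_000, 10_000

coeffs = {}
# Suppose constraint i involves only variables i-1, i, i+1 (a banded,
# very sparse structure typical of large models).
for i in range(n_rows):
    for j in (i - 1, i, i + 1):
        if 0 <= j < n_cols:
            coeffs[(i, j)] = 1.0

dense_elements = n_rows * n_cols
print(dense_elements, len(coeffs))  # 100000000 dense vs 29998 stored
```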
Integer Variables
Integer variables (i.e. decision variables that are constrained to have integer
values at the solution) in a model make that model far more difficult to solve.
Memory and solution time may rise exponentially as you add more integer
variables. Even with highly sophisticated algorithms and modern
supercomputers, there are models of just a few hundred integer variables that
have never been solved to optimality.
This is because many combinations of specific integer values for the variables
must be tested, and each test requires the solution of a "normal" linear or
nonlinear optimization problem. The number of combinations can rise
exponentially with the size of the problem. "Branch and bound" and "branch
and cut" strategies help to cut down on this exponential growth, but even with
these strategies, solutions for even moderately large mixed-integer
programming (MIP) problems can require a great deal of time.
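The branch-and-bound idea can be sketched on a tiny 0/1 knapsack problem with made-up values and weights: each branch fixes one more integer variable to 0 or 1, and a relaxation bound (here, the fractional-knapsack value) prunes subtrees that cannot beat the best solution found so far.

```python
# A minimal branch-and-bound sketch for a 0/1 knapsack (hypothetical data).
# Items are assumed sorted by value/weight ratio so the bound is valid.
values = [60, 100, 120]
weights = [10, 20, 30]
capacity = 50

def bound(i, weight, value):
    # Relaxation bound: fill remaining capacity greedily, fractions allowed.
    remaining = capacity - weight
    b = value
    for j in range(i, len(values)):
        if weights[j] <= remaining:
            remaining -= weights[j]
            b += values[j]
        else:
            b += values[j] * remaining / weights[j]
            break
    return b

best = 0
def branch(i, weight, value):
    global best
    if weight > capacity:
        return  # infeasible branch
    if i == len(values):
        best = max(best, value)
        return
    if bound(i, weight, value) <= best:
        return  # prune: this subtree cannot improve on the incumbent
    branch(i + 1, weight + weights[i], value + values[i])  # fix x_i = 1
    branch(i + 1, weight, value)                           # fix x_i = 0

branch(0, 0, 0)
print(best)  # 220 (take the items worth 100 and 120)
```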
Solving and obtaining the optimum values is the last phase in design
optimization. A number of general methods for solving programmed optimization
problems relate the various relations and constraints that describe the process
to their effect on the objective function. These methods examine the effect of
the variables on the objective function using analytic, graphical, and
algorithmic techniques based on the principles of optimization and programming.
There are many cases in which the factor being minimized (or maximized) is an
analytic function of a single variable. The procedure then becomes very simple.
Consider the example in which it is necessary to obtain the insulation
thickness that gives the least total cost. The primary variable involved is the
thickness of the insulation, and relationships can be developed showing how
this variable affects all costs.
Cost data for the purchase and installation of the insulation are available,
and the length of service life can be estimated. Therefore, a relationship
giving the effect of insulation thickness on fixed charges can be developed.
Similarly, a relationship showing the cost of heat lost as a function of
insulation thickness can be obtained from data on the thermal properties of
steam, the properties of the insulation, and heat-transfer considerations. All
other costs, such as maintenance and plant expenses, can be assumed to be
independent of the insulation thickness.
Suppose, for a case such as this one, that the total cost is a function of two
independent variables x and y, or
CT = f(x,y)
By analyzing all the costs involved and reducing the resulting relationships to a simple
form, the following function might be found.
CT = ax + b/(xy) + cy + d
Where a, b, c, and d are positive constants.
Graphical Procedure: The relationship among CT, x, and y could be shown as a
curved surface in a three-dimensional plot, with the minimum value of CT
occurring at the optimum values of x and y. However, a three-dimensional plot
is not practical for most engineering determinations.
Analytical Procedure:
The optimum value of x (that is, xi) is found at the point where (∂CT/∂x),
evaluated at y = yi, is equal to zero. Similarly, the same result would be
obtained if y were used as the abscissa instead of x; if this were done, the
optimum value of y (that is, yi) would be found at the point where (∂CT/∂y),
evaluated at x = xi, is equal to zero. This immediately indicates an analytical
procedure for determining optimum values: set each partial derivative equal to
zero and solve the resulting equations simultaneously.
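For the cost function CT = ax + b/(xy) + cy + d above, carrying this procedure through gives closed-form optimum values: setting ∂CT/∂x = a − b/(x²y) = 0 and ∂CT/∂y = c − b/(xy²) = 0 and solving simultaneously yields x* = (bc/a²)^(1/3), y* = (ab/c²)^(1/3), and a minimum total cost of 3(abc)^(1/3) + d. A quick numerical check with hypothetical constants:

```python
# Analytical optimum of CT = a*x + b/(x*y) + c*y + d, obtained by setting
# dCT/dx = a - b/(x**2 * y) = 0 and dCT/dy = c - b/(x * y**2) = 0.
# The constants are hypothetical positive values.
a, b, c, d = 1.0, 8.0, 1.0, 5.0

x_opt = (b * c / a ** 2) ** (1 / 3)
y_opt = (a * b / c ** 2) ** (1 / 3)

def CT(x, y):
    return a * x + b / (x * y) + c * y + d

ct_min = CT(x_opt, y_opt)
print(x_opt, y_opt, ct_min)  # x* ≈ 2, y* ≈ 2, CT,min ≈ 11 = 3*(abc)**(1/3) + d

# Sanity check: nearby points cost more than the analytical optimum.
assert ct_min < CT(x_opt * 1.1, y_opt) and ct_min < CT(x_opt, y_opt * 0.9)
```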
Algorithmic Procedure: A computer-based optimization algorithm uses a single
method, or a combination of methods, as the basic principles for the algorithm
function. It also requires the provision of the objective function and
constraints, either as directly provided relations or from computer simulation
models. The algorithm then uses the basic programming approach to solve the
optimization problem set by the objective function and constraints.
To develop this form of approach for linear programming solutions, the set of
linear inequalities that forms the constraints is written in "equal to or less
than" form as

a11 v1 + a12 v2 + … + a1n vn ≤ b1
a21 v1 + a22 v2 + … + a2n vn ≤ b2
................................................
am1 v1 + am2 v2 + … + amn vn ≤ bm

or, in summation form,

Σ (j = 1 to n) aij vj ≤ bi    i = 1,2,…,m

for

vj ≥ 0    j = 1,2,…,n

where i refers to rows (or equation number) in the set of inequalities and j refers
to columns (or variable number).
As indicated earlier, these inequalities can be changed to equalities
by adding a set of slack variables vn+1, …, vn+m (here v is used in place of S to
simplify the generalized expressions), so that
Σ (j = 1 to n) aij vj + vn+i = bi    i = 1,2,…,m

for

vj ≥ 0    j = 1,2,…,n+m
In addition to the constraining equations, there is an objective function for the
linear program, which is expressed in the form

z = maximum (or minimum) of c1 v1 + c2 v2 + … + cj vj + … + cn vn

where the variables vj are subject to vj ≥ 0 (j = 1,2,…,n+m). Note that, in this
case, all the variables above vn are slack variables and provide no direct
contribution to the value of the objective function.
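The conversion to standard equality form can be sketched in a few lines: each "equal to or less than" row gains one identity column for its slack variable. The matrix below uses toy numbers.

```python
# Convert "A v <= b" constraints to the standard equality form required by
# the simplex method by appending one slack variable per row (toy numbers).
A = [[1, 1],
     [1, 0]]
b = [4, 3]

m = len(A)       # number of constraints
n = len(A[0])    # number of original variables

# Augment each row with an identity column for its slack variable, so that
# row i reads: a_i1*v1 + ... + a_in*vn + v_{n+i} = b_i.
A_std = [row + [1 if k == i else 0 for k in range(m)] for i, row in enumerate(A)]
print(A_std)  # [[1, 1, 1, 0], [1, 0, 0, 1]]
```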
Simplex Algorithm:
The basis for the simplex method is the generation of extreme-point solutions by
starting at any one extreme point for which a feasible solution is known and then
proceeding to a neighboring extreme point. Special rules are followed that cause
the generation of each new extreme point to be an improvement toward the desired
objective function. When the extreme point is reached at which no further
improvement is possible, this represents the desired optimum feasible solution.
Thus, the simplex algorithm is an iterative process that starts at one
extreme-point feasible solution, tests this point for optimality, and proceeds
toward an improved solution. If an optimal solution exists, this algorithm can be
shown to lead ultimately and efficiently to the optimal solution.
The stepwise procedure for the simplex algorithm is as follows (based on the
optimum being a maximum):
1. State the linear programming problem in standard equality form.
2. Establish the initial feasible solution from which further iterations can proceed. A
common method to establish this initial solution is to base it on the values of the
slack variables, where all other variables are assumed to be zero. With this
assumption, the initial matrix for the simplex algorithm can be set up with a column
showing those variables that will be involved in the first solution. The coefficient
for each of these variables appearing in the matrix table should be 1, with the rest
of the column being 0.
3. Test the initial feasible solution for optimality. The optimality test is accomplished
by the addition of rows to the matrix which give a value of Zj for each column,
where Zj is defined as the sum, over the solution variables, of each variable's
objective-function coefficient multiplied by its coefficient in column j.
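The steps above can be sketched as a compact tableau implementation. This is a minimal teaching version, assuming all bi ≥ 0 so the slack variables provide the initial feasible solution; real solvers add many refinements.

```python
# A compact tableau simplex for: maximize z = c·v  subject to  A v <= b,
# with b >= 0 so the slack variables give the starting feasible solution.
def simplex(A, b, c):
    m, n = len(A), len(A[0])
    # Steps 1-2: standard equality form; initial basis = the slack variables.
    T = [A[i] + [1 if k == i else 0 for k in range(m)] + [b[i]] for i in range(m)]
    z = [-cj for cj in c] + [0] * m + [0]   # bottom row (negated reduced costs)
    basis = list(range(n, n + m))
    while True:
        # Step 3: optimality test — stop when no negative entry remains.
        piv_col = min(range(n + m), key=lambda j: z[j])
        if z[piv_col] >= -1e-9:
            break
        # Ratio test selects the outgoing variable (smallest b_i / a_i,pivot).
        ratios = [(T[i][-1] / T[i][piv_col], i) for i in range(m) if T[i][piv_col] > 1e-9]
        _, piv_row = min(ratios)
        basis[piv_row] = piv_col
        # Pivot: normalize the pivot row, then eliminate the column elsewhere.
        p = T[piv_row][piv_col]
        T[piv_row] = [x / p for x in T[piv_row]]
        for i in range(m):
            if i != piv_row and abs(T[i][piv_col]) > 1e-12:
                f = T[i][piv_col]
                T[i] = [x - f * y for x, y in zip(T[i], T[piv_row])]
        f = z[piv_col]
        z = [x - f * y for x, y in zip(z, T[piv_row])]
    v = [0.0] * (n + m)
    for i, bi in enumerate(basis):
        v[bi] = T[i][-1]
    return z[-1], v[:n]

# maximize 3x + 2y  s.t.  x + y <= 4,  x <= 3
print(simplex([[1, 1], [1, 0]], [4, 3], [3, 2]))  # (11.0, [3.0, 1.0])
```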
Special cases:
If the initial solution obtained by use of the method given in the preceding is not
feasible, a feasible solution can be obtained by adding more artificial variables
which must then be forced out of the final solution.
Degeneracy may occur in the simplex method when the outgoing variable is
selected. If there are two or more minimal ratios of the same size, the problem is
degenerate, and a poor choice of the outgoing variable may result in cycling,
although cycling almost never occurs in real problems. Degeneracy can be resolved
by multiplying each element in the rows in question by the positive coefficients
of the kth column and choosing, as the row for the outgoing variable, the one
first containing the smallest algebraic ratio.
The preceding method for obtaining a maximum as the objective function can
be applied to the case when the objective function is a minimum by recognizing
that maximizing the negative of a function is equivalent to minimizing the
function.