Structural Optimization Using Evolutionary Algorithms
N.D. Lagaros et al., Computers and Structures 80 (2002) 571–589
Abstract

The objective of this paper is to investigate the efficiency of various evolutionary algorithms (EA), such as genetic algorithms and evolution strategies, when applied to large-scale structural sizing optimization problems. Both types of algorithms imitate biological evolution in nature and combine the concept of artificial survival of the fittest with evolutionary operators to form a robust search mechanism. In this paper modified versions of the basic EA are implemented to improve the performance of the optimization procedure. The modified versions of both genetic algorithms and evolution strategies, combined with a mathematical programming method to form hybrid methodologies, are also tested and compared and prove particularly promising. The numerical tests presented demonstrate the computational advantages of the discussed methods, which become more pronounced in large-scale optimization problems.
Keywords: Structural optimization; Genetic algorithms; Evolution strategies; Handling of constraints; Sequential quadratic programming
The sensitivity analysis phase is the most time-consuming part of gradient-based optimization methods and may consume a large part of the total computational effort [9]. On the other hand, EA that are based on probabilistic searching, such as GA and ES, do not need gradient information and therefore avoid performing the computationally expensive sensitivity analysis step.

Mathematical programming methods, such as the sequential quadratic programming (SQP) approach, make use of local curvature information derived from linearization of the original functions, using their derivatives with respect to the design variables. The linearization is performed at points obtained in the process of optimization in order to construct an approximate model of the initial problem. These methods present a satisfactory local rate of convergence, but they cannot guarantee that the global optimum will be found; they do guarantee it, however, if the problem is strictly convex. EA, on the other hand, are in general more robust and present a better global behaviour than mathematical programming methods, but they may suffer from a slow rate of convergence towards the global optimum and do not guarantee convergence to it.

Structural optimization problems are characterized by various objective and constraint functions, which are generally non-linear functions of the design variables. These functions are usually implicit, discontinuous and non-convex. The mathematical formulation of structural optimization problems with respect to the design variables, the objective and the constraint functions depends on the type of application. However, all optimization problems can be expressed in standard mathematical terms as a non-linear programming (NLP) problem, which in general form can be stated as follows:

$$\begin{aligned}
&\min\ F(s)\\
&\text{subject to}\quad h_j(s) \le 0,\quad j = 1,\ldots,m\\
&\text{with}\quad s_i^{l} \le s_i \le s_i^{u},\quad i = 1,\ldots,n
\end{aligned}\qquad(1)$$

where s is the vector of design variables, F(s) is the objective function to be minimized, h_j(s) are the behavioural constraints, and s_i^l and s_i^u are the lower and upper bounds of a typical design variable s_i. Equality constraints are rarely imposed in this type of problem, except in some cases of design variable linking; whenever they are used, they are treated for simplicity as a set of two inequality constraints.

In this work the efficiency of various EA is investigated in structural sizing optimization problems. Furthermore, in order to benefit from the advantages of both methodologies, combinations of EA with SQP are also examined in an attempt to increase further the robustness as well as the computational efficiency of the optimization procedure. The numerical tests presented demonstrate the computational advantages of the discussed methods, which become more pronounced in large-scale and computationally intensive optimization problems.

2. Genetic algorithms

GA are probably the best-known EA, having received substantial attention in recent years. The first attempt to use EA took place in the sixties by a team of biologists [10] and was focused on building a computer program that would simulate the process of evolution in nature. However, the GA model used in this study, and in many other structural design applications, refers to the model introduced and studied by Holland and co-workers [4]. In general the term genetic algorithm refers to any population-based model that uses selection, crossover and mutation operators to evolve. In the basic genetic algorithm each member of this population is a binary or a real-valued string, which is sometimes referred to as a genotype or, alternatively, as a chromosome.

2.1. The basic genetic algorithms

2.1.1. The three main steps of the basic GA

Step 0, initialization: The first step in the implementation of any genetic algorithm is to generate an initial population. In most cases the initial population is generated randomly. In this study, in order to perform a comparison between the various optimization techniques, the initial population is fixed and is chosen in the neighbourhood of the initial design used for the mathematical programming method. After creating an initial population, each member of the population is evaluated by computing the representative objective and constraint functions and comparing it with the other members of the population.

Step 1, selection: The selection operator is applied to the current population to create an intermediate one. In the first generation the initial population is considered as the intermediate one, while in the following generations this population is created by the application of the selection operator.

Step 2, generation (crossover–mutation): In order to create the next generation, crossover and mutation operators are applied to the intermediate population. Crossover is a reproduction operator which forms a new chromosome by combining parts of each of two parental chromosomes. Mutation is a reproduction operator that forms a new chromosome by making (usually small) alterations to the values of genes in a copy of a single parent chromosome. The process of going from the current population to the next population constitutes one generation in the evolution process of a genetic algorithm. If the termination criteria are satisfied the procedure stops; otherwise it returns to Step 1.
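To make the three steps concrete, a minimal driver loop is sketched below. It is illustrative only: the binary encoding, the tournament selection, the one-point crossover and the bitwise mutation used here are assumed operator choices, not the specific operators of this study (which, for instance, fixes the initial population near the SQP starting design rather than generating it randomly).

```python
import random

def genetic_algorithm(fitness, new_individual, crossover, mutate,
                      pop_size=50, n_generations=100):
    """Minimal GA driver following Steps 0-2 (minimization)."""
    # Step 0: initialization (random here; the paper fixes it near the SQP start).
    population = [new_individual() for _ in range(pop_size)]
    for _ in range(n_generations):
        scores = [fitness(ind) for ind in population]

        # Step 1: selection -- binary tournament (an assumed operator choice).
        def select():
            i, j = random.randrange(pop_size), random.randrange(pop_size)
            return population[i] if scores[i] <= scores[j] else population[j]

        # Step 2: generation -- crossover and mutation build the next population.
        population = [mutate(crossover(select(), select()))
                      for _ in range(pop_size)]
    best = min(population, key=fitness)
    return best, fitness(best)

# Usage on a toy problem with a binary-string genotype.
N_BITS = 16
decode = lambda bits: sum(b << k for k, b in enumerate(bits)) / (2**N_BITS - 1)
toy_fitness = lambda bits: (decode(bits) - 0.7) ** 2          # minimize
new_ind = lambda: [random.randint(0, 1) for _ in range(N_BITS)]

def one_point_crossover(a, b):
    cut = random.randrange(1, N_BITS)
    return a[:cut] + b[cut:]

def bit_mutation(bits, rate=1.0 / N_BITS):
    return [1 - b if random.random() < rate else b for b in bits]

if __name__ == "__main__":
    print(genetic_algorithm(toy_fitness, new_ind, one_point_crossover, bit_mutation))
```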
2.2. Micro genetic algorithms

The micro genetic algorithm (μGA) was introduced by Krishnakumar [11] and applied to simple mathematical test functions and to the wind-shear optimal guidance problem. The main objective of this scheme is to reduce the size of the population compared to the basic one, which corresponds, in the case of structural optimization problems discretized with finite elements, to fewer finite element analyses per generation. It is a known fact that GA generally exhibit poor performance with a very small population, due to the insufficient information processed and to premature convergence to non-optimal results. A remedy to this problem, suggested by Goldberg [12], is to restart the evolution process in case of nominal convergence with a new initial population which includes the best solution already achieved. Based on this suggestion Krishnakumar proposed the μGA.

3. GA methods for handling the constraints

Methods based on the use of penalty functions are employed in the majority of cases for treating constrained optimization problems with GA. In this study methods belonging to this category have been implemented and are briefly described in the following sections.

3.1. Method of static penalties

In the method of static penalties the objective function is modified as follows:

$$F'(s) = \begin{cases} F^{(n)}(s), & \text{if } s \in \mathcal{F} \\ F^{(n)}(s) + p\,\mathrm{viol}^{(n)}(s), & \text{otherwise} \end{cases}\qquad(2)$$

where p is the static penalty parameter, viol^(n)(s) is the sum of the violated constraints and F^(n)(s) is the objective function to be minimized, both normalized in [0,1], while F is the feasible region of the design space.
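As a small illustration of Eq. (2), a penalized fitness evaluation might look as follows. The normalization of the objective and of the constraint violations to [0,1] is problem dependent, so the reference values f_ref and viol_ref below are assumptions introduced only for this sketch.

```python
def static_penalty_fitness(design, objective, constraints, p, f_ref, viol_ref):
    """Penalized objective of Eq. (2) for a minimization problem.

    `constraints` returns the values h_j(s); a design is feasible when all
    h_j(s) <= 0.  `f_ref` and `viol_ref` are assumed reference values used to
    normalize the objective and the summed violations to [0, 1].
    """
    f_n = objective(design) / f_ref                      # normalized objective
    viol_n = sum(max(0.0, hj) for hj in constraints(design)) / viol_ref
    if viol_n == 0.0:                                    # s lies in the feasible region
        return f_n
    return f_n + p * viol_n                              # penalized value, Eq. (2)

# Usage with a toy two-variable problem (illustrative values only).
obj = lambda s: s[0] + s[1]
cons = lambda s: [1.0 - s[0] * s[1]]          # requires s0 * s1 >= 1
print(static_penalty_fitness([0.5, 1.0], obj, cons, p=10.0, f_ref=10.0, viol_ref=1.0))
```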
3.2. Method of dynamic penalties

In the method of dynamic penalties [13] the penalty term is a function of the generation number and becomes increasingly severe as the search proceeds. In the tests reported by Michalewicz [14] the best individual was found in early generations.

3.3. Augmented Lagrangian method

The Augmented Lagrangian method (AL-GA) was proposed by Adeli and Cheng [15,16]. In this method the constrained problem is transformed into an unconstrained one by introducing two sets of penalty coefficients, γ = (γ_1, γ_2, ..., γ_{M+N}) and μ = (μ_1, μ_2, ..., μ_{M+N}). The modified objective function for generation g is defined as

$$F'(s,\gamma,\mu) = \frac{1}{L_f}F(s) + \frac{1}{2}\left\{\sum_{j=1}^{N}\gamma_j\Big[\big(q_j - 1 + \mu_j^{(g)}\big)^{+}\Big]^2 + \sum_{j=1}^{M}\gamma_{j+N}\Big[\Big(\frac{d_j}{d_j^{a}} - 1 + \mu_{j+N}^{(g)}\Big)^{+}\Big]^2\right\}\qquad(6)$$

where L_f is a factor for normalizing the objective function; q_j is a non-dimensional ratio related to the stress constraints of the jth element group (see Eqs. (18) and (19)); d_j is the displacement in the direction of the jth examined degree of freedom, while d_j^a is the corresponding allowable displacement; and N, M are the numbers of stress and displacement constraint functions, respectively. Furthermore

$$\big(q_j - 1 + \mu_j^{(g)}\big)^{+} = \max\big(q_j - 1 + \mu_j^{(g)},\,0\big)\qquad(7)$$

$$\Big(\frac{d_j}{d_j^{a}} - 1 + \mu_{j+N}^{(g)}\Big)^{+} = \max\Big(\frac{d_j}{d_j^{a}} - 1 + \mu_{j+N}^{(g)},\,0\Big)\qquad(8)$$

The penalty coefficients are updated at each generation according to the expressions $\gamma_j^{(g+1)} = \beta\,\gamma_j^{(g)}$ and $\mu_j^{(g+1)} = \big(\mu_j^{(g)} + \max\big[\mathrm{con}_{j,\mathrm{ave}}^{(g)},\,-\mu_j^{(g)}\big]\big)/\beta$, where $\mathrm{con}_{j,\mathrm{ave}}^{(g)}$ is the average value of the jth constraint function in the gth generation. The initial values of the γ's and μ's are set equal to three and zero, respectively, and the coefficient β is taken equal to ten, as recommended by Belegundu and Arora [17].

3.4. Segregated genetic algorithm

The basic idea of the segregated GA (S-GA) [18] is to use, as in the method of static penalties, two static penalty parameters instead of one. The two values of the penalty parameters are associated with two populations that have a different level of satisfaction of the constraints. Each of the groups corresponds to the best performing individuals with respect to the associated penalty parameter. The S-GA can be described as follows:

Step 0, initialization: Random generation of 2N designs. The objective functions of the designs 1, 2, ..., N are evaluated using the p_h penalty parameter, while the remaining designs N+1, ..., 2N are evaluated using the p_ℓ penalty parameter.

Step 1, selection: An intermediate population of size N is created by selecting the best individuals from the two populations.

Step 2, generation: Generate N offspring using the basic mutation and crossover operators. The parents are evaluated using the p_h penalty parameter and the offspring using p_ℓ. The process is then repeated by returning to Step 1.

This version was used in [18] for the minimum-weight design of a composite laminated plate.

4. Evolution strategies

4.1. Basic evolution strategies

In the majority of cases ES have been applied to continuous optimization problems. In engineering practice, however, the design variables are not continuous, because the structural parts are usually available only in a limited set of standard dimensions; the design variables can therefore only take values from a predefined discrete set. For the solution of discrete optimization problems Thierauf and Cai [19] have proposed a modified ES algorithm. The basic differences between discrete and continuous ES concern the mutation and recombination operators. The multi-membered ES adopted in the current study uses three operators, recombination, mutation and selection, which enter the algorithm as follows:

Step 1 (recombination and mutation): The population of μ parents at the gth generation produces λ offspring. The genotype of any descendant differs only slightly from that of its parents. For every offspring vector a temporary parent vector s̃ = [s̃_1, s̃_2, ..., s̃_n]^T is first built by means of recombination. For discrete problems the following recombination cases can be used:

$$\tilde{s}_i = \begin{cases} s_{a,i} \text{ or } s_{b,i} \text{ randomly} & (\mathrm{A})\\ s_{m,i} \text{ or } s_{b,i} \text{ randomly} & (\mathrm{B})\\ s_{bj,i} & (\mathrm{C})\\ s_{a,i} \text{ or } s_{bj,i} \text{ randomly} & (\mathrm{D})\\ s_{m,i} \text{ or } s_{bj,i} \text{ randomly} & (\mathrm{E}) \end{cases}\qquad(9)$$

Here s̃_i is the ith component of the temporary parent vector s̃, and s_{a,i} and s_{b,i} are the ith components of the vectors s_a and s_b, two parent vectors randomly chosen from the population. The vector s_m is not randomly chosen but is the best of the μ parent vectors in the current generation. In case C of Eq. (9), s̃_i = s_{bj,i} means that the ith component of s̃ is chosen randomly from the ith components of all μ parent vectors.
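The recombination cases of Eq. (9) can be sketched as below. The representation of the parents as plain lists, the uniform random choices and the selection of the case by the caller are assumptions made for the illustration.

```python
import random

def discrete_recombination(parents, scores, case="A"):
    """Build the temporary parent vector s~ of Eq. (9) from a list of parents.

    `parents` is a list of mu design vectors (lists of equal length n) and
    `scores` their objective values (lower is better).  s_a, s_b are two
    randomly chosen parents; s_m is the best parent of the current generation.
    """
    n = len(parents[0])
    s_a, s_b = random.sample(parents, 2)
    s_m = parents[min(range(len(parents)), key=lambda k: scores[k])]

    def pick(i):
        if case == "A":   # component from s_a or s_b, chosen randomly
            return random.choice((s_a[i], s_b[i]))
        if case == "B":   # component from the best parent s_m or from s_b
            return random.choice((s_m[i], s_b[i]))
        if case == "C":   # component taken from a randomly chosen parent
            return random.choice(parents)[i]
        if case == "D":   # component from s_a or from a randomly chosen parent
            return random.choice((s_a[i], random.choice(parents)[i]))
        if case == "E":   # component from s_m or from a randomly chosen parent
            return random.choice((s_m[i], random.choice(parents)[i]))
        raise ValueError("case must be one of A-E")

    return [pick(i) for i in range(n)]

# Usage: five parents with three discrete design variables each.
parents = [[1, 3, 5], [2, 4, 6], [1, 4, 5], [3, 3, 6], [2, 5, 5]]
scores = [10.0, 12.0, 9.5, 14.0, 11.0]
print(discrete_recombination(parents, scores, case="C"))
```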
From the temporary parent s̃ an offspring can be created through the mutation operator. Consider the temporary parent s_p^(g) of generation g, which produces an offspring s_o^(g) through the mutation operator as follows:

$$s_o^{(g)} = s_p^{(g)} + z^{(g)}\qquad(10)$$

where z^(g) = [z_1^(g), z_2^(g), ..., z_n^(g)]^T is a random vector. Mutation is understood as a random, purposeless event which occurs very rarely. The fact that the difference between any two adjacent values of the discrete set can be relatively large conflicts with the requirement that the variance σ_i² should be small. For this reason it is suggested in [19] that not all the components of a parent vector, but only a few of them (say ℓ), should be randomly changed in every generation. This means that n − ℓ components of the randomly changed vector z^(g) will have zero value. In other words, the terms of the vector z^(g) are given by

$$z_i^{(g)} = \begin{cases} (j+1)\,\delta s_i & \text{for } \ell \text{ randomly chosen components}\\ 0 & \text{for the } n-\ell \text{ other components} \end{cases}\qquad(11)$$

where δs_i is the difference between two adjacent values in the discrete set and j is a random integer that follows the Poisson distribution

$$p(j) = \frac{\gamma^{j}}{j!}\,e^{-\gamma}\qquad(12)$$

in which γ is the mean value (and variance) of the random number j. The choice of ℓ depends on the size of the problem and it is usually taken as 1/5 of the total number of design variables. The ℓ components are selected using a uniform random distribution in every generation, according to Eq. (11).

Step 2 (selection): There are two different types of the multi-membered ES:

(μ + λ)-ES: The best μ individuals are selected from a temporary population of (μ + λ) individuals to form the parents of the next generation.

(μ, λ)-ES: The μ individuals produce λ offspring (μ ≤ λ) and the selection process defines a new population of μ individuals from the set of λ offspring only.

For discrete optimization the procedure terminates when one of the following termination criteria is satisfied: (i) when the best value of the objective function in the last 4nμ/λ generations remains unchanged; (ii) when the mean value of the objective values of all parent vectors in the last 2nμ/λ generations has not been improved by more than a given value ε_b (= 0.0001); (iii) when the relative difference between the best objective function value and the mean value of the objective function values of all parent vectors in the current generation is less than a given value ε_c (= 0.0001); (iv) when the ratio μ_b/μ has reached a given value ε_d (= 0.5–0.8), where μ_b is the number of parent vectors in the current generation with the best objective function value.

4.2. Contemporary ES––the (μ, λ, θ) evolution strategies

This is a more general ES version, which was proposed by Schwefel and Rudolph [20] for application to continuous problems but had not previously been applied either to continuous or to discrete optimization problems [21]. The two schemes of the multi-membered evolution strategy, namely the (μ + λ)-ES and the (μ, λ)-ES, differ in the way the parents of a new generation are selected. So far, only empirical results have shown that the 'plus' version performs better in structural optimization problems [7,19].

The (μ, λ)-ES version is in danger of diverging because the best position found so far is not preserved within the generation cycle (the so-called non-elitist strategy). The 'comma' version implies that each parent can have children only once (duration of life: one generation or one reproduction cycle), whereas in the 'plus' version individuals may live eternally if no child achieves a better, or at least equal, objective function value. The contemporary ES (C-ES) introduces a maximal life span of θ ≥ 1 reproduction cycles, which gives the 'comma' scheme for θ = 1 and the 'plus' scheme for θ = ∞. If μ ≥ 1 is the number of parents and λ > μ is the number of offspring, then ρ, with 1 ≤ ρ ≤ μ, is the number of ancestors of each descendant. This ES version differs in two points from the basic one: (i) a free number of parents, ranging from 1 to μ, is involved in reproduction; (ii) a finite number of reproduction cycles is allowed per individual, instead of one (1) or infinity (∞) for the 'comma' and 'plus' schemes, respectively. The selection, mutation and recombination operators used in the C-ES are the same as those described in the section on the basic evolution strategies.
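The (μ, λ, θ) selection just described can be sketched as an ordinary ES selection step with an age counter attached to every individual. Representing individuals as (design, objective, age) tuples is an assumption of this sketch; θ = 1 reproduces the 'comma' scheme and θ = ∞ the 'plus' scheme.

```python
def select_mu_lambda_theta(parents, offspring, mu, theta):
    """One (mu, lambda, theta)-ES selection step.

    Individuals are (design, objective, age) tuples; lower objective is better.
    theta = 1 gives the 'comma' scheme, theta = float('inf') the 'plus' scheme.
    """
    # Parents survive only while their age stays within the maximal life span theta.
    aged_parents = [(s, f, age + 1) for (s, f, age) in parents if age + 1 <= theta]
    # Offspring enter the pool with age 1.
    pool = aged_parents + [(s, f, 1) for (s, f, _) in offspring]
    # Keep the mu best individuals of the pool as the next parent population.
    return sorted(pool, key=lambda ind: ind[1])[:mu]

# Usage: theta = 1 discards all parents (comma); theta = inf keeps good parents (plus).
parents = [([1, 2], 5.0, 3), ([2, 2], 4.0, 1)]
offspring = [([1, 3], 4.5, 0), ([3, 1], 6.0, 0), ([2, 3], 3.5, 0)]
print(select_mu_lambda_theta(parents, offspring, mu=2, theta=1))
print(select_mu_lambda_theta(parents, offspring, mu=2, theta=float("inf")))
```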
4.3. Adaptive ES

The handling of the constraints by the basic ES is based on the death penalty approach [22], where every infeasible design point is discarded; the process is thus directed to search only in the feasible region of the design space. As a consequence, many designs that are examined by the optimizer during the search and lie close to the acceptable design space are rejected, leading to the loss of valuable information. The idea introduced in this work is to use soft constraints during the first stages of the search and, as the search approaches the region of the global optimum, to make the constraints progressively more severe until they reach their real values.

The implementation of the adaptive ES (A-ES) in a structural optimization problem is straightforward and follows the same steps described in the section on the basic ES. The ES optimization procedure starts with a population of parent vectors, and a level of violation of the constraints is determined. If any of these parents corresponds to an infeasible design lying outside the extended design space, this parent is modified until it becomes 'feasible'. Then the offspring are generated and are also checked for 'feasibility' according to the current level of violation. In every generation the objective function values of the parent and offspring vectors are compared; the worst vectors are rejected, and the remaining ones are considered to be the parent vectors of the new generation. This procedure is repeated until the chosen termination criterion is satisfied.

In this adaptive scheme a nominal convergence check is adopted for the determination of the level of violation of the constraints. Nominal convergence occurs when the mean value of the objective function of the designs of the current population is relatively close to the best design achieved up to the current generation, according to the expression

$$\frac{\bar{F}^{(g)} - F_{\mathrm{best}}^{(g)}}{\bar{F}^{(g)}} \le \varepsilon_{\mathrm{ad}}\qquad(13)$$

where F̄^(g) is the mean objective function value and F_best^(g) is the best objective function value of all parents in the gth generation, with ε_ad = 0.05.

The A-ES steps can be stated as follows:

1. Initialization step: Selection of the parent vectors of the design variables s_i (i = 1, 2, ..., μ) and of the percentage of violation of the constraints v_0 (usually taken between 20% and 50%).
2. Analysis step: Solve K(s_i)u_i = f (i = 1, 2, ..., μ), where K is the stiffness matrix of the structure and f is the loading vector.
3. Constraints check: All parent vectors become 'feasible' within the prescribed level of constraint violation v_0.
4. Offspring generation: Generate the offspring vectors of the design variables s_j (j = 1, 2, ..., λ).
5. Analysis step: Solve K(s_j)u_j = f (j = 1, 2, ..., λ).
6. Nominal convergence check: If nominal convergence has occurred, the level of violation v_g becomes more severe by reducing its value by the quantity b (usually 0.1 or 0.2).
7. Constraints check: If satisfied according to the current level of violation v_g, continue; else change s_j and return to Step 4.
8. Selection step: Selection of the next-generation parents according to the (μ + λ) or (μ, λ) selection scheme.
9. Convergence check: If satisfied, stop; else return to Step 3.

4.4. An academic example

As an example for explaining the process of ES we consider the three-bar truss shown in Fig. 1, where the minimum volume is required to support a force P. This structure has been used as a test bed in the structural optimization literature [23] and must satisfy various constraints, such as member crushing, member buckling and failure by excessive deflection of node 4. The final design of the structure must be symmetric; therefore the following design variables are defined: A_1, the cross-sectional area of members 1 and 3, and A_2, the cross-sectional area of member 2. The relative merit of any design for this problem is measured by its material volume, so the total material volume of the structure serves as the objective function:

$$\text{volume} = L\big(2\sqrt{2}\,A_1 + A_2\big)$$

where L is defined in Fig. 1.

To define the constraint functions, the stresses and deflections of the structure are calculated. Using analysis procedures for statically indeterminate structures, the horizontal and vertical displacements u and v of node 4 are given by

$$u = \frac{\sqrt{2}\,L\,P_u}{A_1 E},\qquad v = \frac{\sqrt{2}\,L\,P_v}{\big(A_1 + \sqrt{2}\,A_2\big)E}$$

where E is the modulus of elasticity of the material, while P_u and P_v are the horizontal and vertical components of the load P, respectively: P_u = P cos θ, P_v = P sin θ.

Fig. 1. Symmetric three-bar truss.
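The expressions above can be evaluated directly, as in the short sketch below. The numerical values used in the usage line (L, E, P and the load angle θ) are placeholders chosen only to make the snippet runnable; they are not data of the example.

```python
import math

def three_bar_truss(A1, A2, L, E, P, theta):
    """Volume and node-4 displacements of the symmetric three-bar truss.

    Implements volume = L(2*sqrt(2)*A1 + A2) and the u, v expressions above.
    Units must be consistent (e.g. cm, kN, kN/cm^2).
    """
    Pu, Pv = P * math.cos(theta), P * math.sin(theta)
    volume = L * (2.0 * math.sqrt(2.0) * A1 + A2)
    u = math.sqrt(2.0) * L * Pu / (A1 * E)
    v = math.sqrt(2.0) * L * Pv / ((A1 + math.sqrt(2.0) * A2) * E)
    return volume, u, v

# Placeholder data: A1 = A2 = 50.7 cm^2, L = 100 cm, E = 21000 kN/cm^2,
# P = 100 kN applied at 45 degrees.
print(three_bar_truss(50.7, 50.7, 100.0, 21000.0, 100.0, math.pi / 4))
```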
Member buckling of member i is prevented by requiring

$$-F_i \le \frac{\pi^2 E \beta A_i^2}{\ell_i^2}$$

where F_i is the member force and ℓ_i the member length. The negative sign for F_i is used to make the left-hand side of the constraint negative when the member is in tension, since there is no need to impose buckling constraints for members in tension. Substituting the member forces obtained from the analysis, the member buckling constraints are expressed directly in terms of the design variables A_1 and A_2 and the load components P_u and P_v.

The designs of the initial parent population used in the example are listed below.

Parent   (A_1, A_2) (cm^2)   Objective (volume)
1        (50.7, 50.7)        24.0
2        (71.0, 65.9)        33.0
3        (71.0, 71.0)        33.7
4        (40.6, 60.8)        21.7
5        (40.6, 35.5)        18.6
5.1. Sensitivity analysis

Three methods for computing the sensitivities have been considered:

(i) Global finite difference (GFD) method.
(ii) Semi-analytical method, in the exact semi-analytical (ESA) variant, which is computationally more efficient [27,28].
(iii) Analytical method: the finite element equations, the objective and the constraint functions are differentiated analytically.

The decision on which method to implement depends strongly on the type of problem, the structure of the computer program and the access to the source code. The implementation of the analytical and semi-analytical methods is more complex and requires access to the source code, whereas when a finite difference method is applied the formulation is much simpler and the sensitivity coefficients can be easily evaluated even with general-purpose commercial codes. In the present investigation both the global finite difference method and the semi-analytical method have been used.

5.2. Sequential quadratic programming

SQP methods are the standard general-purpose mathematical programming algorithms for solving NLP optimization problems [29]. Such methods make use of local curvature information derived from linearization of the original functions, using their derivatives with respect to the design variables at points obtained in the process of optimization. A quadratic programming (QP) model (or subproblem) is thus constructed from the initial NLP problem, and a local minimizer is found by solving a sequence of these QP subproblems using a quadratic approximation of the objective function. Each subproblem has the following form:

$$\begin{aligned}
&\text{minimize}\quad \tfrac{1}{2}\,p^{\mathrm T}Hp + g^{\mathrm T}p\\
&\text{subject to}\quad Ap + h(s) \le 0\\
&\phantom{\text{subject to}\quad} \bar{s}_{\ell} \le p \le \bar{s}_{u}
\end{aligned}\qquad(14)$$

where p is the search direction, subject to the upper and lower bounds, g is the gradient of the objective function, A is the Jacobian of the constraints, usually of the active ones only (i.e. those that are either violated or not far from being violated), $\bar{s}_{\ell} = s_{\ell} - s$, $\bar{s}_{u} = s_{u} - s$, and H is an approximation of the Hessian matrix of the Lagrangian function

$$L(s,\lambda) = F(s) + \lambda^{\mathrm T} h(s)\qquad(15)$$

In Eq. (15) λ are the Lagrange multipliers, under the non-negativity restriction (λ ≥ 0) for the inequality constraints. In order to construct the Jacobian and the Hessian matrices of the QP subproblem, the derivatives of the objective and constraint functions are required; these derivatives are computed during the sensitivity analysis phase.

There are two ways to solve this QP subproblem, either with a primal [30] or with a dual [31] formulation. In the present study a primal algorithm is employed, based on an SQP algorithm from the NAG library [32]. The primal algorithm is divided into three phases: (i) the solution of the QP subproblem to obtain the search direction, (ii) the line search along the search direction, and (iii) the update of the Hessian matrix H.

Once the direction vector is found, a line search involving only the non-linear constraints is performed in order to produce a 'sufficient decrease' of the merit function φ. This merit function is an augmented Lagrangian function of the form [31]

$$\varphi = F(s) - \sum_i \lambda_i\big(g_i(s) - c_i\big) + \frac{1}{2}\sum_i p_i\big(g_i(s) - c_i\big)^2\qquad(16)$$

where c_i are the non-negative slack variables of the inequality constraints derived from the solution of the QP subproblem. These slack variables allow the active inequality constraints to be treated as equalities and avoid possible discontinuities. Finally, p_i are the penalty parameters, which are initially set to zero and in subsequent iterations are increased whenever this is necessary in order to control the violation of the constraints and to ensure that the merit function follows a descent path [30].

To implement SQP in discrete optimization problems, a continuous SQP step is performed first at each step; the current continuous design is then projected to the nearest discrete values of the design space, with the current continuous design considered as a lower bound of the projected discrete one [33].

6. The hybrid approach

Hybrid methods which combine evolutionary computation techniques with deterministic procedures for numerical optimization problems have been investigated recently. Papadrakakis et al. [34] used evolution strategies with the SQP method, while Waagen et al. [35] combined EP with the direction set method of Hooke and Jeeves [36]. The hybrid implementation proposed in [34] was found very successful on shape optimization test examples, while the method proposed in [35] was applied to unconstrained mathematical test functions. Myung et al. [37] considered an approach similar to that of Waagen et al., but they experimented with constrained mathematical test functions. Myung et al. combined a floating-point EP technique with a method developed by Maa and Shanblatt [38], applied to the best solution found by the EP technique. The second method iterates until the system defined by the combination of the objective function, the constraint functions and the design variables reaches equilibrium.

A characteristic property of the SQP-based optimizers is that they capture very fast the right path to the nearest optimum.
$$\frac{f_{j+1} - f_j}{f_j} \le e\qquad(17)$$
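A hybrid run of the kind examined here can be organized as a two-stage driver: an EA stage followed by an SQP stage started from the best EA design. The sketch below is generic; it assumes that a relative-improvement test in the spirit of Eq. (17) decides when to hand over to SQP, and ea_step, sqp_solve and objective are hypothetical callables standing in for the actual EA, the NAG-based SQP solver and the structural analysis.

```python
def hybrid_ea_sqp(x0, ea_step, sqp_solve, objective, e=1e-3, max_ea_iters=200):
    """Two-stage hybrid optimizer: an EA stage followed by SQP refinement.

    ea_step(x) performs one EA generation and returns its best design,
    sqp_solve(x) runs an SQP optimizer started from x, objective(x) evaluates F.
    The EA stage stops when the relative improvement per generation drops to e
    or below, in the spirit of Eq. (17).
    """
    x = x0
    f = objective(x)
    for _ in range(max_ea_iters):
        x_new = ea_step(x)
        f_new = objective(x_new)
        rel_improvement = (f - f_new) / abs(f) if f != 0 else 0.0
        if f_new < f:
            x, f = x_new, f_new          # keep the best design found so far
        if rel_improvement <= e:         # EA progress has stalled: hand over
            break
    return sqp_solve(x)                  # second stage: SQP from the best EA design
```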
F_by and F_bz are the allowable bending stresses for the y- and z-axis, respectively, and σ_y is the yield stress of the steel. The allowable inter-storey drift is limited to 1.5% of the height of each storey. One load case is considered in all examples.

The termination criterion adopted for both GA and ES is the same, for comparison reasons: if no improvement of the best value of the objective function has occurred in the last six generations, the optimization procedure is terminated.

Example 1: The first example is a six-storey space frame, first studied by Orbinson et al. [40], with 63 elements and 180 degrees of freedom. The beam length is L1 = 7.32 m and the column height is L2 = 3.66 m. The loads consist of a 17 kPa gravity load on all floor levels and a lateral load of 100 kN applied at each node of the front elevation in the z direction. The element members are divided into the five groups shown in Fig. 3, and the total number of design variables is 10. The initial design used in this example was chosen away from the optimum, corresponding to a weight of 2846 kN, for every test. Table 2 shows the performance of the two types of the basic multi-membered ES, namely the (μ + λ)-ES and the (μ, λ)-ES, for μ = λ = 5 and μ = λ = 10. The best design achieved by these tests is used as the basis for comparison in the subsequent applications of the optimization methods examined.

Table 2
Test example 1––performance of the two selection schemes (μ + λ)-ES and (μ, λ)-ES
Optimizer      Weight (kN)   FE analyses   Time (s)
(5 + 5)-ES     675           416           177
(5, 5)-ES      677           430           183
(10 + 10)-ES   668           847           363
(10, 10)-ES    661           834           357

In Figs. 4–7 various techniques for handling the constraints of the genetic algorithms are presented. The performance of the methods for handling the constraints is measured with two parameters: the weight achieved and the average level of violation of the constraint functions. The average percentage violation of the constraint functions is abbreviated to Avg. Const. Violation (%). Fig. 4 depicts the performance of GA with the method of static penalties for handling the constraints. It can be seen that the performance of the method is very sensitive to the value of the penalty parameter, and there is no rule of thumb on how to choose this single penalty parameter: if it is chosen too small the search will converge to an infeasible solution, whereas if it is chosen too large a feasible solution may be located but it will be far from the global optimum. Fig. 5 presents the performance of GA with the method of dynamic penalties for handling the constraints. It appears that the constraint violation percentage in this case is equal to zero for all cases considered, since for high generation numbers the (c·g)^a component of the penalty term takes large values, which prevents even slightly violated designs from being selected in subsequent generations; thus the system has little chance to escape from local optima.

Fig. 5. Test example 1––performance of GA with dynamic penalties (a = 2, zero constr. violation).

Fig. 6 presents the performance of the segregated GA (S-GA) method for handling the constraints for different values of the penalty parameters p_h, p_ℓ.
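For reference, a dynamic penalty of the kind discussed above can be sketched as follows. Only the (c·g)^a structure of the penalty term mentioned in the text is assumed, in the spirit of the non-stationary penalties of Joines and Houck [13]; the constants C and alpha below (a = 2 in Fig. 5) and the lack of normalization are illustrative choices, not the exact formulation used in the tests.

```python
def dynamic_penalty_fitness(design, generation, objective, constraints,
                            C=0.5, alpha=2.0):
    """Dynamic (non-stationary) penalty: the weight (C*g)**alpha grows with the
    generation number g, so late-generation violations are penalized heavily."""
    viol = sum(max(0.0, hj) for hj in constraints(design))   # sum of violations
    return objective(design) + (C * generation) ** alpha * viol

# Usage: the same violated design is penalized far more at generation 50 than at 5.
obj = lambda s: s[0] + s[1]
cons = lambda s: [1.0 - s[0] * s[1]]
print(dynamic_penalty_fitness([0.5, 1.0], 5, obj, cons))
print(dynamic_penalty_fitness([0.5, 1.0], 50, obj, cons))
```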
Table 4
Test example 1––performance of the AL-GA with the termination criterion adopted in this work
L_f      Weight (kN)   FE analyses   Time (s)
Case a: 6μ/λ
100      1060          110           48
500      1060          110           48
1000     1060          110           48
Case b: 12μ/λ
100      1042          140           60
500      1042          140           60
1000     1042          140           60
Case c: 18μ/λ
100      1010          145           62
500      1010          145           62
1000     1010          145           62
Case d: 24μ/λ
100      1010          145           62
500      1010          145           62
1000     1010          145           62

Fig. 9. Test example 1––performance of the A-ES (β = 0.1).
The initial parameter a_i for the normalized constraint functions (Eqs. (18) and (19)) is taken between 1.2 and 1.6, which corresponds to a percentage violation of the constraints between 20% and 60%. In the case of Fig. 8 all the designs achieved are feasible, while in the case of Fig. 9 the designs achieved by optimizers 2 and 5 are not feasible. The results of Figs. 8 and 9 indicate that the two new versions of ES manage to converge to better designs than the basic ES for a number of different parameters, at a marginal increase of computational effort. The C-ES method appears to be slightly more robust than the adaptive one, which may converge to infeasible designs when the parameter a_f is not equal to one. The A-ES, however, manages to converge to the best optimum design of 620 kN in 470 FE analyses for ε_ad = 0.005 and a_i = 1.6.

The performance of the hybrid approaches is depicted in Table 5. The results indicate that, despite the fact that all GA methods except the AL-GA converge to infeasible designs (the letter 'v' denotes constraint violation), the SQP optimizer which follows manages to reach a feasible one at a competitive time. The best design found by the A-ES-SQP and AL-GA-SQP optimizers is 20% better than the design achieved by SQP, requiring 35% and 25% less computational effort, respectively. We also examined two EA hybrid methods, combining AL-GA with ES and vice versa; both perform well in terms of the design achieved and the required computing time. Furthermore, the modified version of the μGA, which excludes all infeasible designs, gave the same solution as the (μ, λ)-ES with 25% less computational effort, while the modified micro GA (mμGA) achieves better performance in terms of computing time compared to the (μ + λ)-ES method. Comparing the performance of the two sensitivity analysis methods, it can be seen that the ESA method is faster than the GFD method. For this test example the ESA and GFD sensitivity analysis methods are used to compute the sensitivities, and the magnitude of the perturbation is equal to 10^(-5).

Example 2: The second example is the 20-storey space frame studied by Papadrakakis and Papadopoulos [41], with 1020 members and 2400 degrees of freedom. The loads considered here are uniform vertical forces applied at the joints, equivalent to a uniform load of 4.8 kPa, and horizontal forces equivalent to uniform forces of 1.0 kPa on the largest surface. The element members are divided into the 11 groups shown in Fig. 10, and the total number of design variables is 22. The initial design used in this example was chosen away from the optimum, corresponding to a weight of 42,248 kN, for every test.

Table 6 shows the performance of the two types of the multi-membered ES, namely the (μ + λ)-ES and the (μ, λ)-ES, for μ = λ = 5 and μ = λ = 10. The (10 + 10) version of ES manages to converge to the best design, which is used as the reference design. In Figs. 11 and 12 various techniques for handling the constraints by the GA are presented. The average percentage violation of the constraint functions is abbreviated, as previously, to Avg. Const. Violation (%). It can be seen that all the feasible solutions achieved appear to be local optima, although for this example the GA with dynamic penalties showed a rather more robust behaviour.

Tables 7 and 8 contain the results of the Augmented Lagrangian GA method (AL-GA). The constraint violation percentage is equal to zero for all tests considered. Table 7 depicts the results of the method with the termination criterion of Ref. [15], while Table 8 corresponds to the termination criterion adopted in this work.
Table 5
Test example 1––combination GA-SQP and ES-SQP
Optimizer    Initial design for second optimizer (kN)   Final design (kN)   Sensitivity analysis   FE analyses (EA/SQP)   Time (s) (EA/SQP/Total)
SQP          –                                          795                 GFD                    –/272                  –/340/340
SQP          –                                          795                 ESA                    –/271                  –/204/204
S-GA-SQP     708v                                       672                 GFD                    110/86                 47/107/154
S-GA-SQP     708v                                       672                 ESA                    110/86                 47/66/113
S-GA-SQP     635v                                       675                 GFD                    120/155                51/191/242
S-GA-SQP     635v                                       675                 ESA                    120/156                51/113/164
AL-GA-SQP    1010                                       663                 GFD                    145/121                62/155/217
AL-GA-SQP    1010                                       663                 ESA                    145/121                62/95/157
C-ES-SQP     912                                        681                 GFD                    212/114                90/142/232
C-ES-SQP     912                                        681                 ESA                    212/114                90/88/178
A-ES-SQP     829                                        663                 GFD                    136/96                 58/120/178
A-ES-SQP     829                                        663                 ESA                    136/96                 58/74/132
AL-GA-ES     1010                                       663                 –                      145 + 207/–            151/–/151
ES-AL-GA     829                                        675                 –                      136 + 257/–            177/–/177
mμGA         –                                          677                 –                      319/–                  154/–/154
Table 7
Test example 2––performance of the AL-GA with the termination criterion of Ref. [15]
L_f      Weight (kN)   FE analyses   Time (s)
100      7015          195           3097
500      7015          195           3097
1000     7015          195           3097

Table 8
Test example 2––performance of the AL-GA with the termination criterion adopted in this work
L_f      Weight (kN)   FE analyses   Time (s)
Case a: 6μ/λ
100      8073          120           1917
500      8073          120           1917
1000     8073          120           1917
Case b: 12μ/λ
100      7397          140           2231
500      7397          140           2231
1000     7397          140           2231
Case c: 18μ/λ
100      7015          195           3097
500      7015          195           3097
1000     7015          195           3097
Case d: 24μ/λ
100      7015          195           3097
500      7015          195           3097
1000     7015          195           3097

Fig. 13. Test example 2––performance of the C-ES.

As in the first example, the new versions outperform the basic version of evolution strategies in terms of the achieved optimum weight, at the expense of more computing time. The performance of the hybrid approaches is depicted in Tables 9–11. Three initial designs are considered.
Table 9
Test example 2––hybrid methods (bad initial design)
Optimizer    Initial design for second optimizer (kN)   Final design (kN)   Sensitivity analysis   FE analyses (EA/SQP)   Time (s) (EA/SQP/Total)
SQP          –                                          6427                GFD                    –/435                  –/21,932/21,932
SQP          –                                          6427                ESA                    –/438                  –/13,655/13,655
S-GA-SQP     7928v                                      5764                GFD                    65/116                 1100/5166/6266
S-GA-SQP     7928v                                      5764                ESA                    65/117                 1100/3608/4708
AL-GA-SQP    7015                                       6427                GFD                    195/99                 3301/4415/7716
AL-GA-SQP    7015                                       6427                ESA                    195/99                 3301/3077/6378
C-ES-SQP     7132                                       5834                GFD                    193/152                3266/6770/10,036
C-ES-SQP     7132                                       5834                ESA                    193/152                3266/4687/7953
A-ES-SQP     6531                                       5713                GFD                    107/111                1811/4943/6754
A-ES-SQP     6531                                       5713                ESA                    107/110                1811/3392/5202
AL-GA-ES     7015                                       5819                –                      195 + 65/–             4401/–/4401
mμGA         –                                          5772                –                      395/–                  6117/–/6117
Table 10
Test example 2––hybrid methods (medium initial design)
Optimizer    Initial design for second optimizer (kN)   Final design (kN)   Sensitivity analysis   FE analyses (EA/SQP)   Time (s) (EA/SQP/Total)
SQP          –                                          6119                GFD                    –/385                  –/17,519/17,519
SQP          –                                          6119                ESA                    –/387                  –/12,097/12,097
S-GA-SQP     7669                                       5695                GFD                    40/169                 681/7391/8071
S-GA-SQP     7669                                       5695                ESA                    40/169                 681/5291/5972
AL-GA-SQP    6816                                       5695                GFD                    155/98                 2639/4361/7000
AL-GA-SQP    6816                                       5695                ESA                    155/98                 2639/3057/5696
C-ES-SQP     6835                                       5764                GFD                    171/111                2901/4943/7844
C-ES-SQP     6835                                       5764                ESA                    171/111                2901/3469/6370
A-ES-SQP     6319                                       5764                GFD                    91/67                  1552/2983/4535
A-ES-SQP     6319                                       5764                ESA                    91/67                  1552/2089/3641
AL-GA-ES     6816                                       5430                –                      155 + 73/–             3866/–/3866
mμGA         –                                          5472                –                      339/–                  5243/–/5243
Table 11
Test example 2––hybrid methods (good initial design)
Optimizer    Initial design for second optimizer (kN)   Final design (kN)   Sensitivity analysis   FE analyses (EA/SQP)   Time (s) (EA/SQP/Total)
SQP          –                                          6119                GFD                    –/259                  –/11,508/11,508
SQP          –                                          6119                ESA                    –/259                  –/8049/8049
S-GA-SQP     7669                                       5695                GFD                    35/169                 590/7391/7981
S-GA-SQP     7669                                       5695                ESA                    35/169                 590/5291/5881
AL-GA-SQP    6816                                       5695                GFD                    115/98                 1934/4361/6295
AL-GA-SQP    6816                                       5695                ESA                    115/98                 1934/3057/4991
C-ES-SQP     6835                                       5764                GFD                    158/111                2685/4943/7628
C-ES-SQP     6835                                       5764                ESA                    158/111                2685/3469/6154
A-ES-SQP     6275                                       5764                GFD                    69/67                  1198/2983/4181
A-ES-SQP     6275                                       5764                ESA                    69/67                  1198/2089/3287
AL-GA-ES     6816                                       5430                –                      115 + 73/–             3161/–/3161
mμGA         –                                          5472                –                      339/–                  5243/–/5243
8. Conclusions

The efficiency of the hybrid schemes is attributed to the fast convergence of GA towards the neighbourhood of the optimum and to the property of SQP to compute quickly the nearest optimum once in the neighbourhood of the solution. However, the proposed adaptive ES coupled with SQP, and the combination with the Augmented Lagrangian GA as the first-stage optimizer followed by ES, proved to be the most efficient optimization algorithms.

References

[1] Fogel LJ, Owens AJ, Walsh MJ. Artificial intelligence through simulated evolution. New York: Wiley; 1966.
[2] Fogel DB. Evolving artificial intelligence. PhD thesis, University of California, San Diego, 1992.
[3] Goldberg DE. Genetic algorithms in search, optimization and machine learning. Reading, MA: Addison-Wesley; 1989.
[4] Holland J. Adaptation in natural and artificial systems. Ann Arbor, MI: University of Michigan Press; 1975.
[5] Rechenberg I. Evolution strategy: optimization of technical systems according to the principles of biological evolution. Stuttgart: Frommann-Holzboog; 1973 (in German).
[6] Schwefel HP. Numerical optimization for computer models. Chichester, UK: Wiley; 1981.
[7] Papadrakakis M, Lagaros ND, Thierauf G, Cai J. Advanced solution methods in structural optimization based on evolution strategies. J Eng Comput 1998;15(1):12–34.
[8] Adeli H, Cheng NT. Concurrent genetic algorithms for optimization of large structures. J Aerospace Eng ASCE 1994;7(3):276–96.
[9] Papadrakakis M, Tsompanakis Y, Hinton E, Sienz J. Advanced solution methods in topology optimization and shape sensitivity analysis. J Eng Comput 1996;13(5):57–90.
[10] Barricelli NA. Numerical testing of evolution theories. Acta Biotheoretica 1962;16:69–126.
[11] Krishnakumar K. Micro genetic algorithms for stationary and non-stationary function optimization. SPIE Proceedings, Intelligent Control and Adaptive Systems, vol. 1196, 1989.
[12] Goldberg DE. Sizing populations for serial and parallel genetic algorithms. TCGA Report No. 88004, University of Alabama, 1988.
[13] Joines J, Houck C. On the use of non-stationary penalty functions to solve non-linear constrained optimization problems with GA. In: Michalewicz Z, Schaffer JD, Schwefel H-P, Fogel DB, Kitano H, editors. Proceedings of the First IEEE International Conference on Evolutionary Computation. IEEE Press; 1994. p. 579–84.
[14] Michalewicz Z. Genetic algorithms, numerical optimization and constraints. In: Eshelman LJ, editor. Proceedings of the 6th International Conference on Genetic Algorithms. Los Altos, CA: Morgan Kaufmann; 1995. p. 151–8.
[15] Adeli H, Cheng NT. Integrated genetic algorithm for optimization of space structures. J Aerospace Eng ASCE 1993;6(4):315–28.
[16] Adeli H, Cheng NT. Augmented Lagrangian genetic algorithm for structural optimization. J Aerospace Eng ASCE 1994;7(1):104–18.
[17] Belegundu AD, Arora JS. A computational study of transformation methods for optimal design. AIAA J 1984;22(4):535–42.
[18] Le Riche RG, Knopf-Lenoir C, Haftka RT. A segregated genetic algorithm for constrained structural optimization. In: Eshelman LJ, editor. Proceedings of the 6th International Conference on Genetic Algorithms, 1995. p. 558–65.
[19] Cai J, Thierauf G. Discrete structural optimization using evolution strategies. In: Topping BHV, Khan AI, editors. Neural networks and combinatorial optimization in civil and structural engineering. Edinburgh: Civil-Comp Ltd; 1993. p. 95–100.
[20] Schwefel H-P, Rudolph G. Contemporary evolution strategies. In: Morán F, Moreno A, Merelo JJ, Chacón P, editors. Advances in artificial life, Proceedings of the Third European Conference on Artificial Life, Granada, Spain, 4–6 June. Berlin: Springer; 1995. p. 893–907.
[21] Rudolph G. Personal communication, 1999.
[22] Bäck T, Hoffmeister F, Schwefel H-P. A survey of evolution strategies. In: Belew RK, Booker LB, editors. Proceedings of the 4th International Conference on Genetic Algorithms. Los Altos, CA: Morgan Kaufmann; 1991. p. 2–9.
[23] Schmit LA. Structural design by systematic synthesis. In: Proceedings of the Second ASCE Conference on Electronic Computation, Pittsburgh, 1960. p. 105–22.
[24] Naylor TH, Balintfy JL. Computer simulation techniques. New York: John Wiley & Sons; 1966. p. 114.
[25] Bletzinger KU, Kimmich S, Ramm E. Efficient modelling in shape optimal design. Comput Syst Eng 1991;2(5/6):483–95.
[26] Hinton E, Sienz J. Aspects of adaptive finite element analysis and structural optimization. In: Topping BHV, Papadrakakis M, editors. Advances in structural optimization. Edinburgh: Civil-Comp Press; 1994. p. 1–26.
[27] Olhoff N, Rasmussen J, Lund E. Method of exact numerical differentiation for error estimation in finite element based semi-analytical shape sensitivity analyses. Special Report No. 10, Institute of Mechanical Engineering, Aalborg University, Aalborg, Denmark, 1992.
[28] Hinton E, Sienz J. Studies with a robust and reliable structural shape optimization tool. In: Topping BHV, editor. Developments in computational techniques for structural engineering. Edinburgh: Civil-Comp Press; 1995. p. 343–58.
[29] Gill PE, Murray W, Wright MH. Practical optimization. London: Academic Press; 1981.
[30] Gill PE, Murray W, Saunders MA, Wright MH. User's guide for NPSOL (Version 4.0): a Fortran package for non-linear programming. Technical Report SOL 86-2, Department of Operations Research, Stanford University, 1986.
[31] Fleury C. Dual methods for convex separable problems. In: Rozvany GIN, editor. NATO/DFG ASI Optimization of large structural systems, Berchtesgaden, Germany. Dordrecht, Netherlands; 1993. p. 509–30.
[32] NAG. Software manual. NAG Ltd, Oxford, UK, 1988.
[33] Thanedar PB, Vanderplaats GN. Survey of discrete variable optimization for structural design. J Struct Eng ASCE 1995;121(2):301–6.
[34] Papadrakakis M, Tsompanakis Y, Lagaros ND. Structural shape optimization using evolution strategies. Eng Optim 1999;31:515–40.
[35] Waagen D, Diercks P, McDonnell J. The stochastic direction set algorithm: a hybrid technique for finding function extrema. In: Fogel DB, Atmar W, editors. Proceedings of the 1st Annual Conference on Evolutionary Programming. Evolutionary Programming Society; 1992. p. 35–42.
[36] Hooke R, Jeeves TA. Direct search solution of numerical and statistical problems. J ACM 1961;8.
[37] Myung H, Kim J-H, Fogel D. Preliminary investigation into a two-stage method of evolutionary optimization on constrained problems. In: McDonnell JR, Reynolds RG, Fogel DB, editors. Proceedings of the 4th Annual Conference on Evolutionary Programming. Cambridge, MA: MIT Press; 1995. p. 449–63.
[38] Maa C, Shanblatt M. A two-phase optimization neural network. IEEE Trans Neural Networks 1992;3(6):1003–9.
[39] Eurocode 3. Design of steel structures, Part 1.1: General rules for buildings. CEN, ENV 1993-1-1, 1992.
[40] Orbinson JG, McGuire W, Abel JF. Yield surface applications in non-linear steel frame analysis. Comput Meth Appl Mech Eng 1982;33:557–73.
[41] Papadrakakis M, Papadopoulos V. A computationally efficient method for the limit elasto-plastic analysis of space frames. Computat Mech 1995;16(2):132–41.