Reliability-Based Structural Design Optimization For Nonlinear Structures in OpenSees
Hong Liang
B.Eng., Tongji University, China, 1993
M.Eng., Tongji University, China, 1996
The Faculty of Graduate Studies
Civil Engineering
Abstract
The aspiration of this thesis is to provide a tool for engineers in making rational decisions
based on the balance between cost and safety. This objective is accomplished by merging
the optimization and reliability analyses with sophisticated finite element models that
predict structural response. In particular, two state-of-the-art reliability-based design
optimization approaches are implemented in OpenSees, a modern and comprehensive
finite element software that has recently been extended with reliability and response
sensitivity analysis capabilities. These new implementations enable reliability-based
design optimization for comprehensive real-world structures that exhibit nonlinear
behaviour.
This thesis considers the problem of minimizing the initial cost plus the expected cost
of failure subject to reliability and structural constraints. This involves reliability terms in
both objective and constraint functions. In the two implemented approaches, the
reliability analysis and the optimization evaluation are decoupled, although they are not
bi-level approaches, thus allowing flexibility in the choice of the optimization algorithm
and the reliability method. Both solution approaches employ the same reformulation of
the optimization problem into a deterministic optimization problem. The decoupled
sequential approach using the method of outer approximation (DSA-MOOA) applies a
semi-infinite optimization algorithm to solve this deterministic optimization problem. An
important feature of the DSA-MOOA approach is that a convergence proof exists in the
first-order approximation. The simplified decoupled sequential approach (DSA-S) utilizes an inequality constrained optimization algorithm to solve the deterministic optimization problem. The DSA-S approach is demonstrated to result in a consistent design; it lacks a convergence proof but requires less computational time than the DSA-MOOA approach.
The gradients of the finite element response with respect to model parameters are needed in reliability-based design optimization. These gradients are obtained using the direct differentiation method implemented in OpenSees.
Acknowledgements
I wish to express my deep appreciation to my supervisor, Dr. Terje Haukaas, for his willingness to guide me along the challenging path towards a master's degree. His approach to research and teaching will always be a source of inspiration. He led me into the world of reliability and sensitivity analysis, and his rigorous work ethic and optimistic attitude will keep inspiring my personal and professional development.

I am grateful to Dr. Johannes Ovrelid Royset for his patient and detailed explanations of optimization theory. His kind help made my research smooth and possible.

I would like to thank Dr. Sigi Stiemer for being willing to serve on my thesis committee. His encouragement and constructive criticism are much appreciated. His course greatly helped me understand the nonlinear and plastic theory of structures.

I am thankful to my parents and brother for their encouragement and support. In particular, I am indebted to Ling Zhu, who takes care of me as a wife, discusses my research as a classmate, and continuously gives me strength and confidence as a soul mate.
Chapter 1
Introduction
The finite element method is currently the leading-edge approach for numerical simulation of structural behaviour. It is of considerable interest to incorporate sophisticated finite element models into reliability-based design optimization (RBDO) analysis. Furthermore, this thesis addresses the need for implementation of state-of-the-art optimization techniques in a finite element code that is in widespread use. A flexible software architecture is required to accommodate the extensive interaction between the optimization, reliability, and finite element modules of the software. OpenSees, the Open System for Earthquake Engineering Simulation (McKenna et al., 2004), is ideal for this purpose. This is an object-oriented, open-source software framework that is freely available from http://opensees.berkeley.edu. It serves as the computational platform for the prediction of structural and geotechnical responses for the Pacific Earthquake Engineering Research Center (PEER). Recently, OpenSees was extended with reliability and response sensitivity analysis capabilities (Haukaas & Der Kiureghian, 2004). This allows reliability analyses to be conducted in conjunction with static and dynamic inelastic finite element analyses, with random material, geometry, and load parameters.
A novelty of this thesis is the use of the object-oriented programming approach to
develop a library of software components (tools) for optimization analysis. This approach
provides a software framework that is easily extended and maintained. Indeed, the decoupled optimization approaches considered in this thesis take advantage of the object-oriented approach, in which the solution algorithms for the reliability and optimization problems are readily substituted by solution algorithms developed in the future.
1.1 Reliability-Based Design Optimization Problems

Several categories of RBDO problems are found in the literature. The first category minimizes the total expected cost, namely the initial cost plus the expected cost of failure:

$$\mathbf{x}^* = \arg\min\{\, c_0(\mathbf{x}) + c_f(\mathbf{x})\,p_f(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0} \,\} \tag{1.1}$$

$$\mathbf{x}^* = \arg\min\{\, c_0(\mathbf{x}) + c_f(\mathbf{x})\,p_f(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; p_f(\mathbf{x}) \le \bar{p}_f \,\} \tag{1.2}$$

where c_0 is the initial cost, c_f is the present cost of future failure, p_f is the probability of failure, f(x) is the vector of structural constraints, and p̄_f is the prescribed upper bound on the failure probability. The reliability of the structure is defined as 1 − p_f. The present cost of failure is obtained by discounting the future cost, c_f = c_future · ν/(i + ν), where c_future is the future cost, i is the real interest rate (excluding inflation), and ν is the rate of occurrence of the failure event.

When several failure modes are considered, the problem takes the form

$$\mathbf{x}^* = \arg\min\{\, c_0(\mathbf{x}) + \textstyle\sum_k c_k(\mathbf{x})\,p_k(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; p_k(\mathbf{x}) \le \bar{p}_k \,\} \tag{1.3}$$

where c_k(x) and p_k(x) denote the cost of failure and the probability of failure of the kth failure mode, respectively. An example of a structural constraint is f = d − d̄ ≤ 0, where the response quantity d must not exceed the threshold d̄. The first category also contains two simpler problems:

$$\mathbf{x}^* = \arg\min\{\, c_0(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0} \,\} \tag{1.4}$$

$$\mathbf{x}^* = \arg\min\{\, c_0(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; p_f(\mathbf{x}) \le \bar{p}_f \,\} \tag{1.5}$$
These problems are frequently addressed in engineering practice because they avoid the need for assessing the failure cost. In fact, Eq. (1.4) denotes the well-known deterministic (non-RBDO) design optimization problem, in which uncertainty is not accounted for. Relative to Eq. (1.4), the problem in Eq. (1.5) introduces a safety constraint for which reliability analysis is required. This is, conceptually, the simplest RBDO problem.
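To make the structure of the objective in Eqs. (1.1) and (1.2) concrete, the following minimal sketch evaluates the total expected cost for a candidate design. The cost functions, discount parameters, and the stand-in failure-probability routine are hypothetical assumptions for illustration; in the thesis, p_f(x) comes from finite element reliability analysis:

```cpp
#include <cmath>
#include <cstdio>

// Hypothetical cost model for illustration only: one design variable
// (a member depth x, in metres). c0 grows with material volume; cf is
// the discounted cost of a future failure, cf = cFuture * nu / (i + nu).
const double cFuture = 1.0e6;   // assumed future failure cost
double c0(double x) { return 1.0e4 * x; }          // initial cost
double cf() {
  const double i = 0.03, nu = 0.01;                // interest rate, failure rate
  return cFuture * nu / (i + nu);
}

// Placeholder: a made-up decreasing failure probability, standing in
// for FORM or sampling analysis coupled with a finite element model.
double pf(double x) { return std::exp(-8.0 * x); }

// Total expected cost, the objective of Eqs. (1.1)-(1.2)
double totalCost(double x) { return c0(x) + cf() * pf(x); }

int main() {
  for (double x = 0.2; x <= 1.01; x += 0.2)
    std::printf("x = %.1f  c0 = %8.0f  pf = %.2e  total = %8.0f\n",
                x, c0(x), pf(x), totalCost(x));
}
```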
The second category of RBDO problems is identified as

$$\mathbf{x}^* = \arg\min\{\, p_f(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0} \,\} \tag{1.6}$$

$$\mathbf{x}^* = \arg\min\{\, p_f(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; c_0(\mathbf{x}) \le \bar{c} \,\} \tag{1.7}$$

$$\mathbf{x}^* = \arg\min\{\, p_f(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; c_0(\mathbf{x}) + c_f(\mathbf{x})\,p_f(\mathbf{x}) \le \bar{c} \,\} \tag{1.8}$$

where c̄ is the prescribed upper bound of the cost. An extended set of problems is formulated by replacing p_f(x) with max_k p_k(x), namely the maximum failure probability over all failure modes. The problems in Eqs. (1.6) to (1.8) then turn into min-max type problems.
The problem in Eq. (1.6) seeks to maximize the reliability given structural constraints. While this is sometimes referred to as the inverse reliability problem in the literature, we reserve this term for a problem introduced below. In Eqs. (1.7) and (1.8) the initial cost and the total expected cost are introduced as constraints. Hence, these two equations are counterparts to Eqs. (1.5) and (1.2), respectively. However, although Eqs. (1.7) and (1.8) represent the "flipped" versions of Eqs. (1.5) and (1.2), they are not equivalent problems. That is, the optimal designs achieved by addressing Eqs. (1.8) and (1.2) are generally different.
The third category of RBDO problems contains what is referred to as inverse reliability problems (Der Kiureghian et al., 1994; Li & Foschi, 1998). Here, the discrepancy between the failure probability and its prescribed target is minimized:

$$\mathbf{x}^* = \arg\min\{\, |p_f(\mathbf{x}) - \bar{p}_f| \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0} \,\} \tag{1.9}$$

$$\mathbf{x}^* = \arg\min\{\, |p_f(\mathbf{x}) - \bar{p}_f| \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; c_0(\mathbf{x}) \le \bar{c} \,\} \tag{1.10}$$

$$\mathbf{x}^* = \arg\min\{\, |p_f(\mathbf{x}) - \bar{p}_f| \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0},\; c_0(\mathbf{x}) + c_f(\mathbf{x})\,p_f(\mathbf{x}) \le \bar{c} \,\} \tag{1.11}$$

An extended set of problems is formulated by replacing |p_f(x) − p̄_f| with max_k |p_k(x) − p̄_k|, where p_k(x) is the failure probability of failure mode k. Ideally, the value of the objective function of Eqs. (1.9) to (1.11) at the design point is zero. This would imply that the target reliability is achieved and that the constraints are satisfied. However, it may not be possible to achieve the target reliability 1 − p̄_f while satisfying the constraints. In fact, Eqs. (1.9) to (1.11) are related to the problems in Eqs. (1.6) to (1.8), which seek maximization of reliability rather than convergence to a target reliability. For instance, Eq. (1.6) results in design variable values that minimize the failure probability, while Eq. (1.9) potentially results in design variable values that provide a less safe design, but one that complies with the prescribed reliability 1 − p̄_f.
In this thesis, the first category of RBDO problems is considered. This choice is founded on our belief that the principles of rational decision-making should form the basis for RBDO. Moreover, we address the problem for a single failure event in Eq. (1.2). Ideally, the problem in Eq. (1.1) should be addressed. However, the difficulties in obtaining the "true" cost of failure make this problem less practical. The problem in Eq. (1.3) is not considered in this thesis, because only one failure cost is assumed.
1.2 Solution Algorithms
In this thesis it is assumed that the problem in Eq. (1.2) is defined in terms of a finite element model. Specifically, the design variables x and the random variables v are specified in terms of the input parameters of the finite element model. In this situation, a number of challenges are present when attempting to solve Eq. (1.2):

Challenge 1. The failure probability p_f(x) must be computed by reliability methods that are coupled with the finite element analysis. This type of analysis is termed finite element reliability analysis and may be challenging in itself. A number of reliability methods exist, all approximate. The choice of method influences the choice of optimization algorithm and its behaviour. The failure probability is a nonlinear function of x, regardless of whether the limit-state function is linear. Moreover, the failure probability may not be continuously differentiable.
Challenge 2. The structural response may be nonlinear, which is the case under
consideration in this thesis. The nonlinearities of the structural response and the
failure probability cause the objective function and constraint functions to be
nonlinear as well.
Challenge 3. The objective function, constraint functions, and the limit-state
function are implicit functions expressed by structural responses from the finite
element analysis.
Challenge 4. The most effective algorithms to solve Eq. (1.2) are gradient-based.
That is, they require the gradient of the objective function and constraint functions
with respect to x to be computed accurately and efficiently. Unless a
reformulation technique is employed, the gradient of the failure probability and
possibly the finite element response must be computed. The gradient computation
may be both analytically problematic and computationally costly. Additionally,
inaccuracies in the gradients lead to convergence problems in the optimization
analysis.
Challenge 5. In this thesis, we typically consider problems including 10-100 design variables and 10-500 random variables. It is imperative that the solution algorithms remain efficient for problems of this size.

One remedy for implicit, costly response functions is the response surface method, in which explicit surrogate functions are fitted to the structural response, so that gradients of the implicit functions themselves are unnecessary. Hence, the response surface method may be termed quasi gradient-free.

Solution approaches are commonly classified by how the reliability analysis is linked with the optimization analysis. When a complete reliability analysis is nested within each iteration of the optimization algorithm, the approach is termed nested bi-level. The mono-level approach instead eliminates the inner reliability level and has the following characteristics:

1. Standard optimization algorithms without any links to an external code are used, since the inner reliability level is eliminated.

2. The explicit transformation between the original space and the standard normal space of random variables is required in the mono-level approach.

3. The mono-level approach is only applicable to the component reliability problem and to separable series systems, which do not have correlation between different failure modes.
A "direct" mono-level approach was developed by Chen et al. (1997) and generalized
by Wang and Kodiyalam (2002) and Agarwal et al. (2003). In this approach design
variables are defined as the mean values of some random variables. Furthermore, FORM
reliability analysis is employed. Under the assumption of uncorrelated random variables,
a direct relationship is established between the design variables and the approximation
point in FORM analysis (this will later be termed
for
The transformation between the original space and the standard normal space of
random variables does not have to be explicit.
The direct mono-level approach can only deal with mutually independent random
variables. A further study of correlated random variables is required.
The decoupled bi-level approach alternates between the solution of the reliability and optimization problems. This is different from the mono-level approach, where the FORM reliability analysis is implicit. The decoupled bi-level approach has the following advantages and disadvantages:

1. The failure probability is computed using any available reliability method, since the reliability analysis is decoupled from the optimization analysis.

2. Additional computational effort is required to determine the optimal design for highly nonlinear functions in RBDO problems.

3. The decoupled bi-level approach has the ability to couple with finite element analysis.

4. The gradient discontinuity problem may cause non-convergence or slow convergence.

5. The decoupled approach is more efficient than the nested bi-level approach because the number of reliability evaluations is significantly reduced. Therefore, this approach is applicable to high-dimensional problems.

6. It is easy to code and to combine the reliability analysis with any optimization software without having to reformulate the problem.

7. The true local optimal solution cannot be guaranteed, because the failure probability is always computed for a previous design.
The decoupled sequential approach (Royset et al., 2002 & 2004a) further develops the methodology to solve Eq. (1.2). In this approach, Eq. (1.2) is reformulated as a semi-infinite optimization problem (Polak, 1997). This reformulated problem has been proven to be identical to the original problem when FORM analysis is used to compute the failure probability. Moreover, a heuristic scheme is implemented to improve the reliability estimate. The term semi-infinite comes from the fixed number of design variables and the infinite number of reliability constraints in the reformulated problem. This approach has the following advantages and disadvantages:

1. The failure probability is computed using any available reliability method, since the reliability analysis is completely decoupled from the optimization analysis.

2. Additional computational costs are required to determine the optimal design for the highly nonlinear functions in RBDO problems.

3. The decoupled sequential approach has the ability to couple with the finite element analysis.

4. The gradient discontinuity problem may cause non-convergence or slow convergence.

5. This approach is more efficient than the nested bi-level approach. However, it requires an infinite number of reliability constraints to achieve the "true" optimal solution, which is not attainable in practice. Usually, the user stops the optimization at a predefined accuracy and obtains an approximate solution. Therefore, this approach is applicable to high-dimensional problems.

6. The method of outer approximations has proofs of convergence (Kirjner-Neto et al., 1998). This implies that there is a convergence proof for the decoupled sequential approach when the limit-state function is linear in the space of random variables.
In this thesis we implement the decoupled sequential approach in OpenSees and apply it to structures that exhibit nonlinear behaviour. This approach is termed the decoupled sequential approach using the method of outer approximations (DSA-MOOA). We also implement a simplified decoupled sequential approach (DSA-S) to solve Eq. (1.2). Failure probabilities are here computed using Monte Carlo or importance sampling. The first-order derivative of the failure probability with respect to the design variables is obtained analytically and computed using Monte Carlo or importance sampling. The original RBDO problem is reformulated as an inequality constrained optimization problem and solved using standard optimization algorithms. The number of samples increases as the design approaches the optimal design point. Royset and Polak (2004b) prove that the optimization algorithm converges to the optimal design as the number of samples approaches infinity. This approach has the following advantages and disadvantages:
1. Failure probabilities are computed using Monte Carlo sampling or importance sampling.

2. Additional computational costs are required to determine the optimal design for the highly nonlinear functions in RBDO problems.

3. This approach has the ability to couple with the finite element analysis.

4. The gradient discontinuity problem may cause non-convergence or slow convergence.

5. The computational cost is higher than in the DSA-MOOA approach, since the sample average approximations use Monte Carlo sampling or importance sampling to compute the values and gradients of failure probabilities, and the number of sampling points increases as the design nears the optimal stage. Nevertheless, this approach is still applicable to high-dimensional problems.

6. The sample average approximations have proofs of convergence even if the limit-state function is nonlinear in the space of random variables.
1.3 Thesis Organization
Following the introduction, the fundamentals of finite element reliability analysis and
optimization theory are introduced in Chapters 2 and 3, respectively. Chapter 2 reviews
the concept of finite element reliability and describes FORM, the second-order reliability
method, Monte Carlo sampling, and importance sampling. Chapter 3 presents the
inequality constrained optimization problem and the semi-infinite optimization problem, as well as their corresponding first-order necessary optimality conditions. The Polak-He
algorithm and the method of outer approximation algorithm are also described in this
chapter.
Chapter 4 describes the finite element software, OpenSees, which is extended with
reliability analysis capability. The element, section, and material objects used in this
thesis are emphasized in this chapter. The reliability analysis module adds the ReliabilityDomain object, which defines the random variables and their correlation and maps the values of the random variables into the finite element model. The reliability module also includes eight analysis types and several analysis tools.
Chapter 5 introduces the sensitivity analysis capability in OpenSees. Two main
response sensitivity methods, the finite difference method and the direct differentiation
method, are two analysis tools in OpenSees. The direct differentiation method is stressed
in the chapter by briefly describing its equation derivation and several implementation
issues. Chapter 5 ends with a section on the continuity of response gradients. The
potential negative effects of discontinuous structural responses are observed and
remedied using two methods: the smooth material model and the section discretization scheme.
Chapter 6 presents the problem reformulation and algorithms of the DSA-MOOA and
DSA-S approaches. The reformulated optimization problem is identical to the original
RBDO problem when the limit-state function is linear in the space of random variables.
The applications of the method of outer approximation and the Polak-He algorithms are
described in detail in this chapter. The optimization capability of OpenSees is extended
through the addition of several objects defining all functions involved in the optimization
problem. Moreover, two analysis types (DSA-MOOA analysis and DSA-S analysis) and
several analysis tools are added.
Chapter 7 presents a numerical example involving a nonlinear finite element analysis
of a three-bay, six-storey building to demonstrate the new implementations in OpenSees.
Three cases are studied: a linear pushover analysis using elasticBeam elements, a nonlinear pushover analysis using beamWithHinges elements, and a nonlinear pushover analysis using dispBeamColumn elements with fibre sections. The case studies focus on convergence performance and computational time. This chapter also presents practical modeling and implementation issues encountered in the analyses.
Chapter 2
Finite Element Reliability Analysis
Since the introduction of limit-state design in the early 1980s, reliability methods have
been masked by partial safety coefficients in prescriptive design code requirements.
Direct usage of reliability methods has only been observed in special projects, such as
offshore structures and nuclear power plants. This paradigm is changing with the
introduction of performance-based engineering. Over the past several decades, analytical
and numerical models have drastically improved the engineers' ability to predict
structural performance. However, such predictions can only be made in a probabilistic
sense. Unavoidable uncertainties are present in model parameters and in the numerical
model itself. Reliability analysis and probabilistic methods are therefore rapidly
becoming required tools in engineering practice. This chapter presents reliability analysis
in conjunction with nonlinear finite element models to make probabilistic predictions of
structural response. Such analysis is required to solve the reliability-based design
optimization (RBDO) problem in Eq. (1.2).
The name "finite element reliability analysis" indicates that the limit-state functions are defined in terms of response quantities from a finite element analysis. A typical limit-state function reads

$$g(\mathbf{d}(\mathbf{x},\mathbf{v})) = \bar{d} - d(\mathbf{x},\mathbf{v}) \tag{2.1}$$

where d is a structural response and d̄ is the corresponding threshold. We observe that when the response d exceeds the threshold d̄, the limit-state function becomes negative, as required by the syntax rules. It is emphasized that the response quantities d may represent displacements, stresses, strains, etc.
The reliability problem described by a single limit-state function is termed the component reliability problem. If failure is prescribed by the joint state of several limit-state functions, the reliability problem is referred to as a system reliability problem.
In the component reliability problem, the probability of failure p_f(x) is defined by the integration of the joint probability density function (PDF) f(v) of the random vector v over the failure domain in the space of random variables:

$$p_f(\mathbf{x}) = \int_{g(\mathbf{d}(\mathbf{x},\mathbf{v})) \le 0} f(\mathbf{v})\, d\mathbf{v} \tag{2.2}$$

The failure probability depends on the design variables x. In addition, we note that the failure probability does not change if the limit-state function is arbitrarily scaled by a finite positive number. This is important for later developments.
Analytical solutions to Eq. (2.2) are generally not available, and approximate methods are employed to evaluate the failure probability. In these reliability methods, it is common to transform the problem into the standard normal space. That is, the original random variables v are transformed into a vector u of uncorrelated standard normal random variables. The Nataf model (Liu & Der Kiureghian, 1986) is an example of such a transformation. Let T_x denote the transformation for a given design vector x, and replace the random vector v by T_x⁻¹(u). We then obtain the equivalent limit-state function g(d(x, T_x⁻¹(u))). Hence, the reliability problem is re-defined in the standard normal space as:

$$p_f(\mathbf{x}) = \int_{g(\mathbf{d}(\mathbf{x},\mathbf{u})) \le 0} \varphi(\mathbf{u})\, d\mathbf{u} \tag{2.3}$$

where φ(u) denotes the standard normal joint PDF.
For system reliability problems with limit-state functions g_k, k ∈ {1, 2, ..., K}, the failure domain depends on the type of system. For a series system, failure occurs when any component (limit-state function) fails. That is, the failure domain is defined by the union of the failure domains of the components:

$$\bigcup_{k=1}^{K}\; g_k(\mathbf{d}(\mathbf{x},\mathbf{u})) \le 0 \tag{2.4}$$

Conversely, for a parallel system, failure requires failure of all components, and the failure domain is defined by the intersection:

$$\bigcap_{k=1}^{K}\; g_k(\mathbf{d}(\mathbf{x},\mathbf{u})) \le 0 \tag{2.5}$$

For a general system, the definition of the failure domain involves both union and intersection operations.
2.1 The First- and Second-Order Reliability Methods

The first-order reliability method (FORM) is based on the solution of the optimization problem

$$\mathbf{u}^* = \arg\min\{\, \|\mathbf{u}\| \;|\; g(\mathbf{d}(\mathbf{x},\mathbf{u})) \le 0 \,\} \tag{2.6}$$

u* is the point in the failure domain with the highest probability density and is therefore termed the most probable point (MPP). Note that Eq. (2.6) employs the inequality constraint g(d(x,u)) ≤ 0 in place of the equality constraint g = 0. This is acceptable as long as the origin is in the safe domain. This is the case for most practical problems, where the failure probability is much less than 0.5.
Searching for the MPP in Eq. (2.6) is an optimization problem in itself. This optimization process requires the first-order derivative of the limit-state function to be continuous in the standard normal space u. One effective algorithm for searching for the MPP is the iHLRF algorithm (Hasofer & Lind, 1974; Rackwitz & Fiessler, 1978; Zhang & Der Kiureghian, 1997), which is gradient-based and employs a line search scheme. In addition, Eq. (2.6) with an inequality constraint formulation can be solved using the Polak-He algorithm (Polak, 1997) or using standard nonlinear optimization algorithms such as NLPQL (Schittkowski, 1985) or NPSOL (Gill et al., 1998). The latter algorithms, however, are not specialized for the present case of one constraint.
Figure 2.1 MPP search algorithm in finite element reliability analysis (the reliability analysis module initializes and transforms u → v; the finite element analysis module evaluates g(d(x,v)) and ∂d/∂v; the gradient ∂g/∂u = (∂g/∂d)(∂d/∂v)(∂v/∂u) is used to take a step u_{i+1} = u_i + step size × search direction; after the convergence check, post-processing yields p_f(x) ≈ Φ(−β))
Haukaas and Der Kiureghian (2004) employ the iHLRF algorithm and the Polak-He algorithm to find the MPP for finite element reliability problems. The outline of the search algorithm is shown in Figure 2.1. The search algorithm requires a transformation between the original v-space and the standard normal u-space. The value of the limit-state function g and the gradient ∂g/∂u are evaluated in this algorithm; they are used in finding a search direction and a step size. The figure also shows the interaction between the search algorithm and the finite element code. The finite element analysis module is repeatedly updated with new realizations of the random variables v. In return, the finite element analysis module produces the response d and the response gradients ∂d/∂v. The need for response derivatives is due to the need for the gradient of the limit-state function in the standard normal space. The chain rule of differentiation yields:

$$\frac{\partial g}{\partial \mathbf{u}} = \frac{\partial g}{\partial \mathbf{d}}\,\frac{\partial \mathbf{d}}{\partial \mathbf{v}}\,\frac{\partial \mathbf{v}}{\partial \mathbf{u}} \tag{2.8}$$
The FORM estimate of the failure probability is obtained by linearizing the limit-state function at the MPP:

$$p_f(\mathbf{x}) \approx \Phi(-\beta(\mathbf{x})) \tag{2.9}$$

where Φ is the standard normal cumulative distribution function and β = ‖u*‖ is the reliability index. The second-order reliability method (SORM) improves this estimate by accounting for the curvature of the limit-state surface at the MPP:

$$p_f(\mathbf{x}) \approx \Phi(-\beta(\mathbf{x})) \prod_{j=1}^{m-1} \big(1 + \beta(\mathbf{x})\,\kappa_j(\mathbf{x})\big)^{-1/2} \tag{2.10}$$

where κ_j(x), j = 1, ..., m − 1, are the principal curvatures of the limit-state surface at the MPP.
2.2 Sampling Methods

Monte Carlo sampling provides an alternative means of evaluating Eq. (2.3). Sample points u_i are generated according to the standard normal PDF; their components are statistically independent standard normal random variables. The indicator function I(u_i) corresponding to each simulated point is established using the following rule: I(u_i) = 1 whenever g(d(x, u_i)) ≤ 0, and I(u_i) = 0 otherwise. Monte Carlo sampling gives an approximation of the component failure probability p_f(x) using:

$$p_f(\mathbf{x}) \approx \frac{1}{N}\sum_{i=1}^{N} I(\mathbf{u}_i) \tag{2.11}$$

The quality of the solution of Monte Carlo sampling is measured using the coefficient of variation (c.o.v.) of the probability estimate: c.o.v. = √[(1 − p_f(x)) / (N·p_f(x))]. Monte Carlo sampling therefore becomes computationally costly for small failure probabilities, since a large N is required to achieve an acceptable c.o.v.

Importance sampling instead generates the sample points u_i with a probability density h(u). All simulated points are statistically independent, and h(u) is a joint PDF that is nonzero in the failure domain. Importance sampling gives an approximation of the component failure probability p_f(x) using:

$$p_f(\mathbf{x}) \approx \frac{1}{N}\sum_{i=1}^{N} I(\mathbf{u}_i)\,\frac{\varphi(\mathbf{u}_i)}{h(\mathbf{u}_i)} \tag{2.12}$$

where φ(·) is the standard normal PDF (Ditlevsen & Madsen, 1996). The coefficient of variation of the failure probability estimate in importance sampling is defined by:

$$\text{c.o.v.} = \frac{1}{p_f(\mathbf{x})}\sqrt{\frac{1}{N}\left(\frac{1}{N}\sum_{i=1}^{N}\Big(I(\mathbf{u}_i)\,\frac{\varphi(\mathbf{u}_i)}{h(\mathbf{u}_i)}\Big)^{2} - p_f(\mathbf{x})^2\right)} \tag{2.13}$$

Importance sampling is efficient and requires fewer simulations than Monte Carlo sampling, since the sampling distribution is centred on the MPP, where failure realizations are frequently encountered.
2.3 Gradient of the Failure Probability

Gradient-based RBDO algorithms may require the derivative of the failure probability with respect to the design variables. With the FORM approximation in Eq. (2.9), the chain rule yields

$$\frac{\partial p_f}{\partial \mathbf{x}} = -\varphi(\beta(\mathbf{x}))\,\frac{\partial \beta}{\partial \mathbf{x}} \tag{2.14}$$

where φ is the standard normal PDF and the gradient of the reliability index is

$$\frac{\partial \beta}{\partial \mathbf{x}} = \frac{1}{\|\nabla_{\mathbf{u}} g\|}\,\frac{\partial g}{\partial \mathbf{x}} \tag{2.15}$$

where ∇_u g is a by-product of the FORM analysis and ∂g/∂x = (∂g/∂d)(∂d/∂x) is readily obtained by utilizing response sensitivities from the finite element code (Hohenbichler & Rackwitz, 1986; Bjerager & Krenk, 1989). However, the derivative of the failure probability in Eq. (2.14) cannot be proven to be continuous. This is because in a reliability problem the MPP may jump to a different location on the limit-state surface g = 0 due to an infinitesimal perturbation of the design variables. This jump leads to a kink in the function p_f(x) in the x-space. In effect, the gradient of the failure probability in Eq. (2.14) may be discontinuous. In an alternative formulation, the failure probability is re-expressed such that the indicator functions are replaced by the joint standard normal cumulative distribution function. Then, the reformulated failure probability is differentiated. The sensitivity of the failure probability is expressed through the joint standard normal PDF and evaluated using Monte Carlo or importance sampling.

It is important to note that not all gradient-based algorithms that address the problem in Eq. (1.2) require the gradient ∂p_f/∂x. In fact, as will be shown in subsequent chapters, the algorithms implemented in this thesis circumvent the problem of actually computing this gradient by using an augmented design variable to take the place of the failure probability.
Chapter 3
Optimization Theory
This thesis implements two decoupled sequential approaches to solve the reliability-based
design optimization problem expressed in Eq. (1.2). These approaches profit from the
advantages of both mono-level and bi-level approaches that were discussed in Chapter 1.
The optimization problem in Eq. (1.2) is reformulated to enable the use of the method of
outer approximation (MOOA) for semi-infinite inequality optimization problems, or the
use of the Polak-He algorithm for ordinary inequality constraint optimization problems
(Polak, 1997). This chapter defines the fundamental concepts of the optimization theory,
which forms the basis for the subsequent problem reformulation and the corresponding
solution algorithms.
3.1 Inequality Constrained Optimization

Consider the inequality constrained optimization problem

$$\mathbf{x}^* = \arg\min\{\, F(\mathbf{x}) \;|\; f(\mathbf{x}) \le 0 \,\} \tag{3.1}$$

where F(x) is the objective function and f(x) is the maximum value of the constraint functions:

$$f(\mathbf{x}) = \max_j f_j(\mathbf{x}) \tag{3.2}$$

3.1.1 Optimality Conditions
A candidate solution to any optimization problem must satisfy the optimality conditions of the problem. These conditions are generally necessary but not sufficient to guarantee that the optimal point has been found. Moreover, optimality conditions can
only indicate whether a local minimum has been found. This is a fundamental problem
with any optimization algorithm; only through engineering insight and repeated searches
from different starting points, among other strategies, can we confidently state that the
global optimal point has been found. The problem is schematically illustrated in Figure
3.1, where it is shown that an objective function may have a local as well as a global
minimum. It is easy to imagine that the search algorithm may "get stuck" at the local
minimum, without realizing that another point is the actual solution to the optimization
problem. A repeated search with a new start point may reveal the global solution.
Figure 3.1 Local and global optimal points (an objective function of one design variable exhibiting both a local and a global minimum)
Optimality conditions are not only convergence criteria for the optimization algorithm.
They are often used to construct search algorithms to solve the optimization problem.
This further motivates the following exposure of optimality conditions for different
optimization problems.
For pedagogical purposes, first consider a deterministic optimization problem without constraints:

$$\mathbf{x}^* = \arg\min\{\, F(\mathbf{x}) \,\} \tag{3.3}$$

Next, consider the problem with a single inequality constraint:

$$\mathbf{x}^* = \arg\min\{\, F(\mathbf{x}) \;|\; f(\mathbf{x}) \le 0 \,\} \tag{3.4}$$

where f(x) is the constraint. Note that a problem with the equality constraint f(x) = 0 may be reformulated into an inequality constrained problem with two constraints, f(x) ≤ 0 and −f(x) ≤ 0. In other words, we introduce two inequality constraints for every equality constraint.

Two situations may occur at the solution of Eq. (3.4): (1) the constraint is inactive at the design point, in which case the optimality condition of the unconstrained problem applies,

$$\nabla F = \mathbf{0} \tag{3.5}$$

at the design point; (2) the constraint is active at the design point, in which case the gradient of the objective function is proportional to the gradient of the constraint at the design point. Figure 3.2, in which the minimization of an objective function with two design variables is considered, clarifies this concept. The contours of the objective function are shown as circles, while the constraint limit f = 0 is shown as a line. The solid arrows in the figure depict the gradients of the objective and constraint functions at certain points. At the design point x* the gradient vectors ∇F and ∇f are parallel, although they point in different directions. This collinearity at the design point between the gradient of the objective function and the gradient of the constraint is written as

$$\nabla F = -\frac{\mu_1}{\mu_0}\,\nabla f \tag{3.6}$$
The two optimality conditions in Eqs. (3.5) and (3.6) are combined into one equation by first defining an auxiliary function termed the "Lagrange function:"

$$L(\mathbf{x}) = \mu_0 F(\mathbf{x}) + \mu_1 f(\mathbf{x}) \tag{3.7}$$

where μ₀ and μ₁ are denoted Lagrange multipliers. The method of Lagrange multipliers requires the gradient of the Lagrange function to vanish at the design point:

$$\nabla L = \mu_0 \nabla F + \mu_1 \nabla f = \mathbf{0} \tag{3.8}$$

In this manner, both of the above cases (active and inactive constraint) are included. First, considering the case where the constraint is active at the design point, Eq. (3.8) can be turned into Eq. (3.6) by letting f = 0 and μ₁ > 0. Second, considering the case where the constraint is inactive, Eq. (3.8) can be turned into Eq. (3.5) by setting μ₁ = 0. In conclusion, the optimality conditions for the problem in Eq. (3.4) read

$$\nabla L = \mathbf{0} \quad \text{and} \quad \mu_1 f = 0 \tag{3.9}$$

where μ₁f = 0 implies that either μ₁ = 0 or f = 0, with f ≤ 0, μ₀ > 0, and μ₁ ≥ 0.

Turning to the case of multiple inequality constraints, the optimization problem reads

$$\mathbf{x}^* = \arg\min\{\, F(\mathbf{x}) \;|\; \mathbf{f}(\mathbf{x}) \le \mathbf{0} \,\} \tag{3.10}$$

where f(x) is the vector of constraints. In this case, one positive Lagrange multiplier is introduced for each constraint. Consequently, the Lagrange function reads

$$L(\mathbf{x}) = \mu_0 F(\mathbf{x}) + \mu_1 f_1(\mathbf{x}) + \cdots + \mu_n f_n(\mathbf{x}) \tag{3.11}$$

and the optimality conditions read

$$\nabla L = \mathbf{0} \quad \text{and} \quad \mu_j f_j = 0, \;\; j = 1, \dots, n \tag{3.12}$$

$$\boldsymbol{\mu} \ge \mathbf{0} \tag{3.13}$$

It is assumed that μ₀ is positive and that all involved functions F and f are continuously differentiable (Polak, 1997).
Polak-He Algorithm

Parameters. Select parameters 0 < α < 1, 0 < β < 1, 0 < δ, 0 < γ.

Input Data. Input the initial design x₀ and set i = 0.

Step 1. Solve the quadratic sub-optimization problem θ(x_i) in Eq. (3.14) for the Lagrange multipliers μ₀ and μ, subject to the linear constraint on the multipliers; denote the solutions μ₀* and μ*. The search direction vector h_i is then computed from the gradients of the objective and constraint functions, weighted by these multipliers, according to Eq. (3.15).

Step 2. Compute the step size λ_i using the Armijo rule (see below).

Step 3. Update the design x_{i+1} = x_i + λ_i h_i, set i = i + 1, and return to Step 1 until convergence.

The Armijo rule used in Step 2 is a line search scheme, which applies a merit function in the step-size selection. Ideally, the step size is determined at the minimum of the merit function, since this leads to an optimal rate of convergence. The merit function in this case is

$$M(\mathbf{x}_i, \mathbf{x}_i + \beta^s\mathbf{h}_i) = \max\big\{\, F(\mathbf{x}_i + \beta^s\mathbf{h}_i) - F(\mathbf{x}_i) - \gamma\,f(\mathbf{x}_i)_+ \,,\; f(\mathbf{x}_i + \beta^s\mathbf{h}_i) - f(\mathbf{x}_i)_+ \,\big\} \tag{3.16}$$

where f(x)₊ = max{0, f(x)} and s is an integer to be introduced shortly. The step-size selection must meet the requirement M(x_i, x_i + β^s h_i) ≤ α β^s θ(x_i), starting from the initial value s = 0. The parameter s increases or decreases by unit steps until an acceptable and maximum step size is found. Finally, the appropriate step size λ_i is set equal to the maximum value of β^s corresponding to the minimum merit function:

$$\lambda_i = \max\big\{\, \beta^s \;\big|\; M(\mathbf{x}_i, \mathbf{x}_i + \beta^s\mathbf{h}_i) \le \alpha\,\beta^s\,\theta(\mathbf{x}_i) \,\big\} \tag{3.17}$$

The iterations are stopped when the value of the merit function M(x_i, x_{i+1}) and/or the value of the sub-optimization problem θ(x_i) in Eq. (3.14) is sufficiently close to zero.
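The following sketch illustrates the flavour of the Armijo rule described above: the step β^s is grown or shrunk by unit changes of s until a sufficient-decrease test is met. The quadratic objective and the parameter values are illustrative assumptions, and the merit function here is simply the objective itself:

```cpp
#include <cmath>
#include <cstdio>

// Illustrative objective with minimum at x = 2 (stands in for the merit
// function of the Polak-He algorithm).
double F(double x) { return (x - 2.0) * (x - 2.0); }

// Armijo-type rule: find the largest step lambda = beta^s (s an integer)
// such that F(x + lambda*h) - F(x) <= alpha * lambda * slope holds,
// where slope = dF/dx * h < 0 for a descent direction h.
double armijoStep(double x, double h, double slope,
                  double alpha = 0.5, double beta = 0.6) {
  auto pass = [&](int s) {
    double lam = std::pow(beta, s);
    return F(x + lam * h) - F(x) <= alpha * lam * slope;
  };
  int s = 0;
  if (pass(0)) { while (pass(s - 1)) --s; }        // grow the step
  else         { while (!pass(s + 1)) ++s; ++s; }  // shrink the step
  return std::pow(beta, s);
}

int main() {
  double x = 0.0;
  for (int it = 0; it < 25; ++it) {
    double grad = 2.0 * (x - 2.0);
    if (std::fabs(grad) < 1e-10) break;
    double h = -grad;                              // steepest descent
    x += armijoStep(x, h, grad * h) * h;
  }
  std::printf("x = %.6f (exact 2.0)\n", x);
}
```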
3.2 Semi-Infinite Optimization

This section describes the first-order optimality conditions and the MOOA algorithm for solving the semi-infinite optimization problem of the form

$$\mathbf{x}^* = \arg\min\{\, \psi_0(\mathbf{x}) \;|\; \psi(\mathbf{x}) \le 0 \,\} \tag{3.18}$$

where ψ₀(x) is the objective function and ψ(x) is the maximum value of the n-dimensional constraints (Polak, 1997). The objective and constraint functions ψ are defined by

$$\psi(\mathbf{x}) = \max_j \psi_j(\mathbf{x}) \tag{3.19}$$

$$\psi_j(\mathbf{x}) = \max_{\mathbf{y} \in \mathbf{Y}_j} \phi_j(\mathbf{x},\mathbf{y}) \tag{3.20}$$

where the function φ_j(x, y) is determined by the design vector x and the extra argument y, which is a point in the domain Y_j, i.e., y ∈ Y_j. The design vector x is finite-dimensional, but the number of functions φ_j(·, y) is infinite because of the infinite number of points y in the domain Y_j. That is the reason for the term "semi-infinite." An example of a function φ_j(x, y) is the negative value of the limit-state function, −g(x, u), in which x is the design vector, u is the random vector (namely, the point y), and the domain Y_j is the standard normal space. As mentioned in Chapter 2, a positive outcome of the limit-state function (g > 0) is defined as safe; hence the constraints −g(x, u) ≤ 0 ensure a safe structure.
3.2.1 Optimality Conditions

The first-order optimality conditions for the semi-infinite problem in Eq. (3.18) are analogous to Eq. (3.9):

$$\sum_{j=0}^{n} \mu_j \nabla\psi_j(\mathbf{x}^*) = \mathbf{0} \quad \text{and} \quad \sum_{j=0}^{n} \mu_j\,\psi_j(\mathbf{x}^*) = 0 \tag{3.21}$$

where x* denotes the optimal design, and the Lagrange multipliers μ₀, μ₁, ..., μ_n are positive-valued. At the optimal design, at least one constraint must be active (ψ_j(x*) = 0), while the corresponding Lagrange multiplier must be larger than zero (μ_j > 0). This situation is illustrated in Figure 3.2, where the constraint f = 0 is active at the design point x*, and the objective function reaches its minimum at the same time.
3.2.2 The Method of Outer Approximations

The method of outer approximations (MOOA) replaces the infinite domains Y_j by finite sub-domains. Sub-domains Y_{j,N} ⊂ Y_j are sequentially expanded, and the constraint functions and the optimization problem are approximated accordingly:

$$\psi_{j,N}(\mathbf{x}) = \max_{\mathbf{y} \in \mathbf{Y}_{j,N}} \phi_j(\mathbf{x},\mathbf{y}) \tag{3.22}$$

$$\mathbf{x}_N^* = \arg\min\{\, \psi_0(\mathbf{x}) \;|\; \psi_N(\mathbf{x}) \le 0 \,\}, \qquad \psi_N(\mathbf{x}) = \max_{j \le N}\, \psi_{j,N}(\mathbf{x}) \tag{3.23}$$

Similar to the Polak-He algorithm, the approximation to the merit function in the Armijo rule is denoted in the following:

$$M_N(\mathbf{x}',\mathbf{x}'') = \max\big\{\, \psi_0(\mathbf{x}'') - \psi_0(\mathbf{x}') - \gamma\,\psi_N(\mathbf{x}')_+ \,,\; \psi_N(\mathbf{x}'') - \psi_N(\mathbf{x}')_+ \,\big\} \tag{3.24}$$

where ψ_N(x)₊ = max{0, ψ_N(x)}. The search direction h and the optimality function θ_N are obtained from the sub-optimization problem

$$\theta_N(\mathbf{x}) = \min_{\mathbf{h}} \hat{M}_N(\mathbf{x},\mathbf{x}+\mathbf{h}) \tag{3.25}$$

where M̂_N is the first-order approximation to the merit function,

$$\hat{M}_N(\mathbf{x},\mathbf{x}+\mathbf{h}) = \max\big\{\, \hat\psi_0(\mathbf{x},\mathbf{x}+\mathbf{h}) - \psi_0(\mathbf{x}) - \gamma\,\psi_N(\mathbf{x})_+ \,,\; \hat\psi_N(\mathbf{x},\mathbf{x}+\mathbf{h}) - \psi_N(\mathbf{x})_+ \,\big\} \tag{3.26}$$

in which the approximating functions are first-order expansions augmented by a quadratic term in h:

$$\hat\psi_0(\mathbf{x},\mathbf{x}+\mathbf{h}) = \psi_0(\mathbf{x}) + \langle \nabla\psi_0(\mathbf{x}), \mathbf{h} \rangle + \tfrac{1}{2}\|\mathbf{h}\|^2 \tag{3.27}$$

$$\hat\psi_{j,N}(\mathbf{x},\mathbf{x}+\mathbf{h}) = \max_{\mathbf{y} \in \mathbf{Y}_{j,N}} \big\{\, \phi_j(\mathbf{x},\mathbf{y}) + \langle \nabla_{\mathbf{x}}\phi_j(\mathbf{x},\mathbf{y}), \mathbf{h} \rangle \,\big\} + \tfrac{1}{2}\|\mathbf{h}\|^2 \tag{3.28}$$

The MOOA algorithm proceeds with tolerances τ_N that are gradually reduced as N increases:

Step 0. Set N = N₀ and select initial sub-domains Y_{j,N}, j ≤ N, in the domains Y_j.
Step 1 employs the Polak-He algorithm to solve the following inequality constrained optimization problem for each constraint:

$$\mathbf{y}_{j,N} = \arg\min_{\mathbf{y}}\,\{\, \phi_j(\mathbf{x}_N,\mathbf{y}) \,\} \tag{3.30}$$

The solution to Eq. (3.30) is a point y_{j,N} that identifies the critical realization of y for the current design.

In Step 2, all points y_{j,k} (j ≤ N, k = 1, ..., N) from Step 1 are collected. The values φ_j(x_k, y_{j,k}) determine which points are appended to the sub-domains Y_{j,N+1}, thereby expanding the constraint set of the approximate problem:

$$\psi_{N+1}(\mathbf{x}) \le 0 \tag{3.31}$$

Step 3 employs the Polak-He algorithm to compute a solution x_{N+1} of the expanded problem, which satisfies

$$\theta_{N+1}(\mathbf{x}_{N+1}) \ge -\tau_{N+1} \tag{3.32}$$

$$\psi_{N+1}(\mathbf{x}_{N+1}) \le \tau_{N+1} \tag{3.33}$$

The algorithm then returns to Step 1 with N replaced by N + 1, and the iterations continue until a prescribed accuracy (e.g., 10⁻⁶) is reached. In this thesis, the iterations are stopped when two adjacent solutions are very close (‖x_{N+1} − x_N‖ < ε). Each repetition of the steps results in a gradually more accurate solution. The optimal solution x* is reached when N equals infinity. At the design point x*, the first-order optimality conditions in Eq. (3.21) are satisfied; namely, at least one constraint is active at that point, and the value of the objective function reaches its minimum.
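The overall flow of the outer-approximation iteration can be sketched as follows. This is a schematic with invented one-dimensional stand-ins for the constraint generator and the inner solver, not the MOOA implementation in OpenSees:

```cpp
#include <algorithm>
#include <cmath>
#include <cstdio>
#include <functional>
#include <vector>

// Schematic outer-approximation loop: a semi-infinite constraint
// phi(x, y) <= 0 for all y in Y is enforced on a growing finite working
// set of points. Each cycle (i) finds the most violated point y for the
// current design, (ii) appends it to the working set, and (iii) re-solves
// the design problem against the working set only.
struct OuterApproximation {
  std::function<double(double, double)> phi;    // phi(x, y)
  std::function<double(double)> worstY;         // argmax over y of phi(x, y)
  std::function<double(const std::vector<double>&)> solve;

  double run(double x0, int maxCycles = 30, double tol = 1e-9) {
    std::vector<double> workingSet;
    double x = x0;
    for (int n = 0; n < maxCycles; ++n) {
      double y = worstY(x);                 // Step 1: worst-case point
      if (phi(x, y) <= tol) break;          // constraint satisfied on Y
      workingSet.push_back(y);              // Step 2: expand sub-domain
      double xNew = solve(workingSet);      // Step 3: re-solve design
      if (std::fabs(xNew - x) < tol) { x = xNew; break; }
      x = xNew;
    }
    return x;
  }
};

int main() {
  // Demo: minimize x subject to y - x <= 0 for all y in [0, 1] => x* = 1
  OuterApproximation oa;
  oa.phi    = [](double x, double y) { return y - x; };
  oa.worstY = [](double) { return 1.0; };   // max of y - x over y in [0,1]
  oa.solve  = [](const std::vector<double>& ys) {
    double x = 0.0;
    for (double y : ys) x = std::max(x, y);
    return x;
  };
  std::printf("x* = %.3f (exact 1.0)\n", oa.run(0.0));
}
```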
Chapter 4
The OpenSees Software Framework
The OpenSees software framework (McKenna et al., 2004) serves as the computational
platform for research within the Pacific Earthquake Engineering Research (PEER) Center
and is rapidly gaining users internationally. Its source code, documentation, and
executable files are freely available on the web site
http://opensees.berkeley.edu.
OpenSees was originally designed to compute the response of nonlinear structural and
geotechnical systems using finite element techniques. Haukaas and Der Kiureghian
(2004) extended OpenSees with reliability and response sensitivity capabilities for
nonlinear finite element analysis. This chapter introduces nonlinear finite element
analyses and reliability analyses in OpenSees. The response sensitivity analysis is
discussed separately in Chapter 5.
The object-oriented programming approach was employed in the development of
OpenSees. The introduction of object-oriented programming has brought with it a
revolution in software development (Deitel & Deitel, 1998). This revolution is based on
the notion of standardized, interchangeable software components. These components are
called objects or, abstractly, classes. Objects are instantiated at run-time based on
specifications made by the developer in the corresponding classes. Each class, and hence
object, may contain member functions and member data. Detailed specification of the
member functions and data members is found in class interfaces. Class interfaces contain
key information necessary to understand an object-oriented software framework. Class
interfaces also facilitate the transparent nature of object-oriented programming. Their
structure is common to all object-oriented software. Armed with knowledge of the universal syntax rules of a programming language such as C++, a user is able to
understand the software architecture of a specific object-oriented framework. Such
software design has extensibility and maintainability as its integral feature. The
programming language C++ is employed in this thesis for the purpose of object-oriented
programming.
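As a schematic illustration of this style of class interface (a simplified invented example, not the actual OpenSees class hierarchy), a base class can promise a feature through a virtual member function that interchangeable subclasses implement:

```cpp
#include <cstdio>
#include <memory>

// The base class "promises" a feature through a pure virtual member
// function; subclasses supply interchangeable implementations. This
// mirrors the base-class/subclass pattern used for the analysis tools
// in OpenSees, though the classes here are invented for illustration.
class GradientEvaluator {
public:
  virtual ~GradientEvaluator() = default;
  virtual double evaluateGradient(double theta) = 0;
};

class FiniteDifferenceEvaluator : public GradientEvaluator {
public:
  double evaluateGradient(double theta) override {
    const double d = 1e-6;                         // perturbation
    return (response(theta + d) - response(theta)) / d;
  }
private:
  double response(double theta) { return theta * theta; }  // toy response
};

int main() {
  std::unique_ptr<GradientEvaluator> eval =
      std::make_unique<FiniteDifferenceEvaluator>();
  std::printf("dF/dtheta at 3.0 = %.4f\n", eval->evaluateGradient(3.0));
}
```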
4.1 Finite Element Analysis

This section introduces the OpenSees software framework for nonlinear finite element analysis, as detailed in the OpenSees command language manual (Mazzoni et al., 2005). Element, section, and material objects used in this thesis are emphasized, as is the fundamental knowledge needed for the case studies in Chapter 7.
OpenSees consists of a set of modules, which create a finite element model, specify an analysis procedure, analyze the model, and output the results. A complete finite element analysis involves four main types of objects, as shown in Figure 4.1. The ModelBuilder object establishes the finite element model by defining the nodes, elements, loads, and constraints. The Analysis object moves the model from the state at time t to the state at time t + dt by performing linear or nonlinear analyses. The structural response, such as the displacement history at a node or the entire state of the model at each load step, is monitored and output by the Recorder object. The Domain object stores the model created by the ModelBuilder object and provides this information to the Analysis and Recorder objects.

Figure 4.1 Main objects of a finite element analysis in OpenSees (ModelBuilder, Domain, Analysis, and Recorder; the Analysis moves the model from time t to time t + dt, while the Recorder monitors user-defined parameters in the model during the analysis)

Figure 4.2 Element, section and material relationship (Mazzoni et al., 2005)
One type of element object is the elastic beam-column element, elasticBeamColumn. Three types of nonlinear beam-column elements are also used in this thesis: beamWithHinges, dispBeamColumn, and nonlinearBeamColumn. The beamWithHinges element consists of an elastic interior and two plastic hinges at the ends. The parameters used to construct this element are pre-defined sections at the two ends, ratios of the hinge lengths to the total element length, the cross-sectional area A, Young's modulus E, and the second moment of area I. The parameters A, E, and I are used for the elastic interior, which has linear-elastic properties. The two plastic hinges represent the inelastic regions, in which forces and deformations are sampled at the hinge midpoints using mid-point integration. A nonlinearBeamColumn element spreads the distributed plasticity along the element and follows the force formulation. A dispBeamColumn element is a displacement-based beam-column element, which has distributed plasticity with a linear curvature distribution. To describe these two elements, pre-defined sections and a number of integration points along the element are required. The integration along the element is based on the Gauss-Lobatto quadrature rule, with two integration points at the element's ends.
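For illustration, the following sketch evaluates a four-point Gauss-Lobatto rule on the interval [-1, 1], the type of quadrature used along these elements; note that the rule places integration points at both ends, which is why response is sampled at the element ends. The integrand is an arbitrary example:

```cpp
#include <cmath>
#include <cstdio>

int main() {
  // Four-point Gauss-Lobatto rule on [-1, 1]: includes both end points
  // and integrates polynomials up to degree 5 exactly.
  const double xi[4] = { -1.0, -std::sqrt(1.0 / 5.0),
                          std::sqrt(1.0 / 5.0), 1.0 };
  const double w[4]  = { 1.0 / 6.0, 5.0 / 6.0, 5.0 / 6.0, 1.0 / 6.0 };

  auto f = [](double x) { return x * x * x * x; };   // integral = 2/5

  double sum = 0.0;
  for (int i = 0; i < 4; ++i) sum += w[i] * f(xi[i]);
  std::printf("Gauss-Lobatto estimate = %.6f (exact %.6f)\n", sum, 2.0 / 5.0);
}
```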
A section object defines the stress resultant force-deformation response at a cross
section of a beam-column or of a plate element. There are three types of sections: elastic
section, defined by material and geometric constants; resultant section, which is a general
nonlinear description of force-deformation response (e.g. moment-curvature); and fibre
section, which is discretized into smaller regions for which the material stress-strain
response is integrated to produce resultant behaviour (e.g. reinforced concrete) (Mazzoni
et al., 2005). A fibre section has a general geometric configuration formed by sub-regions
of simpler, regular shapes (e.g. quadrilateral, circular, and triangular regions) called
patches. In addition, layers of reinforcement bars can be specified. The subcommands
patch and layer are used to define the discretization of the section into fibres. Individual
fibres, however, can also be defined using the fibre object. During generation, the fibre
objects are associated with material objects, which enforce Bernoulli beam assumptions
(Mazzoni et al., 2005). Two examples of fibre sections are shown in Figure 4.3 to
describe a circular section and a rectangular section.
There are several material objects in OpenSees. This thesis uses two uniaxialMaterial objects: the elastic-perfectly plastic (EPP) material and the Steel01 material. With the yield strength in the positive direction set to zero (FyP = 0), the elastic-perfectly plastic material can be used to simulate concrete material, as shown in Figure 4.4. The steel material, Steel01, is a bi-linear model with kinematic hardening and optional isotropic hardening (Mazzoni et al., 2005). This material needs the following parameters: yield strength Fy, initial elastic tangent E, and strain-hardening ratio b.

Figure 4.4 Elastic-perfectly plastic material (FyP = 0) and Steel01 material
4.2 Reliability Analysis

This section focuses on the implementation of the reliability analysis based on the work of Haukaas and Der Kiureghian (2004). A ReliabilityDomain object is introduced alongside the finite element Domain to store the reliability model. The software framework for reliability analysis in OpenSees is shown in Figure 4.5.
Figure 4.5 Software framework for reliability analysis in OpenSees (the triangle symbol denotes the relationship between base class and subclasses, while the diamond symbol denotes analysis tools). The ReliabilityDomain holds the model objects (randomVariable, correlation, randomVariablePositioner, parameterPositioner, performanceFunction, modulatingFunction, filter, spectrum); the analysis tools comprise probabilityTransformation, gFunEvaluator, gradGEvaluator, searchDirection, meritFunctionCheck, reliabilityConvergenceCheck, stepSizeRule, rootFinding, startPoint, findDesignPoint, randomNumberGenerator, and findCurvatures; the analysis types comprise runFORMAnalysis, runSORMAnalysis, runSamplingAnalysis, runOutCrossingAnalysis, runFOSMAnalysis, runSystemAnalysis, runGFunVizAnalysis, and runFragilityAnalysis.
Three categories of classes are present in Figure 4.5. First, the ReliabilityDomain contains the model data. A randomVariable object creates random variables in several ways, e.g., by giving the random variable type, the mean, the standard deviation, etc. A correlation object specifies the correlation coefficient between random variables. A randomVariablePositioner object maps the values of the random variables into the finite element model, while the modulatingFunction, filter, and spectrum objects characterize a random process.
The next category in Figure 4.5 is the "analysis tools," used for reliability analysis in
OpenSees. This framework of analysis tools makes use of the concept of object-oriented programming, which allows the "base classes" to promise features that are implemented by the "sub-classes." In this manner, several implementations are made available for each of
the analysis tools. For instance, a number of sub-classes are available to evaluate the
limit-state function, and OpenSees only executes the sub-class that is specified by the
user. This illustrates the extensibility feature of OpenSees: new algorithms to perform
various analysis tasks are implemented without modifying the software framework.
A probabilityTransformation object transfers random variables between the original
space and the standard normal space. The Nataf model is applied in the current
implementation. A gFunEvaluator object computes the value of limit-state functions for a
given realization of the random variables. A gradGEvaluator object evaluates the
gradients of limit-state functions with respect to the random variables. Currently, two
alternatives are available: the finite difference method and the direct differentiation
method. Both are described in Chapter 5. A searchDirection object computes the search
direction vector when finding the most probable point (MPP) in the algorithm. Current
implementations include the iHLRF algorithm, the Polak-He algorithm, the sequential
quadratic programming algorithm, and the gradient projection algorithm. A stepSizeRule
object obtains an appropriate step size along a search direction using the line search
scheme. A simple algorithm uses the fixed step size throughout the search, and the
alternative algorithm employs the Armijo line search algorithm. A rootFinding object is used by the gradient projection search algorithm to visualize the limit-state surface. A
meritFunctionCheck object checks the value of merit function and determines the
suitability of a step size. A reliabilityConvergenceCheck object checks the convergence
when searching for the MPP. One criterion determines the closeness of the MPP to the
limit-state surface; another criterion determines how closely the gradient vector points
towards the origin in the standard normal space. A startPoint object provides the starting
point when searching for the MPP; it can also serve as the centre of the sampling density
in an importance sampling analysis. Usually, the analysis starts from the mean of the
random variables, the origin of the standard normal space, or user-defined values. A
findDesignPoint object searches for the MPP using a step-by-step search scheme. The
search direction is determined by the searchDirection object, and the trial point is
determined by computing the step size along this search direction using a line search
scheme. A randomNumberGenerator object is used in the sampling analysis. The
standard library function in the programming language C++ is used in this object. A
findCurvature object is required in the second-order reliability analysis. It finds the
curvatures of the limit-state surface at the MPP.
The third category in Figure 4.5 shows eight analysis types in the reliability module of
OpenSees. The users are required to specify some necessary analysis tools before
executing these analysis commands. OpenSees prints corresponding information to a file
or a computer monitor, thereby allowing the users to monitor the reliability analysis
process. Following a successful analysis, OpenSees outputs the results into a user-specified file. In this thesis, the first-order reliability analysis (runFORMAnalysis) and
the importance sampling analysis (runSamplingAnalysis) are employed in case studies in
Chapter 7.
Chapter 5
Response Sensitivity Analysis

5.1 The Finite Difference Method
The finite difference method (FDM) consists of perturbing the values of the model parameters, re-evaluating the structural response, and finally obtaining a finite difference estimate of the gradient vector. A typical equation of the FDM is the central difference estimate:

$$\frac{\partial F(\theta)}{\partial \theta} \approx \frac{F(\theta + \Delta\theta) - F(\theta - \Delta\theta)}{2\,\Delta\theta} \tag{5.1}$$

A simplified FDM is the forward finite difference method, which is similar to Eq. (5.1) but requires only one additional execution of the response function for each parameter. It has the form:

$$\frac{\partial F(\theta)}{\partial \theta} \approx \frac{F(\theta + \Delta\theta) - F(\theta)}{\Delta\theta} \tag{5.2}$$

Eqs. (5.1) and (5.2) indicate that the FDM is not efficient, because of the additional executions of the response function for each derivative. In addition, the accuracy of the gradient depends on the selection of the perturbation factor for each parameter, which is a challenging task.
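A minimal sketch of the two estimates in Eqs. (5.1) and (5.2), with a toy response function standing in for a finite element analysis:

```cpp
#include <cmath>
#include <cstdio>

// Toy "response function" standing in for a finite element analysis
// that returns a response quantity for a given model parameter theta.
double response(double theta) { return std::sin(theta); }

int main() {
  const double theta = 1.0, d = 1e-5;

  // Eq. (5.1): central difference, two extra evaluations per parameter
  double central = (response(theta + d) - response(theta - d)) / (2.0 * d);

  // Eq. (5.2): forward difference, one extra evaluation per parameter
  double forward = (response(theta + d) - response(theta)) / d;

  std::printf("central %.8f  forward %.8f  exact %.8f\n",
              central, forward, std::cos(theta));
}
```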
5.2 The Direct Differentiation Method

In the direct differentiation method (DDM), the response sensitivities are obtained by differentiating the equilibrium equation of the finite element model. At a pseudo-time t_n, equilibrium between the internal and external force vectors reads

$$\mathbf{P}_n^{int}(\mathbf{u}_n) = \mathbf{P}_n^{ext} \tag{5.3}$$

where P_n^{int} is a function of the displacement vector u, and the subscript denotes the pseudo-time. Differentiating Eq. (5.3) with respect to a parameter θ, we obtain:

$$\mathbf{K}_n\,\frac{\partial \mathbf{u}_n}{\partial \theta} = \frac{\partial \mathbf{P}_n^{ext}}{\partial \theta} - \left.\frac{\partial \mathbf{P}_n^{int}}{\partial \theta}\right|_{\mathbf{u}\,\text{fixed}} \tag{5.4}$$

where K_n = ∂P_n^{int}/∂u is the algorithmically consistent stiffness matrix, and ∂u_n/∂θ is the unknown displacement sensitivity. Eq. (5.4) is a linear system of equations that is solved after every convergence of the equilibrium Eq. (5.3). The computation of ∂u_n/∂θ is thus performed at each converged load step.
The derivative of the internal force for the fixed current displacement appears in Eq. (5.4) and is assembled over all elements based on the strain-displacement matrix B, the stress vector σ, and the element stiffness matrix k:

$$\left.\frac{\partial \mathbf{P}^{int}}{\partial \theta}\right|_{\mathbf{u}\,\text{fixed}} = \bigcup_{el}\left( \int_{\Omega} \mathbf{B}^T \left.\frac{\partial \boldsymbol{\sigma}}{\partial \theta}\right|_{\boldsymbol{\varepsilon}\,\text{fixed}} d\Omega + \int_{\Omega} \frac{\partial \mathbf{B}^T}{\partial \theta}\,\boldsymbol{\sigma}\, d\Omega + \frac{\partial \mathbf{k}}{\partial \theta}\,\mathbf{u} \right) \tag{5.5}$$

where ∪_el denotes assembly over all elements and Ω denotes the domain of each element. The differentiation of the element integration is also required in the assembly. When the parameter θ represents a material property, the derivative of the internal force is assembled from the derivatives of the stress at each material point for the fixed current strain. Then, Eq. (5.5) can be simplified as follows:

$$\left.\frac{\partial \mathbf{P}^{int}}{\partial \theta}\right|_{\mathbf{u}\,\text{fixed}} = \bigcup_{el} \int_{\Omega} \mathbf{B}^T \left.\frac{\partial \boldsymbol{\sigma}}{\partial \theta}\right|_{\boldsymbol{\varepsilon}\,\text{fixed}} d\Omega \tag{5.6}$$
Two important issues are considered in the implementation of the DDM. First, the response sensitivity ∂u_n/∂θ must be solved at each load increment for inelastic materials, since the sensitivity computations require the history variables and their derivatives to be computed and stored at each increment. In each sensitivity computation, the material routine is called twice, because the derivative of the stress ∂σ/∂θ|_ε fixed in Eq. (5.6) is conditioned upon fixed strain at the current increment only. The first call obtains the derivative of the stress conditioned upon fixed strain, and the second call computes and stores the unconditional derivatives of the history variables once the displacement and strain sensitivities are obtained.

Second, ∂σ/∂θ|_ε fixed is generally not zero. The strain sensitivity ∂ε/∂θ in the finite element domain is not zero, and thus ∂σ/∂θ|_ε fixed is also not zero at all material points after the first load step, because ∂ε/∂θ enters the derivatives of the history variables. For further details, see Haukaas and Der Kiureghian (2004).
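The per-step structure of the DDM can be sketched as follows. This is a schematic with a hard-coded 2x2 system and invented numbers, intended only to show that Eq. (5.4) is a linear solve performed after each converged load step:

```cpp
#include <cstdio>

// Schematic DDM step for a 2-DOF model: after the response u_n has
// converged, Eq. (5.4) is a LINEAR solve with the consistent stiffness,
//   K * du/dtheta = dPext/dtheta - dPint/dtheta|u fixed,
// even when the response analysis itself is nonlinear.
struct Vec2 { double a, b; };

Vec2 solve2x2(const double K[2][2], const Vec2& r) {
  double det = K[0][0] * K[1][1] - K[0][1] * K[1][0];
  return { ( K[1][1] * r.a - K[0][1] * r.b) / det,
           (-K[1][0] * r.a + K[0][0] * r.b) / det };
}

int main() {
  // Illustrative consistent stiffness at the converged state
  double K[2][2] = { { 4.0, -1.0 }, { -1.0, 3.0 } };
  // Illustrative right-hand side of Eq. (5.4), assembled from the
  // conditional stress derivatives of Eq. (5.6)
  Vec2 rhs = { 0.2, -0.1 };
  Vec2 dudth = solve2x2(K, rhs);
  std::printf("du1/dtheta = %.5f, du2/dtheta = %.5f\n", dudth.a, dudth.b);
  // The history-variable derivatives would now be computed and stored
  // (the "second call" to the material routine described above).
}
```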
5.3 Implementation of Response Sensitivity Analysis in OpenSees

The FDM and the DDM are implemented in OpenSees based on the object-oriented software architecture. Figure 5.1 shows the framework for response sensitivity analysis in OpenSees. The finiteDifferenceGradGEvaluator utilizes the GFunEvaluator object to compute the value of the limit-state functions for perturbed parameters and then calculates the response gradient using the FDM. The OpenSeesGradGEvaluator computes gradients with the DDM by means of two objects: the sensitivity algorithm and the sensitivity integrator.

Figure 5.1 Software framework for response sensitivity analysis in OpenSees (gradGEvaluator with subclasses finiteDifferenceGradGEvaluator and OpenSeesGradGEvaluator; sensitivity integrator with static and dynamic versions; sensitivity algorithm with computeAtEachStep and computeByCommand options)

The computeAtEachStep option is used in all dynamic analyses and all inelastic analyses, while the computeByCommand option is reserved for the cases in which the sensitivities may be computed upon completion of the response analysis. The sensitivity integrator assembles the right-hand side of Eq. (5.4) for each parameter, for either static or dynamic analysis. The sensitivity algorithm obtains the parameters for which sensitivities are desired and performs the following operations: First, a parameter θ in the finite element domain is "activated" to obtain correct contributions from the element, section, material, node, and load objects. Second, the sensitivity integrator assembles the right-hand side of Eq. (5.4) by collecting contributions from the objects of the finite element domain. Next, Eq. (5.4) is solved to obtain the displacement sensitivity ∂u/∂θ, and the results are stored in the node objects. Finally, all material objects are called by the sensitivity integrator, through the element and section objects, to compute and store the derivatives of the history variables. Strain sensitivities are computed from the displacement sensitivities by using ordinary kinematic equations.
As part of the finite element reliability analysis, the reliability module maps random variables and design variables to the finite element module and receives structural responses and response sensitivities from the finite element module. The mapping procedure updates the values of the model parameters in the finite element model each time new values of the random variables and design variables are available. It also identifies parameters to ensure that correct contributions to the response sensitivity computations are assembled. Three member functions are used in the classes containing desired model parameters: a member function identifying the parameters, a member function updating the values of the parameters, and a member function activating the parameters for sensitivity computations.

Two objects are involved in the mapping procedure: the randomVariablePositioner object (as part of the reliability analysis) and an object (e.g., an element or material object) of the finite element domain. The randomVariablePositioner object makes use of the object from the finite element domain as its data member. The detailed mapping procedure is described as follows. First, the setParameter method of the finite element object creates a link between the relevant random variables and the parameter in the finite element object. Then, when the reliability analysis updates the random variables in the finite element model, the update method of the randomVariablePositioner object and the updateParameter method of the finite element object are called upon to update the values of the model parameters using the new random variable values through the previously created link.
5.4
In nonlinear finite element analysis it is common to employ material models with sudden transitions from elastic to plastic response. As discussed in Haukaas and Der Kiureghian (2004), this may lead to discontinuities in response sensitivities. This also has an adverse effect on the convergence to the most probable point in reliability analysis. In this thesis we emphasize the potential negative effect on the optimization analysis. In fact, the effect of gradient discontinuities due to sudden transitions from elastic to plastic response is dramatically detrimental to the performance of the optimization algorithm. This is because (1) the proof of convergence of the optimization algorithms requires continuously differentiable limit-state functions; (2) the discontinuities in the gradient may cause the algorithm to stall; and (3) the abrupt changes in the gradient may cause ill-conditioning (even though it is theoretically acceptable) and hence slow convergence to a solution. This leads to the conclusion that the issue of gradient discontinuities is even more important in the RBDO analysis than in the search for the most probable point in the stand-alone reliability analysis. It is stressed that the nonconvergence or slow convergence problems are expected, since the assumption of continuous differentiability is violated. In fact, all standard nonlinear programming algorithms will experience difficulties when applied to such inappropriate problems.
5.4.1 Smooth Material Model
One remedy, discussed in Haukaas and Der Kiureghian (2004), is to smooth the transition between the elastic and plastic ranges with a circular segment whose tangent coincides with those of the elastic and plastic ranges at the intersection points. The smoothed stress-strain curve and its tangent stiffness are thus continuous. The circular segment starts from γ·F_y in the stress-strain curve, where the parameter γ satisfies 0 < γ < 1. Figure 5.2 illustrates how the bi-linear steel material model is smoothed with a circular segment. Haukaas and Der Kiureghian (2004) show that the response does not change significantly as a result of smoothing. They recommend the selection of the parameter γ > 0.8 to avoid results that differ significantly from those obtained with the bi-linear model.
Figure 5.2 Bi-linear steel material model smoothed with circular segment
It is demonstrated in Haukaas and Der Kiureghian (2004) that the smooth material model leads to continuity in response sensitivities. To compute response sensitivities using the DDM, the authors derive the DDM equations for the smooth material model and implement them in OpenSees. They also present several examples to show that the smooth material model successfully avoids the response sensitivity discontinuity problem in the reliability analysis.
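The following Python sketch reproduces such a smoothing in the normalized stress-strain plane (x = ε/ε_y, y = σ/F_y); the circle geometry is derived here from the two tangency conditions and is an illustrative reconstruction, not the exact equations of Haukaas and Der Kiureghian (2004):

    import numpy as np

    def smoothed_bilinear_stress(eps, E, Fy, alpha, gamma=0.8):
        """Tension branch of a bi-linear material smoothed with a circular
        segment that is tangent to the elastic line y = x at (gamma, gamma)
        and to the hardening line y = 1 + alpha*(x - 1)."""
        x = eps * E / Fy  # normalized strain
        # Radius of the tangent circle (from the two tangency conditions).
        r = (1 - alpha) * (1 - gamma) / (
            np.sqrt(alpha**2 + 1) - (1 + alpha) / np.sqrt(2))
        cx = gamma + r / np.sqrt(2)   # circle centre (normalized plane)
        cy = gamma - r / np.sqrt(2)
        xt = cx - r * alpha / np.sqrt(alpha**2 + 1)  # end of the transition
        if x <= gamma:
            y = x                                  # elastic range
        elif x <= xt:
            y = cy + np.sqrt(r**2 - (x - cx)**2)   # smooth circular segment
        else:
            y = 1 + alpha * (x - 1)                # hardening range
        return y * Fy

Both the stress and its slope are continuous at the two intersection points, which is precisely the property needed for continuous response sensitivities.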
5.4.2 Section Discretization Scheme
The section discretization scheme discretizes the element cross-section into smaller regions or fibres. The uniaxial material response for each fibre is integrated to produce approximately smooth behaviour. The use of this section discretization scheme makes the nonlinear structural response "approximately continuously differentiable," meeting the requirement of the RBDO algorithms.
The section discretization scheme takes advantage of the fibre section object in OpenSees. The fibre section is ideal for defining a reinforced concrete section: 1-2 top and bottom fibres of the concrete cover using normal strength concrete, 10-20 side fibres of the concrete cover using normal strength concrete, 10-20 fibres of the concrete core using higher strength confined concrete, and several layers of reinforcement bars. Examples of such fibre sections are shown in Figure 4.3.
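As an illustration of why the fibre integration smooths the section response, consider the following sketch, in which the section forces are obtained by summing the uniaxial fibre responses under the plane-sections assumption; the data layout is hypothetical:

    def section_forces(fibers, eps_axial, curvature):
        """Integrate uniaxial fibre stresses into the section axial force N
        and bending moment M. 'fibers' is a list of (y, area, material)
        tuples, where y is the fibre distance from the reference axis and
        material(strain) returns the uniaxial stress."""
        N, M = 0.0, 0.0
        for y, area, material in fibers:
            strain = eps_axial - y * curvature  # plane sections remain plane
            stress = material(strain)
            N += stress * area
            M -= stress * area * y
        return N, M

Because each fibre yields at a slightly different load level, the summed section response changes stiffness in many small steps rather than in one abrupt jump.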
Chapter 6
Optimization
6.1 Problem Reformulation
The RBDO problem in Eq. (1.2) minimizes the initial design cost plus the expected cost of failure subject to reliability and structural constraints. Let x be the vector of design variables. Then, this problem takes the form (see Chapter 1)

x* = argmin{ c_0(x) + c_f(x)·p_f(x) : f(x) ≤ 0, p_f(x) ≤ p̄_f }   (6.1)

where c_0(x) is the initial cost of the design, c_f(x) is the present cost of future failure, p_f(x) denotes the probability of failure for one failure event, f(x) are deterministic structural constraint functions, and p̄_f is the prescribed upper bound on the failure probability.
The solution algorithm for Eq. (6.1) requires the functions involved, c_0(x), c_f(x), and f(x), to be continuous, and the constraint set {x : f(x) ≤ 0} to be closed and bounded. Since the failure probability p_f(x) is involved in both the objective and constraint functions, the failure probability is also required to be continuous. Royset et al. (2002) have proven that this is the case in realistic design problems.
The problem in Eq. (6.1) is computationally difficult since the failure probability, which depends on the design variables, is defined in terms of a high-dimensional integral over the domain of random variables. Royset et al. (2002) replace the failure probability p_f(x) with the parameter a, which is updated during the optimization analysis, to develop a tractable problem:

x* = argmin{ c_0(x) + c_f(x)·a : f(x) ≤ 0, p_f(x) = a, 0 ≤ a ≤ p̄_f }   (6.2)

Royset et al. (2002) have proven that Eqs. (6.1) and (6.2) have identical global optimal solutions when some assumptions are satisfied.
Because the gradient of the true failure probability is unavailable, it is problematic that the failure probability still appears among the constraints. This is addressed by making use of concepts from the first-order reliability method (FORM). As outlined in Chapter 2, the FORM estimate of the failure probability is p_f = Φ(−β), where the reliability index β is the minimum distance from the origin to the limit-state surface g = 0. To this end, the equality constraint p_f(x) = a in Eq. (6.2) is replaced by the constraint ψ = 0:

x* = argmin{ c_0(x) + c_f(x)·a : f(x) ≤ 0, ψ(x, a) = 0, 0 ≤ a ≤ p̄_f }   (6.3)

where

ψ(x) = − min_{u ∈ B(0, r)} { g(x, u) }   (6.4)

and B(0, r) denotes the ball of radius r centred at the origin of the standard normal space. This reformulation is motivated by the desire to cast the optimization problem in a semi-infinite form, thereby allowing it to be solved using the method of outer approximations (DSA-MOOA). This is because the constraint ψ = 0 in fact represents an infinite number of constraints, one for each point within the hyper-sphere. In this thesis we also demonstrate that a simplified approach (DSA-S) can be used to solve the problem by only including one constraint to enforce ψ = 0.
If a FORM approximation of the failure probability is acceptable, then the reformulation of the constraint p_f(x) = a in terms of the function ψ is correct, provided the ball radius is r = −Φ⁻¹(a). However, if the limit-state function is prescribed in terms of response quantities from a nonlinear finite element analysis, then the limit-state function is nonlinear. To account for such nonlinearity, a correction factor t is introduced: r = −Φ⁻¹(a)·t. Replacing the equality constraint ψ = 0 by an inequality constraint then yields the final problem:

x* = argmin{ c_0(x) + c_f(x)·a : f(x) ≤ 0, ψ(x, a) ≤ 0, 0 ≤ a ≤ p̄_f }   (6.5)

Royset et al. (2002) have proven that replacing the equality constraint in Eq. (6.3) by an inequality constraint does not alter the solution. This proof assumes that the failure cost is positive, and that the origin in the standard normal space is in the safe domain. The former assumption is trivially satisfied, and the latter one is generally satisfied due to the high reliability of structures. (Note that the solution algorithm requires the solution domain of Eq. (6.3) to remain fixed. This is not the case, because r varies during the optimization process. In the computer implementation, this problem is solved by applying the transformation u = r·ū, where the solution domain of ū remains a ball of unit radius, namely ū ∈ B(0, 1).)
Eq. (6.5) is the final reformulation of the original problem. If the limit-state function is linear or if the FORM approximation of the failure probability is acceptable, then t is set at 1 and the reformulated problem in Eq. (6.5) has the same solution as the original problem in Eq. (6.1). This is proven by Royset et al. (2002). Moreover, the method of outer approximations (MOOA) algorithm used to solve the semi-infinite problem in Eq. (6.5) has convergence proofs (Polak, 1997). Hence, we are guaranteed to find a converged solution for our final approximate problem in Eq. (6.5).
However, the first-order approximation of the reliability problem may be a poor assumption when the limit-state function is highly nonlinear. That is, in nonlinear finite element reliability problems, the parameter a from the first-order approximation does not equal the probability of failure p_f(x) obtained by more accurate reliability methods (for example, importance sampling). To be able to deal with these cases, we solve the final approximate problem by updating the value of the parameter t. The parameter t starts from unity and is updated during the optimization analysis to account for the nonlinearity in the limit-state function. In this manner, approximate solutions are obtained with increasing accuracy as the algorithm proceeds. Specifically, the parameter t is updated by multiplying with the correction factor Φ⁻¹(a)/Φ⁻¹(p_f(x)).
The philosophy behind this update is that if p_f(x) > a, then the constraint ψ ≤ 0 in the final approximate problem allows the limit-state surface {u : g(x, u) = 0} to come too close to the origin in the u-space, thus requiring the radius of the ball associated with ψ to be increased. The increase of the ball radius is obtained by increasing t. If p_f(x) < a, then the limit-state surface is required to be too far away from the origin in the u-space by the constraint ψ ≤ 0, and the size of the ball must be reduced (i.e., t is reduced).
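A minimal sketch of this update, using SciPy's standard normal quantile function and assuming that a and the accurate failure probability are available from the preceding optimization and reliability tasks:

    from scipy.stats import norm

    def update_t(t, a, pf_accurate):
        """Multiply t by the correction factor Phi^{-1}(a)/Phi^{-1}(pf), so
        the ball radius r = -Phi^{-1}(a)*t grows when the accurate failure
        probability exceeds a, and shrinks otherwise."""
        return t * norm.ppf(a) / norm.ppf(pf_accurate)

    # Example with the Case 2 values reported in Chapter 7:
    t_new = update_t(1.0, 0.00135, 0.00146)  # about 1.008 (thesis reports 1.0077)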
6.2 DSA-MOOA Approach
In this thesis we employ solution algorithms that are termed "decoupled" and
"sequential." The justification for the decoupled characterization is that any reliability
method can be used to obtain a more precise estimate of the failure probability than the
FORM analysis. The justification for the sequential characterization is that the
optimization analysis in Eq. (6.5) and the reliability analysis in Eq. (6.4) are solved
repeatedly and in sequence to address the bi-level problem in Eq. (6.1). The reliability
constraint is updated for each optimization analysis in Eq. (6.5). It is stressed that the
decoupled approach allows flexibility in the choice of optimization algorithm and
reliability computation method.
In this section we present the implementation of the DSA-MOOA approach developed
by Kirjner-Neto et al. (1998) and Royset et al. (2002). It makes use of the problem
formulation in Eq. (6.5). Figure 6.1 presents a flow chart of the DSA-MOOA algorithm,
which consists of iterations at several levels. Upon the initialization of design x and
parameters a and t, the top level iteration includes three tasks:
A1. Solve the semi-infinite optimization problem in Eq. (6.5) for the current value of t; this comprises tasks B1 to B3 described below.
A2. Compute the failure probability p_f(x) for the new design by any reliability method, e.g., FORM or importance sampling.
A3. Update the parameter t.
[Figure 6.1: flow chart. Initialization: i = 0, j = 0, t_0 = 1, a_0 = p̄_f, x_0 = (x_0, a_0). Task A1 comprises: B1, inner approximation, solving min{ g(x, u) : ‖u‖² − [−Φ⁻¹(a)·t]² ≤ 0 } and setting ψ_j(x) = −g(x, −Φ⁻¹(a)·t·ū_j); B2, constraints expansion, adding u_j to the constraint set if ψ_j(x) > ρ^j − ρ; B3, outer approximation, solving min_{(x,a)}{ c_0(x) + c_f(x)·a : f(x) ≤ 0, ψ(x, a) ≤ 0, 0 ≤ a ≤ p̄_f } and setting j = j + 1. Task A2: compute p_f(x_{i+1}). Task A3: update t_{i+1} = t_i·Φ⁻¹(a_i)/Φ⁻¹(p_f(x_{i+1})), i = i + 1. On convergence: optimal design.]
Figure 6.1 Flow chart of DSA-MOOA approach
B1. Inner approximation: Solve the reliability problem in Eq. (6.6) using the Polak-He algorithm. In this thesis, the negative value of the limit-state function at the vector u_j is denoted ψ_j. Terminate the Polak-He algorithm when a tolerance criterion based on σ_N is satisfied. Here, σ_N = 0.1/N^0.4 is a user-defined series that decreases as the discretization number N increases.
B2. Constraints expansion: Add the point u_j and the value ψ_j(x) to the reliability constraint set whenever ψ_j(x) exceeds ρ^j − ρ, where ρ is a user-defined parameter usually set at 0.5. In this manner, the number of constraints represented by ψ ≤ 0 evolves during these iterations. We have observed that these constraints are simply a collection of points u_j for which the limit-state function is required to stay positive in task B3.
B3. Outer approximation: Solve the constrained optimization problem in Eq. (6.5) using the Polak-He algorithm. The number of constraints in this problem is equal to the number of structural constraints, plus the N constraints added by the previous item and the single constraint a ≤ p̄_f.
According to proofs presented by Polak (1997) for the MOO A algorithm, an "exact"
solution is found if the discretization number N approaches infinity.
Tasks B1, B2, and B3 are repeated until the optimality conditions are satisfied according to a user-defined precision tolerance. Typically, 75 to 150 iterations are required to find the optimal solution. These tasks are described in further detail in subsections 6.2.1-6.2.3, in which we focus on the connections between the particular problems discussed in this chapter and the general algorithms discussed in Chapter 3. One important advantage of the DSA-MOOA approach is reiterated, namely that the reliability and optimization calculations are decoupled, thus allowing flexibility in the choice of the optimization algorithm in task A1 and the reliability computation method in task A2. In addition to the MOOA algorithm, Polak (1997) provides a pre-defined discretization scheme to solve the semi-infinite problem in task A1. Similarly, the user is free to select the method used to compute the failure probability in task A2. A practical concern is the numerical difficulties caused by the potential difference in orders of magnitude between a and the other design variables x. For this reason, in our implementation the parameter b = −Φ⁻¹(a) is used as a substitute for the reliability index β, in the same way that a is a substitute for the failure probability p_f(x).
6.2.1 B1 - Inner Approximation
Given the design x and the parameters a and t, task B1 solves the following reliability problem:

ψ(x) = − min { g(x, r·ū) : ‖ū‖² − 1 ≤ 0 },  r = −Φ⁻¹(a)·t   (6.6)

The solution is denoted u*, or u_j (for the j-th discretization point). The corresponding constraint value ψ_j = −g(x, r·u_j) is the minimum of the limit-state function within a ball of radius r.
The Polak-He algorithm described in Chapter 3 is used to solve Eq. (6.6), an inequality-constrained optimization problem with a single constraint. The fact that there is only a single constraint simplifies the optimization process. The values and gradients of the functions F and f in Eq. (3.1) are

F(x, u) = g(x, u),  ∇F(x, u) = ∇g(x, u)   (6.7)

f_1(x, u) = ‖u‖² − 1,  ∇f_1(x, u) = 2u   (6.8)
The first step searches for the direction vector h_i by solving the quadratic sub-optimization problem θ_i(x) in Eq. (3.14), subject to the linear constraint μ_0 + μ_1 = 1. By setting μ_0 = 1 − μ_1 and substituting Eqs. (6.7) and (6.8) into Eq. (3.14), this sub-optimization problem can be simplified to

θ_i(x) = − min_{0 ≤ μ ≤ 1} { μ·f_1(x, u_i) + ½ ‖∇g(x, u_i) + μ·[2u_i − ∇g(x, u_i)]‖² }   (6.9)

Eq. (6.9) can be solved by setting ∂θ_i/∂μ = 0, which yields

μ_i* = − ( f_1(x, u_i) + ∇g(x, u_i)ᵀ[2u_i − ∇g(x, u_i)] ) / ‖2u_i − ∇g(x, u_i)‖²   (6.10)

whenever the right-hand side of Eq. (6.10) has a value in [0, 1]. Otherwise, the solution of Eq. (6.9) is either μ_i* = 0 or μ_i* = 1, whichever yields the lowest value of the objective function in Eq. (6.9). The three types of solutions are illustrated in Figure 6.2. Finally, the search direction is

h_i = − [ (1 − μ_i*)·∇g(x, u_i) + 2μ_i*·u_i ]   (6.11)

Figure 6.2 μ solutions for inner approximation using the Polak-He algorithm
The second step finds an appropriate step size λ_i along the search direction h_i using the Armijo rule, and the iterate is replaced by u_{i+1} = u_i + λ_i·h_i. The discretization number N grows towards infinity, so σ_N starts from the larger tolerance 0.1 and goes to 0 as N increases. Since high accuracy is only needed when approaching the design point, a high tolerance in the beginning of task B1 is acceptable and can save computational time.
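One iteration of this two-step scheme can be sketched as follows, using the formulas reconstructed in Eqs. (6.7)-(6.11); the Armijo line search is replaced by a fixed step purely for brevity:

    import numpy as np

    def polak_he_inner_step(u, grad_g_u, step=0.1):
        """One Polak-He iteration for the inner problem with the single
        constraint f1(u) = ||u||^2 - 1 (gradient 2u). grad_g_u is the
        gradient of the limit-state function at u."""
        f1 = u @ u - 1.0
        d = 2.0 * u - grad_g_u                  # difference of the two gradients
        denom = d @ d
        if denom > 0.0:
            mu = -(f1 + grad_g_u @ d) / denom   # stationary point of Eq. (6.9)
            mu = min(max(mu, 0.0), 1.0)         # clip to [0, 1], cf. Eq. (6.10)
        else:
            mu = 0.0
        h = -((1.0 - mu) * grad_g_u + 2.0 * mu * u)  # search direction, Eq. (6.11)
        return u + step * h                     # the Armijo rule would choose 'step'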
6.2.2 B2 - Constraints Expansion
Task B2 first collects the u_j and ψ_j(x) from the inner approximation (task B1) into a matrix. For N discretization points, we have an N-column matrix, in which each column contains the random variables u_j and the value ψ_j(x):

[ u_1     u_2     ...  u_N
  ψ_1(x)  ψ_2(x)  ...  ψ_N(x) ]   (6.12)
Second, task B2 assembles the reliability constraint set ψ(x), which always includes the constraint ψ_0(x) at the origin (u = 0). The reason to include the constraint at the origin is to make sure that ψ_0(x) = −g(x, 0) ≤ 0, i.e., g(x, 0) ≥ 0, which is the requirement of the problem reformulation in Section 6.1. Task B2 then updates the reliability constraints by accumulatively storing the solutions u_j whose ψ_j(x) exceeds the threshold ρ^j − ρ used in the current implementation. Finally, the reliability constraint set ψ(x) has the following form:

ψ(x) = [ u_0 = 0      u_1               ...  u_j               ...  u_N
         ψ_0(x) ≤ 0   ψ_1(x) > ρ¹ − ρ   ...  ψ_j(x) > ρ^j − ρ  ...  ψ_N(x) > 0 ]   (6.13)
In this manner, the number of constraints represented by ψ(x) evolves with the increase in the number of discretization points. These constraints are simply a collection of points u_j for which the limit-state function is negative in task B2 but is required to stay positive in task B3. Hence, the result of task B2 is an expanded constraint set ψ(x), which enters task B3 for the outer approximation. From the above description, we know that the number of columns of the matrix ψ(x) is less than or equal to N+1. Figure 6.3 illustrates the procedure used to assemble the reliability constraint set ψ(x) by collecting all of the points in the failure domain in the standard normal space.
[Figure 6.3: points u_j in the standard normal space, with the failure domain (g < 0, ψ_j > 0) separated from the safe domain (g > 0, ψ_j < 0) by the limit-state surface.]
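In code, the bookkeeping of task B2 is simple; the sketch below assumes the threshold test reconstructed above (with ρ = 0.5, as suggested in the text), and keeps the origin as the permanent first entry:

    def expand_constraints(constraint_set, u_j, psi_j, j, rho=0.5):
        """Append the point u_j from the inner approximation (task B1) to
        the reliability constraint set whenever psi_j exceeds rho**j - rho.
        constraint_set[0] is always the origin entry (u = 0)."""
        if psi_j > rho**j - rho:
            constraint_set.append((u_j, psi_j))
        return constraint_set

    # Usage: constraints = [(zero_vector, psi_0)]; then, for each new point,
    # constraints = expand_constraints(constraints, u_j, psi_j, j).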
6.2.3 B3 - Outer Approximation
Given the objective function, the structural constraints f(x) ≤ 0, the reliability constraints ψ(x) ≤ 0, and the extra constraint a ≤ p̄_f, task B3 solves Eq. (6.5) and updates the design x and the parameter a. This task is also solved using the Polak-He algorithm. As opposed to task B1, the number of constraints in task B3 is greater than one. In fact, the number of constraints increases during the analysis. According to Polak (1997), in the Polak-He algorithm a quadratic sub-optimization problem with linear constraints must be solved to obtain the search direction h_i. This is currently addressed by linking the quadratic programming software LSSOL (Gill et al., 1986) to OpenSees.
For a fixed number (q = m + n + 1) of constraints, Eq. (6.5) is an inequality-constrained problem. If the Polak-He algorithm is applied to task B3, the functions F and f in Eq. (3.1) have the following form:
Objective function: F(x) = c_0(x) + c_f(x)·a
Structural constraints: f_1(x), ..., f_m(x)
Reliability constraints: f_{m+1}(x) = ψ_1(x), f_{m+2}(x) = ψ_2(x), ..., f_{m+n}(x) = ψ_n(x)
Extra constraint: f_{m+n+1}(x) = a − p̄_f   (6.14)
The quadratic sub-optimization problem for the search direction reads

θ_i = − min_{μ} { μ_0·F(x_i) + Σ_{j=1}^{q} μ_j·f_j(x_i) + ½ ‖ μ_0·∇F(x_i) + Σ_{j=1}^{q} μ_j·∇f_j(x_i) ‖² }   (6.15)

where the multipliers are dependent: μ_0 + μ_1 + ... + μ_q = 1. Eq. (6.15) cannot be solved in the same way as Eq. (6.9) in the inner approximation, because more than one constraint is present. Instead, it is cast in the standard form accepted by LSSOL:

minimize F(μ) = gᵀμ + ½ ‖b − Gμ‖²   (6.16)

subject to the lower and upper bounds for all the variables and the general constraint:

0 ≤ μ_j ≤ 1,  j = 0, 1, ..., q,   μ_0 + μ_1 + ... + μ_q = 1   (6.17)

where the vector g collects the function values F(x_i) and f_j(x_i), and the matrix G collects the corresponding gradients with respect to the design variables x and the auxiliary variable a, i.e., ∂F(x_i)/∂x, ∂F(x_i)/∂a, ∂f_j(x_i)/∂x, and ∂f_j(x_i)/∂a.   (6.18)

LSSOL can solve Eq. (6.15) in a finite number of iterations and find the solution μ* = [μ_0*, μ_1*, ..., μ_q*]. The search direction is then

h_i = − [ μ_0*·∇F(x_i) + Σ_{j=1}^{q} μ_j*·∇f_j(x_i) ]   (6.19)
The second step finds an appropriate step size λ_i along the search direction h_i using the Armijo rule. Then, a new design is found by x_{i+1} = x_i + λ_i·h_i and used as input data for the next iteration. The iterations are terminated when conditions of the form

0 ≤ ψ_{N+1}(x_{i+1}) ≤ σ_{N+1}   (6.20)

−σ_{N+1} ≤ θ_{N+1}(x_{i+1}) ≤ 0   (6.21)

are satisfied, with θ_{N+1}(·) and ψ_{N+1}(·) defined in Eqs. (3.25) and (3.23), respectively. The definition of σ_N = 0.1/N^0.4 is the same as for the inner approximation: the tolerance starts from a larger value and goes to 0 as N increases. This is reasonable here, since the accuracy of the semi-infinite optimization algorithm gradually increases with the increase in the discretization number, and since high accuracy is only required when approaching the design point.
In task B3 we need to evaluate three functions and their gradients with respect to the augmented design variable vector x: the objective function, the structural constraint functions, and the reliability constraint functions:

∂f(x)/∂x = ∂[c_0(x) + c_f(x)·a]/∂x,   ∂ψ(x)/∂x = (∂ψ(x)/∂d)·(∂d/∂x)
∂f(x)/∂a = ∂[c_0(x) + c_f(x)·a]/∂a,   ∂ψ(x)/∂a = (∂ψ(x)/∂d)·(∂d/∂a)   (6.22)

where ψ is the negative value of the limit-state function g, ∂ψ/∂d is easily found because g is a simple algebraic expression in terms of the response quantities d, and ∂d/∂x are response gradients. Again, the existing FDM or DDM implementations in OpenSees are used to obtain the required response gradients.
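The chain rule of Eq. (6.22) translates directly into code; the sketch below assumes that the response gradients ∂d/∂x and ∂d/∂a have already been obtained from the FDM or DDM implementations:

    import numpy as np

    def reliability_constraint_gradients(dpsi_dd, dd_dx, dd_da):
        """Gradients of the reliability constraint psi with respect to the
        design variables x and the auxiliary variable a via the response d."""
        dpsi_dx = np.asarray(dpsi_dd) @ np.asarray(dd_dx)  # chain rule over d
        dpsi_da = np.asarray(dpsi_dd) @ np.asarray(dd_da)
        return dpsi_dx, dpsi_da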
In conclusion, the MOOA algorithm solves a series of inequality-constrained problems in tasks B1 and B3. As the discretization number N increases, the MOOA algorithm results in a gradually more accurate solution x. The optimal solution x* is reached when N equals infinity. At the optimal point, the value of the objective function reaches its minimum, and the first-order optimality conditions in Eq. (3.21) are satisfied. Therefore, one of the reliability constraints is active and equal to zero at the optimal point. This means that the reliability analysis finds the most probable failure point (MPP) at the optimal design.
6.3 DSA-S Approach
The DSA-S approach replaces the semi-infinite reliability constraint by the single constraint obtained in the last iteration; a convergence proof is therefore not available. However, the DSA-S approach is still attractive, because a consistent design is obtained at a considerably lower computational cost. The computational time spent on the discretization in the DSA-MOOA approach is saved, since there is only one reliability constraint. Another advantage of the DSA-S approach is the decoupling of the reliability and optimization calculations, which gives flexibility in selecting optimization algorithms and reliability computation methods.
[Figure 6.4: flow chart. Initialization: i = 0, j = 0, t_0 = 1, a_0 = p̄_f, x_0 = (x_0, a_0). Task C1: deterministic optimization (D1) and reliability constraint update (D2), with ψ(x) = −g(x, −Φ⁻¹(a)·t·u). Task C2: compute p_f(x_{i+1}). Task C3: update t_{i+1} = t_i·Φ⁻¹(a_i)/Φ⁻¹(p_f(x_{i+1})), i = i + 1. On convergence: optimal design.]
Figure 6.4 Flow chart of DSA-S approach
Figure 6.4 presents a flow chart of the DSA-S approach, which consists of iterations at several levels. Upon the initialization of the design x and the parameters a and t, the top level iteration includes three tasks:
C1. Update the design vector x and the auxiliary variable a.
C2. Compute the failure probability p_f(x) by any reliability method, e.g., FORM or importance sampling.
C3. Update the parameter t.
Tasks C2 and C3 are the same as tasks A2 and A3 in the DSA-MOOA approach. The top level iteration is repeated until the design vector x, the auxiliary design variable a, and the parameter t stabilize. Usually, 5 to 15 iterations (C1 to C3) are needed to reach a consistent reliability-based design.
Task C1 updates the values of x and a in two steps: a deterministic optimization analysis and a reliability constraint update. These are described in tasks D1 and D2, respectively. In the current implementation, the Polak-He algorithm is employed to solve these two tasks.
D1. Deterministic optimization analysis: The inequality-constrained optimization problem in Eq. (6.5) is solved using the Polak-He algorithm. The constraints in this problem include several structural constraints, one reliability constraint ψ(x) ≤ 0, and the extra constraint a ≤ p̄_f. During the first iteration, the random variables are set equal to their mean values, and the parameter a is set as p̄_f. The quadratic sub-optimization problem for the search direction can be solved with the quadratic programming software LSSOL (Gill et al., 1986) or the Matlab OPTM toolbox.
D2. Reliability constraint update: The reliability problem in Eq. (6.6) is solved using the Polak-He algorithm. Then, the reliability constraint is updated with the new random vector u. Terminate the Polak-He algorithm when the user-defined tolerance ε is satisfied. Task D2 is similar to task B1 in the DSA-MOOA approach with a fixed high tolerance.
Tasks D1 and D2 are repeated until both the design variables and the random variables are consistent. In other words, the calculations are terminated when ‖u_{i+1} − u_i‖ ≤ ε and/or i > i_max, where i_max is the maximum permitted number of iterations.
6.4 Implementation in OpenSees
The two approaches described above have been implemented in OpenSees, enabling the software to search for an optimal solution. OpenSees is a suitable platform for this purpose. This is mainly due to the object-oriented architecture that throughout the evolution of OpenSees has kept focus on extensibility. In this work, five new types of "domain components" (designVariable, designVariablePositioner, objectiveFunction, costFunction, and constraintFunction) and two new analysis types (runDSA-MOOAAnalysis and runDSA-SAnalysis, added to the ReliabilityAnalysis framework) were implemented. The new designVariable, designVariablePositioner, costFunction, and constraintFunction objects are added to the ReliabilityDomain, which also allows for an objectiveFunction object.
[Figure: new domain components of the ReliabilityDomain: designVariable, costFunction, constraintFunction, designVariablePositioner.]
Three new "analysis tools" are also implemented: NonlinSinglelneqOpt, NonlinMultilneqOpt, and LinMultilneqOpt. These are so-called base classes that promise features but do not contain actual implementations. Any number of sub-classes may be implemented to perform the promised features. This illustrates the extensibility feature of OpenSees: new algorithms to perform various analysis tasks can be implemented without having to modify the software framework. The base class NonlinSinglelneqOpt promises to solve tasks B1 and D2. The sub-class implemented for this base class is named PolakHeNonlinSinglelneqOpt.
The base class NonlinMultilneqOpt promises to solve tasks B3 and D1. The current subclass implementation is PolakHeNonlinMultilneqOpt. In each iteration of the Polak-He algorithm, a quadratic optimization problem with linear constraints is solved to find the search direction. This is fulfilled by the base class LinMultilneqOpt. Currently, the subclass LSSOLLinMultilneqOpt is available in the implementation of OpenSees.
[Figure 6.6: class hierarchy. ReliabilityAnalysis with runDSA-MOOAAnalysis and runDSA-SAnalysis; base classes NonlinSinglelneqOpt, NonlinMultilneqOpt, and LinMultilneqOpt with subclasses PolakHeNonlinSinglelneqOpt, PolakHeNonlinMultilneqOpt, and LSSOLLinMultilneqOpt; analysis tools evaluateFun and evaluateGradFun.]
Figure 6.6 Software framework for optimization analysis in OpenSees (the triangle symbol denotes the relationship between base class and subclasses, while the diamond symbol denotes analysis tools)
The category of "analysis tools" also contains classes such as evaluateFun and
evaluateGradFun. The evaluateFun object evaluates the values of objective functions,
76
cost functions, and constraint functions. The evaluateGradFun object evaluates the
gradients of objective functions, cost functions, and constraint functions.
[Figure: data flow between the optimization module (runDSA-MOOAAnalysis/runDSA-SAnalysis, NonlinMultilneqOpt, evaluateFun, evaluateGradFun), the reliability module, and the finite element module. The optimization module evaluates [c_0(x) + c_f(x)·a] and f_j(x) with their gradients; the reliability module returns p_f and ψ with the gradient ∂ψ/∂x for given (u, x); the finite element module returns the response d and the sensitivities ∂d/∂u and ∂d/∂x.]
The figure illustrates the interaction between the optimization module, the finite element module, and the reliability module in OpenSees. Given the design vector x and the random vector u, the finite element module computes the response d and the response sensitivities ∂d/∂u and ∂d/∂x. The reliability module evaluates the limit-state function value g, and the gradients ∂g/∂u with respect to the random variables and ∂g/∂x with respect to the design variables.
Chapter 7
This chapter demonstrates the RBDO implementations through case studies of a building frame. Three structural models are considered: (1) a linear model, where the elements are modelled using the elasticBeam element of OpenSees; (2) a nonlinear model, where the elements are modelled using the beamWithHinges element of OpenSees; and (3) a nonlinear model, where the elements are modelled using the nonlinearBeamColumn element. The response of the building is assessed using the static "pushover" analysis. The limit-state function is specified in terms of the total drift of the structure. The objective is to minimize the total expected cost of the structure, subject to reliability and structural constraints. Moreover, this study compares the convergence performance and the computation time for the algorithms and structural models presented herein. In particular, convergence problems may occur in the algorithms that address the inner and outer approximation problems described in the previous chapter. For example, a scaling of the involved functions is required when the Polak-He algorithm is used to address these problems. We also observe that the computational time is increased when redundant (inactive) constraints are added. Such experience from the hands-on optimization analysis is valuable for users of the developed software and is reported below.
7.1 Six-Storey Building Frame
The case studies consider a six-storey reinforced concrete building with three bays (two 9 m outer bays and a central 6 m corridor bay) in the East-West (EW) direction. The interior columns are all 500×500 mm, while the exterior columns are all 450×450 mm. The beams of both the NS and EW frames are 400 mm wide × 600 mm deep for the first three storeys and 400×550 mm for the top three storeys. Concrete with mean strength f'_c = 30 MPa is used throughout, and the reinforcement has mean yield strength f_y = 400 MPa. The Canadian Concrete Design Handbook (1995) specifies that the frame is designed as a ductile moment resisting frame with R = 4.0, where R is the ductility force modification factor that reflects the capacity of a structure to dissipate energy through inelastic behaviour.
[Figure 7.1: finite element model of the six-storey frame, with lateral loads H1 to H6 applied at the floor levels and the roof, vertical loads P1 to P6 on the columns, and bay widths of 9,000 mm, 6,000 mm, and 9,000 mm.]
This thesis aims to optimize the design of the columns and beams of this ductile
moment resisting frame. For this purpose, we consider linear and nonlinear pushover
analyses of the second EW frame. The finite element model and the applied loads are
illustrated in Figure 7.1.
The load case of "l.Oxdead load + 0.5xlive load + l.Oxearthquake load" is considered
in the analysis. We consider dead loads and live loads as deterministic. The lateral loads
from ground motion have lognormal distribution. Their means and coefficients of
variation are shown in Table 7.1.
Table 7.1 Vertical loads and lateral loads (c.o.v. indicates the coefficient of variation)

Loads  Mean       c.o.v.  Type       Description
H1     28490 kN   0.15    lognormal  random lateral load on floor 1
H2     48950 kN   0.15    lognormal  random lateral load on floor 2
H3     70070 kN   0.15    lognormal  random lateral load on floor 3
H4     89100 kN   0.15    lognormal  random lateral load on floor 4
H5     109780 kN  0.15    lognormal  random lateral load on floor 5
H6     131890 kN  0.15    lognormal  random lateral load on roof
P1     108000 kN  N/A     N/A        deterministic vertical load
P2     105000 kN  N/A     N/A        deterministic vertical load
P3     96000 kN   N/A     N/A        deterministic vertical load
P4     184000 kN  N/A     N/A        deterministic vertical load
P5     178000 kN  N/A     N/A        deterministic vertical load
P6     182000 kN  N/A     N/A        deterministic vertical load

7.1.1 Linear Pushover Analysis
"equal displacement principle" shown i n Figure 7.2 is employed to compute the total
inelastic displacement d , subject to the lateral seismic force V. The solid line in the
e
figure denotes the inelastic response. The corresponding linear system, signified by the
dashed line, applies equivalent lateral seismic force V = VxR and results in the same
e
displacement d .
e
dJR
the linear
case,
12 design
x = (b ,h ,b ,h ,b^,h ,b ,h ,b ,h ,b ,h ),
x
variables
are collected
in
the vector
variables describing the loading and material properties are collected i n the vector
\ = (H ,H ,H ,H ,H ,H , ),
i
42
equivalent lateral loads from the first storey to the roof, respectively. E\ to 4 2 represent
the modulus of elasticity of the concrete material for all 42 elements. W e assume that all
random variables are lognormally distributed with the means and coefficients of variation
listed i n Table 7.3. Random variables H\ to H$ are correlated with the correlation
coefficient o f 0.7, while random variables E\ to 4 2 are correlated with the correlation
coefficient of 0.7.
82
Table 7.2 Definition and initial values of design variables for Cases 1 and 2

Variable  Initial Value  Description
b1 × h1   0.45×0.45 m    width and depth of exterior columns of first three storeys
b2 × h2   0.45×0.45 m    width and depth of exterior columns of top three storeys
b3 × h3   0.50×0.50 m    width and depth of interior columns of first three storeys
b4 × h4   0.50×0.50 m    width and depth of interior columns of top three storeys
b5 × h5   0.40×0.60 m    width and depth of first three storeys' beams
b6 × h6   0.40×0.55 m    width and depth of top three storeys' beams
Table 7.3 Statistics of random variables in Case 1 (c.o.v. indicates the coefficient of variation, and c.c. indicates the correlation coefficient)

Variable  Mean         c.o.v.  c.c.  Type       Description
H1        4×28490 kN   0.15    0.7   lognormal  equivalent lateral load on floor 1
H2        4×48950 kN   0.15    0.7   lognormal  equivalent lateral load on floor 2
H3        4×70070 kN   0.15    0.7   lognormal  equivalent lateral load on floor 3
H4        4×89100 kN   0.15    0.7   lognormal  equivalent lateral load on floor 4
H5        4×109780 kN  0.15    0.7   lognormal  equivalent lateral load on floor 5
H6        4×131890 kN  0.15    0.7   lognormal  equivalent lateral load on roof
E1 ~ E42  24648 MPa    0.15    0.7   lognormal  modulus of elasticity of concrete
The reliability problem for the frame is defined in terms of the limit-state function

g(d(x, v)) = 23.1 × 0.02 − d_roof   (7.1)

where 23.1 m is the height of the frame, 0.02 is the maximum limit of the drift ratio, and d_roof is the lateral displacement at the roof level.
In this thesis, our objective is to achieve a frame design that minimizes the total expected cost, given specific constraints. For this purpose, we model the initial cost of the design and the cost of failure in terms of the total volume of the members. The cost of failure is assumed to be five times the volume of the members. This leads to the following objective function:

c_0(x) + c_f(x)·a = Σ_{i=1}^{6} L_i·b_i·h_i + 5·( Σ_{i=1}^{6} L_i·b_i·h_i )·a   (7.2)

where L_i represents the total length of the members in each of the six categories identified in the design vector, while b_i and h_i are the cross-sectional dimensions. The reliability constraint is prescribed as p_f(x) ≤ 0.00135, which implies a minimum reliability index of 3.0. The structural constraints are prescribed to be 0 < b_i, h_i and 0.5 ≤ b_i/h_i ≤ 2 to ensure positive dimensions and appropriate aspect ratios, where i = 1, 2, 3, 4, 5, 6.
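For concreteness, Eqs. (7.1) and (7.2) can be coded as follows; the member lengths L_i are assumed to be known from the frame geometry:

    def limit_state(d_roof, height=23.1, drift_limit=0.02):
        """Eq. (7.1): failure (g < 0) when the roof displacement exceeds
        2% of the 23.1 m frame height."""
        return height * drift_limit - d_roof

    def expected_total_cost(b, h, L, a, failure_factor=5.0):
        """Eq. (7.2): initial member volume plus the expected failure cost,
        with c_f = 5*c_0. b, h, L are sequences over the six member
        categories; a stands in for the failure probability."""
        c0 = sum(Li * bi * hi for Li, bi, hi in zip(L, b, h))
        return c0 + failure_factor * c0 * a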
A stand-alone finite element reliability analysis using elasticBeam elements was performed. For the initial values of the design variables in Table 7.2 and the mean realizations of the random variables in Table 7.3, the lateral displacement at the roof level was 238 mm. The corresponding drift ratio was 238/23100 = 1.03%, which was less than the limit of 2%. A first-order reliability analysis (FORM) resulted in a reliability index β = 3.646 and a corresponding failure probability of 0.000133, which satisfied the prescribed reliability constraint. The total expected cost of the initial design in terms of volume was 54.062 m³.
The first optimization analysis was performed using the DSA-MOOA approach. This approach starts with the semi-infinite optimization analysis (task A1), which iteratively updates the constraints represented by ψ and obtains improved designs for t = 1. These were identified earlier as tasks B1 to B3. In this case, the limit-state function was linear, since a linear relationship exists between the random variables and the response quantity d_roof. Convergence was achieved within 1 to 3 iterations for task B1, and within 1 to 10 iterations for task B3. After discretizing the ball in the method of outer approximations (MOOA) algorithm by 75 points, that is, after 75 loops of tasks B1 to B3, the algorithm repeatedly produced the same design. This was taken to indicate convergence. At the optimal design there were five reliability constraints. The tolerance of this solution to the "true" converged solution was σ = 0.1/75^0.4 = 1.78×10⁻². The total expected cost was reduced from 54.062 m³ to 38.701 m³, while the failure probability was 0.00135, which satisfied the reliability constraint. Then, an importance sampling analysis based on the new design variables was performed in task A2 to get the "real" failure probability with a 2% coefficient of variation of the sampling result. The results were 38.711 m³ for the total cost and 0.00140 for the failure probability. This difference was acceptable, and the RBDO was stopped after one top-level iteration (tasks A1 and A2). This was due to the linear nature of the problem.
The second optimization analysis was performed using the DSA-S approach. The algorithm started with task C1, which sequentially completed a deterministic optimization analysis and then found a new reliability constraint for t = 1. These were identified earlier as tasks D1 and D2. Similar to the DSA-MOOA approach, convergence within tasks D1 and D2 was achieved quickly (within 1 to 30 iterations for task D1 and within 1 to 6 iterations for task D2), since the limit-state function was linear. We used the same tolerance as with the DSA-MOOA approach to judge whether convergence was achieved (i.e., ε = 0.1/75^0.4 = 1.78×10⁻²). The optimal design was achieved after three loops of tasks D1 and D2. The DSA-S approach and the DSA-MOOA approach gave the same design: the total expected cost was 38.701 m³ and the failure probability was 0.00135. In the next task, C2, we got the same solution as in task A2 in the DSA-MOOA approach using importance sampling with a 2% coefficient of variation of the sampling result. We accepted this as the optimal design and terminated the analysis.
Table 7.4a Results from RBDO analysis for Case 1 (selected columns; the full table also lists the remaining design variables b1 to h5)

         t      b6     h6     a         p_f       c_0+c_f·a  c_0+c_f·p_f
Initial  1.000  0.4    0.55   0.000133  0.000141  54.062     54.064
Optimal  1.000  0.261  0.522  0.00135   0.00140   38.701     38.711
Tables 7.4a and 7.4b show the results obtained from the two implemented approaches. The presented results include the values of the 12 design variables, the auxiliary parameter a, the failure probability p_f from the importance sampling, and the total expected costs corresponding to a and p_f. The first rows show the values of the initial design, while the second rows show the values of the optimal design. The two approaches produced the same solution, although the DSA-MOOA approach guaranteed convergence with a first-order approximation, while the DSA-S approach did not.
Figure 7.3 Structural responses for Case 1 (load factor versus roof displacement) at: (1) the mean point of the initial design; (2) the MPP of the initial design; (3) the mean point of the optimal design; and (4) the MPP of the optimal design.
Figure 7.3 shows the structural response at four characteristic realizations of the design variables and random variables. The response at the mean realization of the random variables for the initial (original) design is shown as the thin solid line. As expected, this response is linear. At the most probable failure point (MPP) of the initial design, the structural response is still linear, but reaches the 2% drift limit of 0.462 m, as prescribed by the limit-state function. The structural response at the mean realization of the random variables for the optimal design is shown as the thick solid line. The response is linear, but has a larger displacement than in the initial design, which is consistent with the failure probability of the structure increasing from 0.000133 to 0.00135. Finally, the structural response at the MPP of the optimal design is linear and reaches the 2% drift limit. The optimal design has an acceptable reliability, but a lower total expected cost. This serves as an indication of the usefulness of the RBDO approach.
Table 7.5 Number of limit-state function calls in the RBDO analyses for Case 1

                     DSA-MOOA  DSA-S
g only (DDM)         555       147
g with ∂g/∂u (DDM)   80        8
g with ∂g/∂x (DDM)   347       58
Total (DDM)          982       213
Total (FDM)          8,559     1,227
Importance sampling  112,486   112,607
For the DSA-MOOA approach with the DDM, 555 evaluations of the limit-state function are supplemented by 80 evaluations associated with the computations of ∂g/∂u and 347 evaluations associated with the computations of ∂g/∂x. Thus, altogether 982 limit-state function calls are required for the DDM method. On the other hand, the FDM method computes the gradients using one extra limit-state function evaluation for each random variable and design variable. Thus, the total number of limit-state function calls required for the FDM method is 555, plus 80 times the number of random variables (48) and 347 times the number of design variables (12). In total, 8,559 limit-state function calls are required, which is much more than needed for the DDM method. The two approaches require a similar number of simulations to compute the failure probability using importance sampling, as shown in the last row of Table 7.5.
The DSA-S approach, which requires 213 (147 + 8 + 58) limit-state function calls, appears to be more efficient than the DSA-MOOA approach, which requires 982 limit-state function calls for the DDM method. The key reason for this is that only one reliability constraint is maintained in the DSA-S approach, while the DSA-MOOA approach expands the reliability constraints step by step when discretizing the ball with more points.
7.1.2 Nonlinear Pushover Analysis Using beamWithHinges Elements
In this section we perform a nonlinear pushover analysis by using the beamWithHinges element of OpenSees. We consider the plasticity to be concentrated over 10% of the element length at each element end. The elastic properties are integrated over the beam interior, which is considered to be linear elastic. Forces and deformations of the inelastic region are sampled at the hinge midpoints. A bi-linear or smooth uniaxial material is used in the plastic hinge region to model the moment-rotation relationship.
[Figure: beamWithHinges element, with plastic hinges of length 0.1L at the left and right nodes and a linear elastic interior of length 0.8L.]
As in the linear case, the random variables comprise the lateral loads H_1 to H_6 and the moduli of elasticity E_1 to E_42 describing the loading and material properties. We assume that all random variables are lognormally distributed with the means and coefficients of variation listed in Table 7.6. The random variables H_1 to H_6 are correlated with the correlation coefficient of 0.7, and the random variables E_1 to E_42 are correlated with the correlation coefficient of 0.7. The limit-state function and the objective function are as defined in Eqs. (7.1) and (7.2). The reliability constraint and the structural constraints are as prescribed for the linear structure.
Table 7.6 Statistics of random variables for Case 2 (c.o.v. indicates the coefficient of variation, and c.c. indicates the correlation coefficient)

Variable  Mean       c.o.v.  c.c.  Type       Description
H1        28490 kN   0.15    0.7   lognormal  lateral load on floor 1
H2        48950 kN   0.15    0.7   lognormal  lateral load on floor 2
H3        70070 kN   0.15    0.7   lognormal  lateral load on floor 3
H4        89100 kN   0.15    0.7   lognormal  lateral load on floor 4
H5        109780 kN  0.15    0.7   lognormal  lateral load on floor 5
H6        131890 kN  0.15    0.7   lognormal  lateral load on roof
E1 ~ E42  11097 MPa  0.15    0.7   lognormal  modulus of elasticity of concrete
A stand-alone finite element reliability analysis was performed. The bi-linear material model was employed to model the plastic hinges. The stiffness of the cross-section was evaluated by EI = E·b·h³/12, where b and h are the width and the depth of the section, and the value of E is smaller than in the linear case because concrete cracking is considered. For all the columns, the yield strain is ε_y = 0.84 and the strain hardening factor is α = 0.5. For all the beams, the yield strain is ε_y = 0.52 and the strain hardening factor is α = 0.3. At the mean realization of the random variables in Table 7.6 and with the initial design in Table 7.2, the lateral displacement at the roof level was 131 mm. The corresponding drift ratio was 131/23100 = 0.57%, which was less than the limit of 2%. A reliability analysis using FORM resulted in a reliability index β = 3.536 and the corresponding failure probability p_f(x_0) = 0.000203, which satisfied the prescribed reliability constraint. The total expected cost of the initial design was 54.080 m³.
The first optimization analysis was performed using the DSA-MOOA approach with the bi-linear material model. As outlined previously, the algorithm starts with the semi-infinite optimization analysis (task A1). In this case, convergence within task B3 was achieved for the first few iterations, namely when the number of constraints represented by ψ was low. However, the algorithm in task B3, the PolakHeNonlinMultilneqOpt, exhibited progressively slower convergence as the number of constraints increased. In fact, this problem made the algorithm grind to a halt. The presence of gradient discontinuities due to sudden yielding events of the bi-linear material models was taken to be the reason for this problem.
As a remedy to the above problem, a smoothed version of the bi-linear model introduced in Chapter 5 was substituted. A circular segment in a normalized stress-strain plane that started at 80% of the yield strength, γ = 0.8, was employed to smooth the bi-linear material, as illustrated in Figure 5.2. Remarkably, the analysis proceeded without any of the convergence problems described above. Convergence was achieved within 1 to 6 iterations for task B1, and within 1 to 65 iterations for task B3. This led us to conclude that the presence of a non-smooth response surface due to sudden yielding events was a serious impediment to the performance of the algorithm. Similar problems were also observed in the stand-alone reliability analysis. However, in our experience the problem was significantly amplified in the optimization analysis context.
After discretizing the ball in the MOOA algorithm by 75 points, or after 75 loops of tasks B1 to B3, the algorithm repeatedly produced the same design. At the design point, there were 12 reliability constraints. The tolerance of this solution to the "true" converged point was σ = 0.1/75^0.4 = 1.78×10⁻². The total cost was reduced from 54.080 m³ to 37.108 m³, and the failure probability was 0.00135, which satisfied the reliability constraint. In the next task, A2, importance sampling based on the new design variables was performed to get the "real" failure probability with a 2% coefficient of variation of the sampling result. The results were 37.127 m³ as the total cost and 0.00146 as the failure probability. The difference between the two failure probabilities (0.00135 from task A1 and 0.00146 from task A2) shows the nonlinearity of the structure. In task A3, the parameter t was updated, and the top level of the iteration (tasks A1 to A3) was repeated. After two more loops of tasks A1 to A3, the differences in the failure probabilities between tasks A1 and A2 were reduced and accepted, and the RBDO was stopped. The final total cost was 37.197 m³, and the failure probability was 0.00135.
The second optimization analysis was performed using the DSA-S approach with the smooth material model. The algorithm started with task C1. Convergence was achieved within 1 to 109 iterations for task D1, and within 1 to 13 iterations for task D2. We used the same tolerance to judge the consistent design (i.e., ε = 0.1/75² = 1.78×10⁻⁵). The consistent design was achieved after four loops of tasks D1 and D2. The DSA-S approach and the DSA-MOOA approach gave the same design: the total cost was 37.108 m³ and the failure probability was 0.00135. In the next task, C2, the DSA-S approach produced the same solution as task A2 in the DSA-MOOA approach using importance sampling with a 2% coefficient of variation. As in the DSA-MOOA approach, the top level of iteration (tasks C1 to C3) was repeated for two more loops, producing consistent designs. The optimization procedure was stopped at the total cost of 37.197 m³ and the failure probability of 0.00135.
Tables 7.7a and 7.7b show the results obtained from the two implemented approaches. The presented results include the values of the 12 design variables, the auxiliary parameter a, the failure probability p_f from the importance sampling with a 2% coefficient of variation, and the total expected costs corresponding to a and p_f. The first rows show the values of the initial design, while the following rows show the values of the optimal design. In each of these iterations, the parameter t is updated to account for nonlinearities in the limit-state function. After the first iteration, the value of t was updated as 1.0×Φ⁻¹(0.00135)/Φ⁻¹(0.00146) = 1.0077. The analysis was carried out for two more iterations. No appreciable difference in the design was observed. In the last row of the table, a and p_f converge to the same acceptable value, 0.00135. In effect, the objective function has reached the minimum value: 37.197 m³. Hence, the design variables in the last row constitute the optimal design.
Table 7.7a/b Results from RBDO analysis for Case 2 (selected columns; the full tables also list the remaining design variables b1 to h5)

             t       b6     h6     a         p_f       c_0+c_f·a  c_0+c_f·p_f
Initial      1.000   0.4    0.55   0.000203  0.000219  54.080     54.085
Iteration 1  1.0077  0.246  0.492  0.00135   0.00146   37.108     37.127
Iteration 2  1.0089  0.246  0.492  0.00135   0.00137   37.186     37.189
Iteration 3  1.0086  0.246  0.492  0.00135   0.00135   37.197     37.197
It is observed that the two approaches obtain an improved design that is close to the final solution already after the first top-level iteration. This can also be seen in Figure 7.5, where the total expected cost (objective function) is plotted as a function of the iteration number. This phenomenon shows that the structure is not highly nonlinear and that the implemented approaches are effective in dealing with the nonlinear problem.
Figure 7.5 Evolution of the total expected cost of the objective functions for Case 2 (plotted against loops of the top level)
Figure 7.6 shows the structural response for four characteristic realizations of design
variables and random variables. The response at the mean realization of random variables
of the initial (original) design is shown as the thin solid line. As expected, this response is
close to linear, because no significant damage (yielding) is anticipated for this realization.
At the MPP of the initial design, however, substantial yielding occurs. This is reasonable,
since this realization represents failure. Third, the structural response at the mean
realization of random variables for the optimal design is shown as the thick solid line.
This response has larger displacement than the initial design. Finally, the structural
response at the MPP of the optimal design is also shown. Again, significant nonlinearity
in the finite element response is observed. This response is similar to that of the initial
design, but it is not equal to it. This is reasonable, because the limit-state function is altered by changes in the structural design.
Figure 7.6 Structural responses for Case 2 (load factor versus roof displacement) at: (1) the mean point of the initial design; (2) the MPP of the initial design; (3) the mean point of the optimal design; and (4) the MPP of the optimal design.
The apparent lower value of the stiffness at
the MPP of the optimal design is explained as follows: for the optimal design a greater
reduction of the stiffness is required to "achieve" failure (i.e., to obtain the MPP). Again,
we observe that the optimal design has an acceptable reliability and a reduced total
expected cost. This serves as an indication of the usefulness of the RBDO approach.
Table 7.8 compares the computational cost of the two implemented approaches. The data in the table come from the first iteration, and are almost the same as the data from the second and third iterations. The DSA-S approach, requiring 6,195 limit-state function calls, appears to be more efficient than the DSA-MOOA approach, requiring 22,730 limit-state function calls using the FDM method. We have observed that the nonlinear case requires significantly more effort than the linear case, which only required 1,227 and 8,559 limit-state function calls, respectively. For the linear case, updates in the design do not dramatically change the corresponding MPP of the reliability analysis; there are only five reliability constraints after 75 loops of tasks B1 to B3. For the nonlinear case, however, the MPP of the reliability analysis clearly changes when a new design is found. In addition, in the nonlinear case there are 12 reliability constraints after 75 loops of tasks B1 to B3.
Table 7.8 Number of limit-state function calls in the RBDO analyses for Case 2

                     DSA-MOOA  DSA-S
Total (FDM)          22,730    6,195
Importance sampling  109,436   109,342
7.1.3 Nonlinear Pushover Analysis Using dispBeamColumn Elements and Fibre Sections
In this section we perform a nonlinear pushover analysis by using the dispBeamColumn element and the fibre section of OpenSees. To describe a better curvature distribution along the element, each original element was divided into four elements, with four integration points along each element. Each column and beam section was discretized into about 20 fibres to give an "approximately continuous" structural response. All of the fibres were described using bi-linear concrete and steel material models. In this nonlinear case, the 18 design variables are collected in the vector x = (b_1, h_1, b_2, h_2, b_3, h_3, b_4, h_4, b_5, h_5, b_6, h_6, A_1, A_2, A_3, A_4, A_5, A_6). In addition to the b and h defined in Cases 1 and 2, this case has the areas of steel bars A as design variables. The definitions and initial values of b, h, and A are described in Table 7.9.
"
unconfined concrete
20 fibers
confined concrete
20 fibers
reinforced steel layer
unconfined concrete
2 fibers
C^l'"
feel''"''
fy
^ 1 4 )
We assume that all random variables are lognormally distributed with the means and coefficients of variation listed in Table 7.10. The random variables are correlated with the correlation coefficient of 0.7 in several groups. More specifically, we have eight random variables for the confined concrete strength f'_cc and eight random variables for the modulus of elasticity of confined concrete E_cc. They are assigned to eight types of columns: first three-storey columns and top three-storey columns on the four axes A, B, C, and D. We also have 14 random variables for the unconfined concrete strength f'_c and 14 random variables for the modulus of elasticity of unconfined concrete E_c. They are assigned to the eight types of columns and six types of beams: first two-storey beams, middle two-storey beams, and top two-storey beams. In addition, we have 14 random variables for the steel bar strength f_y and 14 random variables for the modulus of elasticity of steel E_s, assigned to the eight types of columns and six types of beams.
Table 7.9 Definition and initial values of design variables for Case 3

Variable  Initial Value  Description
b1 × h1   0.45×0.45 m    width and depth of exterior columns of first three storeys
A1        0.003 m²       half of the area of reinforced bars of exterior columns of first three storeys
b2 × h2   0.45×0.45 m    width and depth of exterior columns of top three storeys
A2        0.003 m²       half of the area of reinforced bars of exterior columns of top three storeys
b3 × h3   0.50×0.50 m    width and depth of interior columns of first three storeys
A3        0.003 m²       half of the area of reinforced bars of interior columns of first three storeys
b4 × h4   0.50×0.50 m    width and depth of interior columns of top three storeys
A4        0.003 m²       half of the area of reinforced bars of interior columns of top three storeys
b5 × h5   0.40×0.60 m    width and depth of first three storeys' beams
A5        0.0024 m²      area of reinforced bars of first three storeys' beams
b6 × h6   0.40×0.55 m    width and depth of top three storeys' beams
A6        0.0024 m²      area of reinforced bars of top three storeys' beams
The limit-state function was defined in the same way as in Eq. (7.1). The objective function was described in terms of the total volume of the members. Because of the price difference between the two materials in the current market (the price of steel bars per cubic metre is 100 times the price of the concrete per cubic metre), the volume of the steel bars was accounted for by using its equivalent concrete volume, which was equal to 100 times the actual volume of the steel bars. Again, the cost of failure was assumed to be five times the initial volume. This led to the following objective function:

c_0(x) + c_f(x)·a = Σ_{i=1}^{6} L_i·(b_i·h_i + 100·A_i) + 5·( Σ_{i=1}^{6} L_i·(b_i·h_i + 100·A_i) )·a   (7.3)

where L_i represents the total length of the members in each of the six categories identified in the design vector. The reliability constraint was still prescribed as p_f(x) ≤ 0.00135.
The structural constraints were prescribed as 0 < b_i, h_i and 0.5 ≤ b_i/h_i ≤ 2 to ensure positive dimensions and appropriate aspect ratios, where i = 1, 2, 3, 4, 5, 6. The structural constraints for the areas of the steel bars were 0.01·b_i·h_i ≤ A_i ≤ 0.02·b_i·h_i for the columns, where i = 1, 2, 3, 4, and 0.008·b_i·h_i ≤ A_i ≤ 0.02·b_i·h_i for the beams, where i = 5, 6.
Table 7.10 Statistics of random variables for Case 3 (c.o.v. indicates the coefficient of variation, and c.c. indicates the correlation coefficient)

Variable       Mean        c.o.v.  c.c.  Type       Description
H1             28490 kN    0.15    0.7   lognormal  lateral load on floor 1
H2             48950 kN    0.15    0.7   lognormal  lateral load on floor 2
H3             70070 kN    0.15    0.7   lognormal  lateral load on floor 3
H4             89100 kN    0.15    0.7   lognormal  lateral load on floor 4
H5             109780 kN   0.15    0.7   lognormal  lateral load on floor 5
H6             131890 kN   0.15    0.7   lognormal  lateral load on roof
f'cc1 ~ f'cc8  39 MPa      0.15    0.7   lognormal  confined concrete strength
Ecc1 ~ Ecc8    9750 MPa    0.10    0.7   lognormal  modulus of elasticity of confined concrete
f'c1 ~ f'c14   30 MPa      0.15    0.7   lognormal  unconfined concrete strength
Ec1 ~ Ec14     15000 MPa   0.10    0.7   lognormal  modulus of elasticity of unconfined concrete
fy1 ~ fy14     400 MPa     0.15    0.7   lognormal  yield strength of steel bars
Es1 ~ Es14     200000 MPa  0.05    0.7   lognormal  modulus of elasticity of steel
A stand-alone finite element reliability analysis was performed. At the mean realization of the random variables in Table 7.10, with the initial design in Table 7.9, the lateral displacement at the roof level was 168 mm. The corresponding drift ratio was 168/23100 = 0.73%, which is less than the limit of 2%. A reliability analysis by FORM resulted in a reliability index β = 3.120 and the corresponding failure probability p_f(x_0) = 0.000903, which satisfied the prescribed reliability constraint. The total expected cost of the initial design was 130.753 m³, which was larger than in Cases 1 and 2
because the steel bars were accounted for by their equivalent concrete volume. The first optimization analysis was performed using the DSA-MOOA approach with the dispBeamColumn elements and fibre sections. The total expected cost was reduced from 130.753 m³ to 85.221 m³, and the failure probability was 0.00135, which satisfied the reliability constraint. Next, an importance sampling based on the new design variables was performed to get the "real" failure probability with a 2% coefficient of variation (task A2). The results were 85.327 m³ for the total cost and 0.00160 for the failure probability. The difference between the two failure probabilities (0.00135 from task A1 and 0.00160 from task A2) shows the nonlinearity of the structure. The parameter t was updated in task A3, and the top level of iteration (tasks A1 to A3) was repeated. After two more loops of tasks A1 to A3, the RBDO was stopped when the differences in the failure probabilities between tasks A1 and A2 were reduced to an accepted level. The final total cost was 85.663 m³, and the failure probability was 0.00135.
The second optimization analysis was performed using the DSA-S approach. The approach began from task C1. Convergence was achieved within 1 to 88 iterations for task D1, and within 1 to 17 iterations for task D2. We used the same tolerance as in the DSA-MOOA approach to judge the consistent design (i.e., ε = 0.1/75² = 1.78×10⁻⁵). An optimal design was achieved after five loops of tasks D1 and D2. The DSA-S approach and the DSA-MOOA approach produced the same design. The total cost was 85.221 m³, and the failure probability was 0.00135. In the next task, C2, the DSA-S approach produced the same solution as task A2 of the DSA-MOOA approach using importance sampling with a 2% coefficient of variation. As in the DSA-MOOA approach, the top level of iteration (tasks C1 to C3) was repeated for two more loops, and the designs were consistent. The entire optimization procedure was stopped at the total cost of 85.663 m³ and the failure probability of 0.00135.
Table 7.11a-c Results from RBDO analysis for Case 3 (selected columns; the full tables list all 18 design variables, e.g., b1 × h1 is reduced from 0.45×0.45 m to 0.225×0.450 m and b6 × h6 from 0.40×0.55 m to 0.335×0.526 m)

             t       a         p_f       c_0+c_f·a  c_0+c_f·p_f
Initial      1.000   0.000903  0.000941  130.753    130.778
Iteration 1  1.0177  0.00135   0.00160   85.221     85.327
Iteration 2  1.0182  0.00135   0.00136   85.652     85.655
Iteration 3  1.0182  0.00135   0.00135   85.663     85.663
Tables 7.11a, 7.11b, and 7.11c show the results obtained from the two implemented approaches. The presented results include the values of the 18 design variables, the auxiliary parameter a, the failure probability p_f from the importance sampling with a 2% coefficient of variation, and the total expected costs corresponding to a and p_f. The first rows show the values of the initial design, while the following rows show the values of the optimal design. In each of these iterations, the parameter t was updated to account for nonlinearities in the limit-state function. After the first iteration, the value of t was updated as 1.0×Φ⁻¹(0.00135)/Φ⁻¹(0.00160) = 1.0177. The analysis was carried out for two more iterations. No appreciable difference in the design was observed. In the last row, a and p_f converge to the same acceptable value, 0.00135. In effect, the objective function has reached the minimum value, 85.663 m³. Hence, the design variables in the last row constitute the optimal design.
Figure 7.8 Structural responses for Case 3 (load factor versus roof displacement) at: (1) the mean point of the initial design; (2) the MPP of the initial design; (3) the mean point of the optimal design; and (4) the MPP of the optimal design.
Figure 7.8 shows the structural response at four characteristic realizations of the design variables and random variables. The responses at the mean realization of the random variables and at the MPP of the random variables for the initial (original) design are shown as the thin solid line and the thin dashed line, respectively. The structural responses at the mean realization of the random variables and at the MPP of the random variables for the optimal design are shown as the thick solid line and the thick dashed line, respectively. The figure shows properties similar to those of the nonlinear case using beamWithHinges elements in Case 2.
Table 7.12 Number of limit-state function calls in the two implemented approaches

                                 DSA-MOOA             DSA-S
                                 DDM       FDM        DDM       FDM
Limit-state function only        5,471                1,300     7,618
With dg/du                       91        N/A        33        N/A
With dg/dx                       1,547     N/A        208       N/A
Importance sampling              109,436              101,848
Table 7.12 compares the efficiency of the FDM and the DDM, as well as the computational cost of the two implemented approaches, by measuring the number of calls to the limit-state function. We came to the same conclusion as in the linear case: using the DDM to compute the gradients is much more efficient than using the FDM, regardless of which optimization approach is adopted.
As shown in Table 7.12, the DSA-S approach, requiring 1,541 (1,300 + 33 + 208) limit-state function calls, is more efficient than the DSA-MOOA approach, which requires 7,109 (5,471 + 91 + 1,547) limit-state function calls using the DDM. This nonlinear case requires much more computational effort than the linear case, which needed 213 and 982 limit-state function calls for the DSA-S and DSA-MOOA approaches, respectively. However, the number of limit-state function calls is almost the same as that required in the nonlinear Case 2.
7.2 Observations and Practical Experiences
This section describes in further detail the observations and practical experiences that
have been gained from the case studies presented above. Comparisons are made between
two implemented optimization approaches (DSA-MOOA and DSA-S), between the FDM
and the DDM methods, and between linear and nonlinear pushover analyses. We also
make the observation that the convergence of the optimization procedure is significantly
improved by removing inactive constraints or by properly scaling the functions involved.
7.2.1 Comparison of Two Optimization Approaches
Both the DSA-MOOA and the DSA-S approaches are gradient-based, decoupled sequential optimization approaches. Because the reliability analysis and the optimization analysis are decoupled, users have the flexibility to choose any available reliability method and optimization algorithm according to their requirements. However, the two approaches behave differently with regard to convergence performance and computational time.
The two approaches use the same problem reformulation. The original problem and the reformulated problem are proven to be identical in the first-order approximation. The DSA-MOOA approach treats the reformulated problem as a semi-infinite optimization problem and solves it using the MOOA algorithm, whose solution converges as the number of reliability constraints grows to infinity. The DSA-S approach, on the other hand, treats the reformulated problem as an inequality-constrained optimization problem and solves it using the Polak-He algorithm, which can only find a consistent design without a proof of convergence. According to the case study results, the two approaches achieve the same solution if the analysis is stopped at the same tolerance.
When the convergence speed of the two approaches is compared, it can be seen that the DSA-S approach is much faster than the DSA-MOOA approach: the former needs only about 20% of the limit-state function calls of the latter. The DSA-S approach solves the final optimization problem using a single reliability constraint, while the DSA-MOOA approach expands the reliability constraints step by step, discretizing the ball with progressively more points to achieve a gradually more precise solution. Hence, 80% of the computational time in the DSA-MOOA approach is spent dealing with the discretization of points and a progressively larger reformulated problem.
In conclusion, the DSA-S approach is efficient and sufficiently accurate. However, if this approach fails to converge, the user has to rely on the DSA-MOOA approach, which is reliable but slow.
7.2.2 Comparison of Two Gradient Computation Methods
Two methods of computing response sensitivities in OpenSees are used in the case studies: the FDM and the DDM. This section compares the two methods in light of three requirements: consistency, accuracy, and efficiency.
Consistency refers to the computed sensitivities being consistent with the approximations made in computing the response itself. In the DDM, consistency is ensured by differentiating the time- and space-discretized finite element response equations (Haukaas & Der Kiureghian, 2005). The computation of the structural response and of the response gradient are both conducted within the finite element analysis. The FDM, on the other hand, simply computes the ratio of the difference in structural response to the perturbation.
Accuracy is important for response sensitivities, since the convergence of the reliability and optimization algorithms depends on it. Sensitivities computed by ordinary finite differences may not be sufficiently accurate to guarantee convergence of the solution algorithms (Haukaas & Der Kiureghian, 2005). The DDM ensures better accuracy than the FDM, since the DDM evaluates the exact derivatives of the approximate finite element response.
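To make the difference concrete, the forward finite-difference scheme underlying the FDM can be sketched as a generic Tcl procedure; the procedure and function names below are hypothetical and are not part of the OpenSees command set:

# Forward finite-difference approximation of dg/dx (illustrative sketch).
# gProc names a Tcl procedure that evaluates g; h is the perturbation size.
proc fdmGradient {gProc x {h 1.0e-6}} {
    set g0 [$gProc $x]
    set g1 [$gProc [expr {$x + $h}]]
    return [expr {($g1 - $g0) / $h}]
}

# Example: for g(x) = 0.1 - x*x the exact derivative at x = 2.0 is -4.0
proc gExample {x} { return [expr {0.1 - $x*$x}] }
puts [fdmGradient gExample 2.0]   ;# prints approximately -4.0

In the RBDO setting, every call to the procedure playing the role of gProc is a complete finite element analysis, and the result is only a ratio of response differences; the DDM avoids both the extra analyses and the sensitivity of the result to the perturbation size.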
7.2.3 Comparison of Linear and Nonlinear Pushover Analyses
Users usually choose between linear and nonlinear pushover analyses according to their requirements and their analysis capability. The comparison in this section shows the possible results and computational costs for each case, and can serve to guide users in deciding which analysis method to choose.
Table 7.13 presents the comparison between initial and optimal reliability indexes for
the three cases. All three initial reliability indexes are greater than 3.0 regardless of the
analysis model. This implies that the initial design is safe but may not be optimal.
Following the RBDO analysis, the reliability indexes go down to 3.0, which is the lower
bound of the reliability constraint.
The optimal total costs of the nonlinear cases are lower than that of the linear case. This is reasonable, since the linear analysis is based on the "equal displacement principle" and yields only approximate results, while the nonlinear analyses offer more "exact" results. However, from the structural design point of view, the results of the linear analysis are also acceptable.
Table 7.13 Initial and optimal reliability indexes, normalized total costs, and limit-state function calls for the three cases

                                      Initial β   Optimal β   Initial cost   Optimal cost   Calls
Case 1: Linear (elasticBeam)          3.646       3.0         1.0            0.715          982
Case 2: Nonlinear (beamWithHinges)    3.536       3.0         1.0            0.688          7,047
Case 3: Nonlinear (dispBeamColumn
        + fibre)                      3.121       3.0         1.0            0.651          7,109
We have also compared the computational costs of the linear and nonlinear cases. In the linear case, the DSA-MOOA and DSA-S analyses require only one iteration to achieve an acceptable design, while the nonlinear cases require three iterations. In the first iteration, the linear case makes 982 limit-state function calls, which is about 15% of the number required by the nonlinear analyses (about 7,000 calls). Hence, the linear analysis is much more efficient than the nonlinear analysis. For the linear case, the MPP of the reliability analysis does not change dramatically as the design is updated. In addition, there are only five reliability constraints after 75 loops of tasks B1 to B3. On the other hand, the MPP of the reliability analysis for the nonlinear cases changes appreciably when a new design is found. In addition, there are 12 to 17 reliability constraints after 75 loops of tasks B1 to B3.
In conclusion, nonlinear analyses produce "exact" and trustworthy optimal designs, while the linear analysis is efficient and its results are also acceptable. It is advisable to conduct a linear analysis for an optimal design first. If the user needs a more "exact" design, the nonlinear analysis can start from the results of the linear analysis.
7.2.4
Active a n d Inactive
Constraints
There are three categories of constraints in the reformulated optimization problem: the deterministic constraints f(x) ≤ 0, the reliability constraints ψ ≤ 0, and the failure probability constraint p_f ≤ p̄_f. At the optimal design point, the limit-state function satisfies g(d(x,u)) = 0, so the reliability constraints ψ = -g ≤ 0 hold as equalities, making these reliability constraints active. In Tables 7.4, 7.7, and 7.11, all failure probabilities at the optimal design reach the upper bound p̄_f = 0.00135, so this constraint is also active.
However, by observing the optimal results, we find that some of the deterministic constraints are not active. For example, in the nonlinear case using dispBeamColumn elements and fibre sections, only two of the six types of constraints are active. In this case, we define the following six constraints:

b > 0,   h > 0,   b/h ≤ 2,   b/h ≥ 0.5,   A_s ≥ ρ_min·b·h,   A_s ≤ ρ_max·b·h    (7.4)

where ρ_min and ρ_max denote the minimum and maximum reinforcement ratios. When the inactive deterministic constraints are removed, the final results are the same as those obtained with the full set of constraints, but the computational time is reduced to about 60-80% of the original time. The reduction in time stems from the reduced size of the vector g and the matrix G in the LSSOL analysis. In the two implemented approaches, the time is saved in tasks B3 and D1.
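As an illustration, constraints of the type in Eq. (7.4) could be expressed with the constraintFunction command described in Appendix B; the design variable numbering and the numerical values of ρ_min and ρ_max below are assumptions made for this sketch, with each expression written so that a satisfied constraint yields a non-positive value:

# Assumed mapping: {d_1} = b, {d_2} = h, {d_3} = As; rho values are examples
constraintFunction 1 "-{d_1}"                      ;# b > 0
constraintFunction 2 "-{d_2}"                      ;# h > 0
constraintFunction 3 "{d_1}/{d_2} - 2.0"           ;# b/h <= 2
constraintFunction 4 "0.5 - {d_1}/{d_2}"           ;# b/h >= 0.5
constraintFunction 5 "0.0033*{d_1}*{d_2} - {d_3}"  ;# As >= rho_min*b*h
constraintFunction 6 "{d_3} - 0.016*{d_1}*{d_2}"   ;# As <= rho_max*b*h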
In summary, removing inactive constraints can speed up the optimization procedure. If the user finds that some of the removed constraints are violated during the analysis, the user can stop the analysis and add the constraints back into the optimization to make sure that the solutions are correct.
7.2.5 Scaling
Both tasks B1/D2 and B3/D1 apply the Polak-He algorithm to solve an optimization
problem. The Polak-He algorithm requires the computation of the values and gradients of
the objective function, the deterministic constraints, and the reliability constraints. It has been observed that the way these functions are defined can affect the convergence speed.
Without scaling, the objective function is about 50 m³ to 130 m³, the deterministic constraints are between 0.5 and 2, and the reliability constraints are about 0.2 to 1.0. These values are not of the same order of magnitude. For the nonlinear analysis, task B1/D2 requires about 1 to 10 iterations to converge, while task B3/D1 requires several hundred or even several thousand iterations. This convergence speed is not acceptable. If scaling is applied to bring all involved functions (objective function, deterministic constraints, and reliability constraints) to approximately the same order 10⁰ = 1.0, the convergence performance of task B1/D2 remains the same, but the computational cost of task B3/D1 is reduced to fewer than one hundred iterations (and often fewer than 10 iterations). The functions involved in task B1/D2 already had the same order, so scaling did not affect that task. The benefit for task B3/D1 is apparent, because only 10% of the original computational time is required.
The gradients of the objective and constraint functions cannot be scaled directly, and it may not be possible to scale both the functions and their gradients properly. Scaling the values of the functions to the order of 1.0 brings the gradients to approximately the order of 10; this is better than function values of order 10 and gradients of order 10². The larger the difference between the vector and matrix entries in LSSOL, the more ill-conditioned the problem becomes. This is a general problem that cannot be fixed easily. It is recommended to apply scaling at the beginning of the process, when defining all of the functions.
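A minimal sketch of such scaling, assuming a volume-type cost of order 100 m³ and a constraint bound of 0.5, is to divide each expression by a characteristic magnitude when it is defined; the expressions themselves are placeholders, not values from the case studies:

# Unscaled definitions (orders of magnitude roughly 100 and 0.5):
#   costFunction 1 "{d_1}*{d_2}*50.0"
#   constraintFunction 4 "0.5 - {d_1}/{d_2}"
# Scaled to order 10^0 = 1.0 by dividing by characteristic values:
costFunction 1 "{d_1}*{d_2}*50.0/100.0"
constraintFunction 4 "(0.5 - {d_1}/{d_2})/0.5"

Dividing a constraint by its bound leaves its zero set unchanged, so the feasible domain is unaffected while the function values and gradients move toward order one.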
As mentioned in Chapter 6, another scaling technique can be used to speed up the convergence procedure: parameter b = -Φ⁻¹(a) is used in place of parameter a. With reference to Eq. (2.9), parameter b is a substitute for the reliability index β in the same way as parameter a is a substitute for the failure probability p_f(x). This substitution brings the auxiliary variable to the same order of magnitude as the design variables.
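In symbols, the substitution pairs both quantities with their standard normal counterparts (a restatement of the definitions above, in LaTeX notation):

b = -\Phi^{-1}(a), \qquad \beta = -\Phi^{-1}\big(p_f(x)\big)

For p_f near 0.00135, b and β are near 3.0, which is of the same order as typical design variable values.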
Chapter 8
Conclusions
8.1 Summary of Findings
The core of OpenSees consists of Domain and Analysis objects that perform the finite element analysis. OpenSees is then extended with the ReliabilityDomain and the corresponding reliability Analysis objects. This thesis further extends OpenSees with optimization capacities by adding several objects to the ReliabilityDomain, together with the DSA-SAnalysis and DSA-MOOAAnalysis objects. The modeling part defines the optimization problem and maps the design variables into the finite element model. The analysis part includes two RBDO approaches and several analysis tools. The extended OpenSees has the capacity to perform finite element analysis, reliability and sensitivity analyses, and optimization analysis for comprehensive real-world structures exhibiting nonlinear behaviour.
A numerical example involving nonlinear finite element analysis of a three-bay, six-storey building is used to demonstrate the implementations. In particular, the need for a structural response that is continuously differentiable with respect to the finite element model parameters is emphasized. The linear pushover analysis uses elasticBeam elements, while the nonlinear pushover analysis using beamWithHinges elements suffers from non-convergence caused by the bi-linear steel material model. This issue is cured by using the smooth steel material model, in which a circular segment starting at 80% of the yield strength is employed to smooth the bi-linear material. The nonlinear pushover analysis using dispBeamColumn elements with fibre sections avoids the non-convergence issue by utilizing smooth steel materials and discretized concrete fibre sections.
The observations and practical experiences gained from the numerical case studies are summarized as follows. It was found that the DSA-MOOA and the DSA-S approaches achieve the same solution. Yet, while the convergence of the DSA-MOOA can be proven theoretically, that of the DSA-S cannot. When convergence speeds were compared, it was found that the DSA-S needed only about 20% of the limit-state function calls required by the DSA-MOOA. Thus, the DSA-S
can find an optimal design more efficiently. However, if this algorithm fails to converge, the user must use the more reliable DSA-MOOA approach.
It was observed that the linear case required only 15% of the limit-state function calls used by the nonlinear analyses. Nonlinear analyses produced more "exact" optimal designs, but the linear case was more efficient while also producing acceptable results. It is thus suggested that a linear analysis be conducted first to find an optimal design. If a more "exact" design is then required, the nonlinear analysis can begin from the results of the linear analysis.
Some of the deterministic constraints were observed to be inactive in the optimization process. Removing them speeds up the optimization procedure and saves about 20-40% of the original computational time. If the user finds that some of the removed constraints are violated in the analysis, the user can stop the analysis and add these constraints back into the optimization to make sure the solutions are correct.
Finally, due to the use of the Polak-He algorithm, which has only linear convergence properties, it is necessary to scale all involved functions properly (including the objective function, the deterministic constraints, and the reliability constraints) to approximately the same order 10⁰ = 1.0. The benefit of this scaling is apparent in the optimization loop, since only 10% of the original computational time is then used. Another scaling is also suggested, namely using the reliability index instead of the failure probability as the auxiliary variable in the analysis, because the failure probability is too small and is not of the same order of magnitude as the design variables. In summary, convergence in the RBDO can be accelerated by properly scaling the involved functions and by using the reliability index instead of the failure probability.
8.2 Further Studies
Based on the experience obtained in this thesis, the implementation of series-system problems in the reliability constraints can be achieved in future work.
The analysis in this work is limited to static pushover finite element analysis. When
considering dynamic finite element analysis, time-variant reliability analysis must be
used to evaluate the failure probability. An available time-variant reliability analysis
method is the mean out-crossing reliability analysis. Furthermore, cyclic loading may
cause the degradation of the structural response. The application of RBDO to such
problems represents an important challenge for further work.
The definitions of initial costs and future costs are here made in terms of structural
volume or weight. More detailed cost computations are desirable. For instance, it is of
interest to include the present cost of future events. Such realistic considerations could be
an interesting future study.
In this thesis, we noticed the importance of proper scaling, which speeds up the convergence procedure and avoids non-convergence problems. However, we only scaled the values of the involved functions at the beginning of the analysis, and we did not know how to scale the gradients properly. It is thus suggested that future work develop an automatic scaling scheme that scales both the function values and the gradients, and that does so at each optimization step.
Bibliography
Agarwal, H. and Renaud, J. E. (2004). "Decoupled methodology for probabilistic design optimization." Proceedings.

Agarwal, H., Renaud, J. E., and Mack, J. D. (2003). "A decomposition approach for reliability-based multidisciplinary design optimization." Proceedings of the 44th AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference.

Benjamin, J. R. and Cornell, C. A. (1970). Probability, Statistics, and Decision for Civil Engineers. McGraw-Hill, New York.

Deitel, H. M. and Deitel, P. J. (1998). C++ How to Program, 2nd edition. Prentice Hall, Inc., Upper Saddle River, NJ.

Der Kiureghian, A., Zhang, Y., and Li, C.-C. (1994). "Inverse reliability problem." Journal of Engineering Mechanics, ASCE, 120, 1154-1159.

Ditlevsen, O. and Madsen, H. (1996). Structural Reliability Methods. Wiley, New York, NY.

Du, X. and Chen, W. (2002). "Sequential optimization and reliability assessment method for efficient probabilistic design." ASME Design Engineering Technical Conferences, 28th Design Automation Conference.

Gasser, M. and Schueller, G. (1998). "Some basic principles in reliability-based optimization (RBO) of structures and mechanical components." Stochastic Programming Methods and Technical Applications, K. Marti and P. Kall (Eds.), Lecture Notes in Economics and Mathematical Systems, Springer.

Haukaas, T. and Der Kiureghian, A. (2004). Finite Element Reliability and Sensitivity Methods for Performance-Based Engineering. Report No. PEER 2003/14, Pacific Earthquake Engineering Research Center, University of California, Berkeley, CA.

Kirjner-Neto, C., Polak, E., and Der Kiureghian, A. (1998). "An outer approximations approach to reliability-based optimal design of structures." Journal of Optimization Theory and Applications, 98(1), 1-17.

Kuschel, N. and Rackwitz, R. (2000). "A new approach for structural optimization of series systems." Proceedings of the International Conference on Applications of Statistics and Probability (ICASP) in Civil Engineering Reliability and Risk Analysis, R. E. Melchers and M. G. Stewart (Eds.).

Liu, P.-L. and Der Kiureghian, A. (1986). "Multivariate distribution models with prescribed marginals and covariances." Probabilistic Engineering Mechanics, 1(2), 105-112.

Madsen, H. and Friis Hansen, P. (1992). "A comparison of some algorithms for reliability-based structural optimization and sensitivity analysis." Reliability and Optimization of Structural Systems.

Sexsmith, R. G. (1983). "Bridge risk assessment and protective design for ship collision." IABSE Colloquium Copenhagen 1983 - Ship Collision with Bridges and Offshore Structures, Preliminary Report, V42, 425-433, Copenhagen, Denmark.

Symposium on Multidisciplinary Analysis and Optimization, AIAA Paper 98-4800, St. Louis, Missouri.

Wang, L. and Kodiyalam, S. (2002). "An efficient method for probabilistic and robust design with non-normal distributions." Proceedings of the 43rd AIAA/ASME/ASCE/AHS/ASC Structures, Structural Dynamics, and Materials Conference.

Welch, B. B. (2000). Practical Programming in Tcl and Tk, 3rd edition. Prentice Hall, Inc., Upper Saddle River, NJ.

Zhang, Y. and Der Kiureghian, A. (1997). Finite Element Reliability Methods for Inelastic Structures. Report No. UCB/SEMM-97/05, Department of Civil and Environmental Engineering, University of California, Berkeley, CA.
A1: Calling Fortran Routines from C++
In the Polak-He algorithm, a Fortran 77 program (LSSOL) is used to solve a sub-optimization problem and to find the search direction. Therefore, a technique for calling Fortran routines from OpenSees (in C++) is required. This section introduces several methods to implement this mixed-language technique, focusing especially on how to pass and return variables and arrays between C++ and Fortran.
1. The extern "C" directive is used to declare the external Fortran subroutine LSSOL in C++:

#ifdef _WIN32
extern "C" void LSSOL(int *m, double *c, double *A, double *obj, double *x);
#else
extern "C" void lssol_(int *m, double *c, double *A, double *obj, double *x);
#endif

where m is an input variable, c is an input one-dimensional array, and A is an input two-dimensional array; they are passed from C++ to Fortran. obj is a returned variable and x is a returned one-dimensional array; both are returned from Fortran to C++. Note that the variables and arrays listed above are only examples and the argument list is not complete.
2. When C++ calls Fortran, the reference to Fortran symbols is specified in lowercase
letters, since C++ is a case sensitive language, but Fortran is not.
3. C++ passes variables by value, while Fortran passes them by reference. It is necessary to specify in the C++ code that the Fortran subroutine expects call-by-reference arguments, using the address-of operator & (Gobbo, 1999). An example of passing the variable m = 10 from C++ to Fortran and returning the variable obj = 20.0 from Fortran to C++ is shown below:

// Define the variables m and obj
m = 10;
obj = 0.0;
// Call LSSOL, passing addresses so the Fortran routine can modify the arguments
#ifdef _WIN32
LSSOL(&m, &obj);
#else
lssol_(&m, &obj);
#endif

As a returned variable, obj = 20.0 can then be used directly in C++.
4. C++ passes arrays using pointers, while Fortran passes them by reference. In addition, C++ stores arrays in row-major order, whereas Fortran stores arrays in column-major order. Finally, the lower bound of array indices is 0 in C++ but 1 in Fortran (Gobbo, 1999). For instance, given an array "fun", the Fortran array element fun(1,1) is the same as the C++ array element fun[0][0], and the Fortran array element fun(6,8) corresponds to the C++ array element fun[7][5]. An example of passing the one-dimensional array c = [1.1 1.2] and the two-dimensional array A = [2.1 2.3; 2.2 2.4] from C++ to Fortran and returning the one-dimensional array x = [3.1 3.2] from Fortran to C++ is shown below:
// Prepare input data for LSSOL; A is stored column by column
// (column-major), matching the Fortran storage order
c[0] = 1.1;
c[1] = 1.2;
A[0] = 2.1;
A[1] = 2.2;
A[2] = 2.3;
A[3] = 2.4;
x[0] = 0.0;
x[1] = 0.0;
// Call LSSOL
#ifdef _WIN32
LSSOL(c, A, x);
#else
lssol_(c, A, x);
#endif

As a returned array, x[0] = 3.1 and x[1] = 3.2 can then be used directly in C++.
A2: Building LSSOL.LIB
LSSOL is compiled using the Intel(R) Visual Fortran Compiler for Windows, standard edition, which is freely available from the Intel website (http://www.intel.com/software/products/compilers/).

Compiler settings: LSSOL > Properties > Fortran > Libraries > Use Common Windows Libraries: Yes
A3:
[Library files; the first two file names were not recoverable:]
(Size: 7 KB)
(Size: 30 KB)
LIBIFCORE.LIB (Size: 955 KB)
LIBIFPORT.LIB (Size: 412 KB)
Install CVS software (required to download OpenSees from the CVS repository):
1. Download the files cvs-1-11-5.zip and cvslogin.bat
2. Unzip cvs-1-11-5.zip and run the installation file cvs-1-11-5.exe
3. Restart the computer
1. Open a DOS window (e.g., Start > Programs > Accessories > Command Prompt)
2. "cd" into the folder where you have put the cvslogin.bat file
3. Execute the "cvslogin" command
4. Note that steps 2 and 3 above can be replaced by issuing the following commands:
set CVS_RSH=ssh
set CVSROOT=:pserver:anonymous@opensees.berkeley.edu:/usr/local/cvs
cvs login
5. When prompted, provide the password "anonymous"
6. Go to the directory where you want to put the OpenSees code
7. Give the command "cvs checkout OpenSees"
Later, when updating the code with the most recent changes in the CVS repository, you can follow steps 1 to 6 and then give the command "cvs -q update -d" (-q suppresses output, -d checks out any new directories). It may be a good idea to do this directory by directory in the SRC directory. The command "cvs diff" is used to list differences between local files and the CVS repository files. When doing updates, the following abbreviations identify the action taken for each file:
M: the local copy has been modified
P: changes on the server were merged with the local copy
C: there is a conflict between the server version and the local copy
U: a new file that is not part of the local copy was checked out
1. Make sure the "include path" for tcl.h is correct in the projects damage, database, domain, element, material, recorder, reliability, and openSees by doing the following:
a) Right-click on the project and choose "Settings > C/C++ > Preprocessor"
Put all files listed in Tables A.1a, A.1b, and A.1c into their respective directories. For new classes, remember to include the files in the appropriate project according to the "location of file" column provided in Tables A.1a, A.1b, and A.1c.
How to identify the differences between local files and the "official" version at Berkeley
There are two ways of identifying the differences between the files that have been modified by the "UBC team" and the official Berkeley files:
1. Download the files, include them in the local OpenSees version, and use the "diff" feature of CVS to see the differences. (Give the command "cvs diff" in the relevant directory.)
2. Search for the text string "UBC Team." All UBC team modifications are marked with this stamp.
Table A.1a New and modified classes for extending RBDO (classes that do not exist in the "official" version are marked with *)

Project: OpenSees
  Location of file: Source header files
  Files: classTags.h

Project: Reliability
  Location of file: analysis/types
  Files: commands.cpp, commands.h, DSA_MOOAOptimizationAnalysis.cpp*, DSA_MOOAOptimizationAnalysis.h*, DSA_SOptimizationAnalysis.cpp*, DSA_SOptimizationAnalysis.h*

  Location of file: domain/components
  Files: ReliabilityDomain.cpp, ReliabilityDomain.h, ConstraintFunction.cpp*, ConstraintFunction.h*, CostFunction.cpp*, CostFunction.h*, DesignVariable.cpp*, DesignVariable.h*, DesignVariablePositioner.cpp*, DesignVariablePositioner.h*, ObjectiveFunction.cpp*, ObjectiveFunction.h*
Table A.1b New and modified classes for extending RBDO (continued) (classes that do not exist in the "official" version are marked with *)

Project: Reliability
  Location of file: analysis/designPoint
  Files: NonlinSingleIneqOpt.cpp*, NonlinSingleIneqOpt.h*, PolakHeNonlinSingleIneqOpt.cpp*, PolakHeNonlinSingleIneqOpt.h*, NonlinMultiIneqOpt.cpp*, NonlinMultiIneqOpt.h*, PolakHeNonlinMultiIneqOpt.cpp*, PolakHeNonlinMultiIneqOpt.h*, LinMultiIneqOpt.cpp*, LinMultiIneqOpt.h*, LSSOLLinMultiIneqOpt.cpp*, LSSOLLinMultiIneqOpt.h*

  Location of file: analysis/gFunction
  Files: GFunEvaluator.cpp, GFunEvaluator.h, OpenSeesGFunEvaluator.cpp, OpenSeesGFunEvaluator.h

  Location of file: analysis/sensitivity
  Files: GradGEvaluator.h, FiniteDifferenceGradGEvaluator.cpp, FiniteDifferenceGradGEvaluator.h, OpenSeesGradGEvaluator.cpp, OpenSeesGradGEvaluator.h

  Location of file: FEsensitivity
  Files: SensitivityAlgorithm.cpp

  Location of file: tcl
  Files: TclReliabilityBuilder.cpp

Project: Element
  Files: Information.cpp, TclElementCommands.cpp

  Location of file: dispBeamColumn
  Files: DispBeamColumn2d.cpp

  Location of file: beamWithHinges
  Files: BeamWithHinges2d_bh.cpp*, BeamWithHinges2d_bh.h*, TclBeamWithHingesBuilder.cpp

  Location of file: elasticBeamColumn
  Files: ElasticBeam2d_bh.cpp*, ElasticBeam2d_bh.h*, TclElasticBeamCommand.cpp
Table A.1c New and modified classes for extending RBDO (continued) (classes that do not exist in the "official" version are marked with *)

Project: Material
  Location of file: uniaxial
  Files: Steel01_epsy.cpp*, Steel01_epsy.h*, SmoothSteel01_epsy.cpp*, SmoothSteel01_epsy.h*, ElasticPPMaterial_Fy.cpp*, ElasticPPMaterial_Fy.h*, SmoothElasticPPMaterial_Fy.cpp*, SmoothElasticPPMaterial_Fy.h*, TclModelBuilderUniaxialMaterialCommand.cpp

  Location of file: section
  Files: FiberSection2d.cpp, RCFiberSection2d.cpp*, RCFiberSection2d.h*, TclModelBuilderSectionCommand.cpp
This appendix contains the user guide to the new implementations of reliability-based design optimization (RBDO). It is a complement to the user guide to the reliability and sensitivity analyses in Haukaas and Der Kiureghian (2004). The optimization commands used in this section have the same format as in Haukaas and Der Kiureghian (2004). An example of a command is:

commandName arg1? arg2? arg3? <arg4? ...>
B1: RBDO Modeling
This section describes how to define design variables and functions involved in the
RBDO analysis. The object mapping design variables into the finite element domain is
also introduced.
A design variable object defines design variables by giving their start points through
the following command:
designVariable tag? startPt?
The tag argument indicates the identification number of the design variable. These
objects must be ordered in a consecutive and uninterrupted manner. The startPt
argument allows the user to specify a value for the design variable to be used as the start
point in the search for the design point (Haukaas & Der Kiureghian, 2004).
A design variable positioner object is used to map the design variables into structural properties in the finite element model through the following command:

designVariablePositioner tag? -dvNum dvNum? (...parameter identification...)
The tag argument indicates the identification number of the design variable positioner.
The dvNum argument indicates the identification number of the pre-defined design
variable. The parameter identification alternatives in the command are exactly the same
as in the random variable positioner command in Haukaas and Der Kiureghian (2004).
A constraint function object defines constraint functions using user-defined
expressions through the following command:
constraintFunction tag? "expression"
The tag argument indicates the identification number of the constraint function. The
expression must be enclosed in double quotes and can be any analytical expression that
can be evaluated by the Tcl interpreter (Welch, 2000). This function may be expressed by
various quantities including random variables, design variables, structural response
quantities from an OpenSees finite element analysis, and parameters defined in the Tcl
interpreter (Haukaas & Der Kiureghian, 2004). The syntax used in this command is the
same as that in the performance function command in Haukaas and Der Kiureghian
(2004). An example of the syntax for design variables is {d_1}, which refers to the first design variable.
A cost function object defines cost functions using user-defined expressions through
the following command:
costFunction tag? "expression"
The tag argument indicates the identification number of the cost function. The
expression has the same properties as that in the constraint function command. However,
only design variables and parameters defined in the Tcl interpreter are employed as
quantities in this expression.
An objective function object is created from the cost functions. The tag argument indicates the identification number of the objective function. A standard objective function is created in the following way: objective function = 1st cost function + failure probability × 2nd cost function, where the failure probability is passed from the ReliabilityDomain each time the objective function object is called.
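A minimal modeling sketch combining the commands above is given below. The tags, start points, expressions, and the parameter identification in the positioner command are illustrative assumptions following the syntax of Haukaas and Der Kiureghian (2004), not values from the case studies:

# Two design variables with assumed start points
designVariable 1 0.45
designVariable 2 0.50

# Map design variable 1 into the finite element model
# (the parameter identification shown is an assumed example)
designVariablePositioner 1 -dvNum 1 -element 1 E

# Deterministic constraint f(x) <= 0: aspect ratio limit b/h <= 2
constraintFunction 1 "{d_1}/{d_2} - 2.0"

# Initial cost and cost of failure (placeholder expressions); the
# standard objective function becomes cost 1 + failure probability x cost 2
costFunction 1 "{d_1}*{d_2}*100.0"
costFunction 2 "{d_1}*{d_2}*500.0"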
B2: Analysis Tools
Before an RBDO analysis is executed, the user must create an aggregation of the necessary analysis components or tools. Which analysis components are needed depends on the analysis type. The order in which the tools are provided is important, since some tools make use of other tools. The user is notified by an error message if dependencies are violated (Haukaas & Der Kiureghian, 2004).
A nonlinSingleIneqOpt object is created to be responsible for solving nonlinear single-inequality-constrained optimization problems. This object promises to solve tasks B1 and D2. The corresponding command reads:

nonlinSingleIneqOpt PolakHe -alpha arg1? -beta arg2? -gamma arg3? -delta arg4?

This type of optimization problem is solved by the Polak-He algorithm. In the Polak-He algorithm, arg1 denotes the parameter alpha (default = 0.5), arg2 denotes the parameter beta (default = 0.8), arg3 denotes the parameter gamma (default = 2.0), and arg4 denotes the parameter delta (default = 1.0).
A linMultiIneqOpt object is created to be responsible for solving linear multi-inequality-constrained sub-optimization problems. The corresponding command reads:

linMultiIneqOpt LSSOL

In order to find the search direction for tasks B3 and D1, a quadratic sub-optimization problem with linear constraints must be solved. This sub-optimization problem is handled by the linMultiIneqOpt object. Currently, the Fortran 77 program LSSOL is called to solve this problem and is the only available implementation of this object in OpenSees.
A nonlinMultiIneqOpt object is created to be responsible for solving nonlinear multi-inequality-constrained optimization problems. This object promises to solve tasks B3 and D1. The corresponding command reads:

nonlinMultiIneqOpt PolakHe -alpha arg1? -beta arg2? -gamma arg3? -delta arg4?

This type of optimization problem is solved using the Polak-He algorithm. A quadratic sub-optimization problem with linear constraints must be solved in this object to find the search direction. Therefore, a linMultiIneqOpt object must be created before the nonlinMultiIneqOpt object can be instantiated. arg1 to arg4 are user-defined parameters of the Polak-He algorithm and have the same definitions as the parameters in the nonlinSingleIneqOpt object.
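For example, the analysis tools could be aggregated as follows, using the default parameter values quoted above; the ordering matters, since the linMultiIneqOpt object must exist before the nonlinMultiIneqOpt object that depends on it:

# Solver for tasks B1 and D2 (Polak-He with default parameters)
nonlinSingleIneqOpt PolakHe -alpha 0.5 -beta 0.8 -gamma 2.0 -delta 1.0

# Quadratic sub-problem solver, required by the next tool
linMultiIneqOpt LSSOL

# Solver for tasks B3 and D1
nonlinMultiIneqOpt PolakHe -alpha 0.5 -beta 0.8 -gamma 2.0 -delta 1.0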
B3: Analysis Execution
Two analysis types are available in the optimization module of OpenSees. This section describes the corresponding commands to execute them. The required analysis tools must be specified prior to using any of these commands. During the course of an RBDO analysis, status information may be printed to a file or to the computer monitor. The complete results from a successful analysis are printed to an output file whose name is specified by the user, as shown below (Haukaas & Der Kiureghian, 2004).
A DSA-MOOA analysis object is the top level of the DSA-MOOA approach and is responsible for obtaining the optimal design by orchestrating tasks A1 to A3. This object is executed using the following command:

... arg3? -maxIterInner arg4? -numSimulation arg5? -targetCOV arg6?

The order of the arguments is arbitrary. arg1 denotes the bound on the failure probability, expressed as a lower bound on the reliability index (default = 3.0), arg2 denotes the target total expected failure cost, arg3 denotes the maximum number of iterations at the top level (A1 to A3), and arg4 denotes the maximum number of iterations in task B3. arg5 and arg6 are input parameters for the importance sampling in task A2: arg5 denotes the maximum number of simulations (default = 10⁶), while arg6 denotes the target coefficient of variation (default = 2%).

A DSA-S analysis object is executed using a similar command:

... arg5? -targetCOV arg6?

The input data and the results of this analysis type are exactly the same as those in the DSA-MOOA analysis, except for arg3 and arg4: arg3 denotes the maximum number of iterations at the top level (C1 to C3), while arg4 denotes the maximum number of iterations in task D1.
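Putting the pieces together, an analysis execution might look as follows. The command names runDSA_MOOAAnalysis and runDSA_SAnalysis, the output file names, and the flags not shown above (-beta, -targetCost, -maxIterOuter) are assumptions made for this sketch, since those parts of the syntax are truncated; the remaining flags follow the descriptions given:

# DSA-MOOA analysis (command name and first three flags assumed)
runDSA_MOOAAnalysis output_MOOA.out -beta 3.0 -targetCost 85.0 \
    -maxIterOuter 10 -maxIterInner 200 -numSimulation 1000000 -targetCOV 0.02

# DSA-S analysis (same inputs; the iteration limits now bound C1 to C3 and task D1)
runDSA_SAnalysis output_S.out -beta 3.0 -targetCost 85.0 \
    -maxIterOuter 10 -maxIterInner 200 -numSimulation 1000000 -targetCOV 0.02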