
Technical University of Civil Engineering of Bucharest

Reinforced Concrete Department


STRUCTURAL RELIABILITY AND RISK ANALYSIS
Lecture notes

Foreword

These lecture notes provide an insight into the concepts, methods and procedures of
structural reliability and risk analysis in the presence of random uncertainties. The
course is addressed to undergraduate students of the Faculty of Engineering in Foreign
Languages, instructed in English, as well as to postgraduate students in structural
engineering. The objectives of the course are:
- to provide a review of mathematical tools for quantifying uncertainties using theories
of probability, random variables and random processes;
- to develop the theory of methods of structural reliability based on concept of reliability
indices;
- to explain the basics of code calibration;
- to evaluate actions on buildings and structures due to natural hazards;
- to provide the basics of risk analysis;
- to provide the necessary background to carry out reliability based design and risk-
based decision making and to apply the concepts and methods of performance-based
engineering;
- to prepare the ground for students to undertake research in this field.

The content of the lectures in Structural Reliability and Risk Analysis is:
- Introduction to probability and random variables; distributions of probability
- Formulation of reliability concepts for structural components; exact solutions, first-
order reliability methods; reliability indices; basis for probabilistic design codes
- Seismic hazard analysis
- Introduction to the topic of random processes
- Dynamic stochastic response of single degree of freedom systems applications to
wind and earthquake engineering.


Prof. Radu Văcăreanu, Ph.D.
Table of Contents

1. INTRODUCTION TO RANDOM VARIABLES THEORY ................................................ 5
1.1. Data samples ................................................................................................................... 5
1.2. Sample mean and sample variance .................................................................................. 8
1.3. Probability ....................................................................................................................... 8
1.4. Random variables ............................................................................................................ 9
1.5. Mean and variance of a distribution .............................................................................. 10

2. DISTRIBUTIONS OF PROBABILITY .............................................................................. 13
2.1. Normal distribution ....................................................................................................... 13
2.2. Log-normal distribution ................................................................................................ 16
2.3. Extreme value distributions ........................................................................................... 19
2.3.1. Gumbel distribution for maxima in 1 year ............................................................. 20
2.3.2. Gumbel distribution for maxima in N years ........................................................... 22
2.4. Mean recurrence interval ............................................................................................... 25
2.5. Second order moment models ....................................................................................... 27

3. STRUCTURAL RELIABILITY ANALYSIS ..................................................................... 30
3.1. The basic reliability problem ......................................................................................... 30
3.2. Special case: normal random variables ......................................................................... 32
3.3. Special case: log-normal random variables ................................................................... 33
3.4. Partial safety coefficients .............................................................................................. 35

4. SEISMIC HAZARD ANALYSIS ........................................................................................ 37
4.1. Deterministic seismic hazard analysis (DSHA) ............................................................ 37
4.2. Probabilistic seismic hazard analysis (PSHA) .............................................................. 38
4.3. Earthquake source characterization ............................................................................... 39
4.4. Predictive relationships (attenuation relations) ............................................................. 41
4.5. Temporal uncertainty .................................................................................................... 42
4.6. Probability computations............................................................................................... 42

5. INTRODUCTION TO RANDOM PROCESSES THEORY .............................................. 44
5.1. Background ................................................................................................................... 44
5.2. Average properties for describing internal structure of a random process ................... 45
5.3. Main simplifying assumptions ...................................................................................... 47
5.4. Probability distribution .................................................................................................. 51
5.5. Other practical considerations ....................................................................................... 53

6. POWER SPECTRAL DENSITY OF STATIONARY RANDOM FUNCTIONS .............. 54
6.1. Background and definitions .......................................................................................... 54
6.2. Properties of first and second time derivatives ............................................................. 57
6.3. Frequency content indicators ........................................................................................ 57
6.4. Wide-band and narrow-band random process ............................................................... 59
6.4.1. Wide-band processes. White noise ......................................................................... 59
6.4.2. Narrow band processes ........................................................................................... 61
6.5. Note on the values of frequency content indicators ...................................................... 63
7. DYNAMIC RESPONSE OF SDOF SYSTEMS TO RANDOM PROCESSES ................. 69
7.1. Introduction ................................................................................................................... 69
7.2. Single degree of freedom (SDOF) systems ................................................................... 70
7.2.1. Time domain .......................................................................................................... 71
7.2.2. Frequency domain .................................................................................................. 71
7.3. Excitation-response relations for stationary random processes .................................... 73
7.3.1. Mean value of the response .................................................................................... 73
7.3.2. Input-output relation for spectral densities ............................................................. 74
7.3.3. Mean square response ............................................................................................ 74
7.4. Response of a SDOF system to stationary random excitation ...................................... 75
7.4.1. Response to band limited white noise .................................................................... 75
7.4.2. SDOF systems with low damping .......................................................................... 76
7.4.3. Distribution of the maximum (peak) response values ............................................ 80

8. STOCHASTIC MODELLING OF WIND ACTION .......................................................... 87
8.1. General .......................................................................................................................... 87
8.2 Reference wind velocity and reference velocity pressure .............................................. 87
8.3 Probabilistic assessment of wind hazard for buildings and structures ........................... 88
8.4 Terrain roughness and variation of the mean wind with height .................................... 92
8.5. Stochastic modelling of wind turbulence ...................................................................... 94
8.6 Gust factor for velocity pressure .................................................................................... 96
8.7 Exposure factor for peak velocity pressure .................................................................... 97
References ........................................................................................................................ 99
1. INTRODUCTION TO RANDOM VARIABLES THEORY

1.1. Data samples
If one performs a statistical experiment one usually obtains a sequence of observations. A
typical example is shown in Table 1.1. These data were obtained by making standard tests for
concrete compressive strength. We thus have a sample consisting of 30 sample values, so that
the size of the sample is n=30.

Table 1.1. Sample of 30 values of the compressive strength of concrete, daN/cm²
320 380 340
350 340 350
370 390 370
320 350 360
380 360 350
420 400 350
360 330 360
360 370 350
370 400 360
340 360 390

The statistical relevance of the information contained in Table 1.1 is revealed by ordering
the data in ascending order in Table 1.2 (320, 330 and so on). The number of occurrences of
each value from Table 1.1 is listed in the second column of Table 1.2. It indicates how often the
corresponding value x occurs in the sample and is called the absolute frequency of that value x in
the sample. Dividing it by the size n of the sample, one obtains the relative frequency listed in
the third column of Table 1.2.
If for a certain value x one sums all the absolute frequencies corresponding to the sample
values which are smaller than or equal to that x, one obtains the cumulative frequency
corresponding to that x. This yields the values listed in column 4 of Table 1.2. Division by the
size n of the sample yields the cumulative relative frequency in column 5 of Table 1.2.
The graphical representation of the sample values is given by histograms of relative
frequencies and/or of cumulative relative frequencies (Figure 1.1 and Figure 1.2).
If a certain numerical value does not occur in the sample, its frequency is 0. If all the n values
of the sample are numerically equal, then this number has the frequency n and the relative
frequency is 1. Since these are the two extreme possible cases, one has:
- the relative frequency is at least equal to 0 and at most equal to 1;
- the sum of all relative frequencies in a sample equals 1.
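The tabulation just described can be reproduced with a short Python sketch (an illustration added here, not part of the original notes), using the sample of Table 1.1:

```python
from collections import Counter

# Sample of 30 concrete compressive strengths, daN/cm^2 (Table 1.1)
sample = [320, 380, 340, 350, 340, 350, 370, 390, 370, 320,
          350, 360, 380, 360, 350, 420, 400, 350, 360, 330,
          360, 360, 370, 350, 370, 400, 360, 340, 360, 390]
n = len(sample)

abs_freq = Counter(sample)   # absolute frequency of each occurring value
cum = 0
for x in sorted(abs_freq):
    cum += abs_freq[x]       # cumulative frequency up to and including x
    print(x, abs_freq[x], round(abs_freq[x] / n, 3),
          cum, round(cum / n, 3))
```

Values that do not occur in the sample (such as 410) have frequency 0 and simply do not appear in the Counter; the printed rows otherwise reproduce the columns of Table 1.2.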

Table 1.2. Frequencies of values of random variable listed in Table 1.1

Compressive  Absolute   Relative   Cumulative  Cumulative relative
strength     frequency  frequency  frequency   frequency
320          2          0.067       2          0.067
330          1          0.033       3          0.100
340          3          0.100       6          0.200
350          6          0.200      12          0.400
360          7          0.233      19          0.633
370          4          0.133      23          0.767
380          2          0.067      25          0.833
390          2          0.067      27          0.900
400          2          0.067      29          0.967
410          0          0.000      29          0.967
420          1          0.033      30          1.000


Figure 1.1. Histogram of relative frequencies
Figure 1.2. Histogram of cumulative relative frequencies

If a sample consists of too many numerically different sample values, the process of grouping
may simplify the tabular and graphical representations, as follows (Kreyszig, 1979).
A sample being given, one chooses an interval I that contains all the sample values. One
subdivides I into subintervals, which are called class intervals. The midpoints of these
subintervals are called class midpoints. The sample values in each such subinterval are said to
form a class. The number of sample values in each such subinterval is called the
corresponding class frequency. Division by the sample size n gives the relative class
frequency. This frequency is called the frequency function of the grouped sample, and the
corresponding cumulative relative class frequency is called the distribution function of the
grouped sample.
If one chooses few classes, the distribution of the grouped sample values becomes simpler but
a lot of information is lost, because the original sample values no longer appear explicitly.
When grouping the sample values the following rules should be obeyed (Kreyszig, 1979):
- all the class intervals should have the same length;
- the class intervals should be chosen so that the class midpoints correspond to simple
numbers;
- if a sample value x_j coincides with the common point of two class intervals, one takes
it into the class interval that extends from x_j to the right.
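These grouping rules can be sketched in Python on the sample of Table 1.1; the class width of 20 daN/cm² and the interval origin of 310 are illustrative choices, not taken from the text:

```python
# Grouping the Table 1.1 sample into class intervals of equal length.
sample = [320, 380, 340, 350, 340, 350, 370, 390, 370, 320,
          350, 360, 380, 360, 350, 420, 400, 350, 360, 330,
          360, 360, 370, 350, 370, 400, 360, 340, 360, 390]
n = len(sample)
width = 20   # common length of all class intervals (illustrative)
lo = 310     # left end of the first class interval (illustrative)

classes = {}
for x in sample:
    # integer division sends a value lying on the common point of two
    # intervals into the interval extending from that value to the right
    k = (x - lo) // width
    mid = lo + k * width + width / 2          # class midpoint
    classes[mid] = classes.get(mid, 0) + 1    # class frequency

for mid in sorted(classes):
    print(mid, classes[mid], classes[mid] / n)  # frequency function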


1.2. Sample mean and sample variance
One may compute measures for certain properties of the sample, such as the average size of
the sample values, the spread of the sample values, etc.
The mean value of a sample x_1, x_2, …, x_n is denoted by x̄ (or m_x) and is defined by the
formula:

x̄ = (1/n)·Σ_{j=1}^{n} x_j                                  (1.1)

The mean value of the sample is simply the sum of all the sample values divided by the size n
of the sample. Obviously, it measures the average size of the sample values, and sometimes
the term average is used for x̄.
The variance of a sample x_1, x_2, …, x_n is denoted by s² and is defined by the formula:

s² = [1/(n-1)]·Σ_{j=1}^{n} (x_j - x̄)²                      (1.2)

The sample variance is the sum of the squares of the deviations of the sample values from the
mean x̄, divided by n-1. It measures the spread or dispersion of the sample values and is always
positive. The square root of the sample variance s² is called the standard deviation of the
sample and is denoted by s.
The coefficient of variation of a sample x_1, x_2, …, x_n is denoted by COV and is defined as the
ratio of the standard deviation of the sample to the sample mean:

COV = s/x̄                                                  (1.3)
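Equations (1.1) to (1.3), applied to the sample of Table 1.1, can be checked with a short Python sketch (added here for illustration):

```python
import math

# Sample statistics (1.1)-(1.3) for the 30 concrete strengths of Table 1.1
sample = [320, 380, 340, 350, 340, 350, 370, 390, 370, 320,
          350, 360, 380, 360, 350, 420, 400, 350, 360, 330,
          360, 360, 370, 350, 370, 400, 360, 340, 360, 390]
n = len(sample)

mean = sum(sample) / n                               # Eq. (1.1)
s2 = sum((x - mean) ** 2 for x in sample) / (n - 1)  # Eq. (1.2)
s = math.sqrt(s2)                                    # standard deviation
cov = s / mean                                       # Eq. (1.3)

print(round(mean, 2), round(s, 2), round(cov, 3))    # 361.67 23.21 0.064
```

The small coefficient of variation (about 6%) indicates a sample tightly clustered around its mean.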

1.3. Probability
A random experiment or random observation is a process that has the following properties
(Kreyszig, 1979):
- it is performed according to a set of rules that determines the performance completely;
- it can be repeated arbitrarily often;
- the result of each performance depends on chance (that is, on influences which we
cannot control) and therefore cannot be uniquely predicted.
The result of a single performance of the experiment is called the outcome of that trial.
The set of all possible outcomes of an experiment is called the sample space of the experiment
and will be denoted by S. Each outcome is called an element or point of S.
Experience shows that most random experiments exhibit statistical regularity or stability of
relative frequencies; that is, in several long sequences of such an experiment the
corresponding relative frequencies of an event are almost equal. Since most random
experiments exhibit statistical regularity, one may assert that for any event E in such an
experiment there is a number P(E) such that the relative frequency of E in a great number of
performances of the experiment is approximately equal to P(E). For this reason one postulates
the existence of a number P(E) which is called probability of an event E in that random
experiment. Note that this number is not an absolute property of E but refers to a certain
sample space S, that is, to a certain random experiment. The probability thus introduced is the
counterpart of the empirical relative frequency (Kreyszig, 1979).

1.4. Random variables
Roughly speaking, a random variable X (also called variate) is a function whose values are
real numbers and depend on chance (Kreyszig, 1979).
If one performs a random experiment and the event corresponding to a number a occurs, then
we say that in this trial the random variable X corresponding to that experiment has assumed
the value a. The corresponding probability is denoted by P(X=a).
Similarly, the probability of the event X assumes any value in the interval a<X<b is denoted
by P(a<X<b).
The probability of the event X ≤ c (X assumes any value smaller than c or equal to c) is
denoted by P(X ≤ c), and the probability of the event X > c (X assumes any value greater than c)
is denoted by P(X > c).
The last two events are mutually exclusive:
P(X ≤ c) + P(X > c) = P(-∞ < X < ∞) = 1                    (1.4)
The random variables are either discrete or continuous.
If X is any random variable, then for any real number x there exists the probability P(X ≤ x)
corresponding to X ≤ x (X assumes any value smaller than x or equal to x) that is a function of
x, which is called the cumulative distribution function of X, CDF, and is denoted by F(x). Thus
F(x) = P(X ≤ x)                                            (1.5)
Since for any a and b > a one has
P(a < X ≤ b) = P(X ≤ b) - P(X ≤ a)                         (1.6)
it follows that
P(a < X ≤ b) = F(b) - F(a)                                 (1.7)
One shall now define and consider continuous random variables. A random variable X and the
corresponding distribution are said to be of continuous type or, briefly, continuous if the
corresponding cumulative distribution function F(x) = P(X ≤ x) can be represented by an
integral in the form

F(x) = ∫_{-∞}^{x} f(u) du                                  (1.8)
where the integrand is continuous and is nonnegative. The integrand f is called the probability
density function of X, PDF or, briefly, the density of the distribution. Differentiating one
notices that
F'(x) = f(x)                                               (1.9)
In this sense the density is the derivative of the distribution function.
One also has
∫_{-∞}^{∞} f(u) du = 1                                     (1.10)
Furthermore, one obtains the formula

P(a < X ≤ b) = F(b) - F(a) = ∫_{a}^{b} f(u) du             (1.11)
Hence this probability equals the area under the curve of the density f(x) between x = a and
x = b, as shown in Figure 1.3.
Figure 1.3. Example of probability computation

1.5. Mean and variance of a distribution
The mean value or mean of a distribution is denoted by μ_X and is defined by

μ_X = ∫_{-∞}^{∞} x·f(x) dx                                 (1.12)
where f(x) is the density of continuous random variable X. The mean is also known as the
mathematical expectation of X and is sometimes denoted by E(X).
The variance of a distribution is denoted by σ_X² and is defined by the formula

σ_X² = ∫_{-∞}^{∞} (x - μ_X)²·f(x) dx                       (1.13)
The positive square root of the variance is called the standard deviation and is denoted by σ_X.
Roughly speaking, the variance is a measure of the spread or dispersion of the values which
the corresponding random variable X can assume.
The coefficient of variation of a distribution is denoted by V_X and is defined by the formula

V_X = σ_X/μ_X                                              (1.14)
If a random variable X has mean μ_X and variance σ_X², then the corresponding variable
Z = (X - μ_X)/σ_X has mean 0 and variance 1. Z is called the standardized variable
corresponding to X.
The central moment of order i of a distribution is defined as:

μ_i = ∫_{-∞}^{∞} (x - μ_X)^i·f(x) dx                       (1.15)
From relations (1.13) and (1.15) it follows that the variance is the central moment of second
order.
The skewness coefficient is defined with the following relation:

β₁ = μ₃/μ₂^(3/2)                                           (1.16)
Skewness is a measure of the asymmetry of the probability distribution of a random variable.
The mode of the distribution is the value of the random variable that corresponds to the peak
of the distribution (the most likely value).
The median of the distribution, x_0.5, is the value of the random variable that has a 50%
chance of smaller values and, respectively, a 50% chance of larger values.
The fractile x_p is defined as the value of the random variable X with p non-exceedance
probability (P(X ≤ x_p) = p).
A distribution is said to be symmetric with respect to a number x = c if for every real x,
f(c + x) = f(c - x)                                        (1.17)
For a symmetric distribution the mean, the mode and the median are coincident and the
skewness coefficient is equal to zero.
An asymmetric distribution does not comply with relation (1.17). Asymmetric distributions
have either positive asymmetry (skewness coefficient larger than zero) or negative
asymmetry (skewness coefficient less than zero). Distributions with positive asymmetry have
the peak of the distribution shifted to the left (mode smaller than mean); distributions with
negative asymmetry have the peak of the distribution shifted to the right (mode larger than
mean).
A negative skewness coefficient indicates that the tail on the left side of the probability
density function is longer than the right side and the bulk of the values (including the median)
lies to the right of the mean. A positive skew indicates that the tail on the right side of the
probability density function is longer than the left side and the bulk of the values lie to the left
of the mean. A zero value indicates that the values are relatively evenly distributed on both
sides of the mean, typically implying a symmetric distribution.


Figure 1.4. Asymmetric distributions with positive asymmetry (left) and negative asymmetry
(right) (www.mathwave.com)
2. DISTRIBUTIONS OF PROBABILITY

2.1. Normal distribution
The continuous distribution having the probability density function, PDF

f(x) = [1/(σ_X·√(2π))]·exp[-(1/2)·((x - μ_X)/σ_X)²]        (2.1)
is called the normal distribution or Gauss distribution. A random variable having this
distribution is said to be normal or normally distributed. This distribution is very important,
because many random variables of practical interest are normal or approximately normal or
can be transformed into normal random variables. Furthermore, the normal distribution is a
useful approximation of more complicated distributions.
In Equation 2.1, μ_X is the mean and σ_X is the standard deviation of the distribution. The
curve of f(x) is called the bell-shaped curve. It is symmetric with respect to μ_X. Figure 2.1
shows f(x) for the same μ_X and various values of σ_X (and various values of the coefficient
of variation V).
Figure 2.1. PDFs of the normal distribution for various values of V

The smaller σ_X (and V) is, the higher the peak at x = μ_X and the steeper the descents on
both sides. This agrees with the meaning of variance.
From (2.1) one notices that the normal distribution has the cumulative distribution function,
CDF
F(x) = [1/(σ_X·√(2π))]·∫_{-∞}^{x} exp[-(1/2)·((v - μ_X)/σ_X)²] dv        (2.2)
Figure 2.2 shows F(x) for the same μ_X and various values of σ_X (and various values of the
coefficient of variation V).
From (2.2) one obtains
P(a < X ≤ b) = F(b) - F(a) = [1/(σ_X·√(2π))]·∫_{a}^{b} exp[-(1/2)·((v - μ_X)/σ_X)²] dv   (2.3)
The integral in (2.2) cannot be evaluated by elementary methods, but can be represented in
terms of the integral

Φ(z) = [1/√(2π)]·∫_{-∞}^{z} exp(-u²/2) du                  (2.4)

which is the distribution function of the standardized variable with mean 0 and variance 1 and
has been tabulated. In fact, if one sets (v - μ_X)/σ_X = u, then du/dv = 1/σ_X, and one has to
integrate from -∞ to z = (x - μ_X)/σ_X.
The density function and the distribution function of the normal distribution with mean 0 and
variance 1 are presented in Figure 2.3.
Figure 2.2. CDFs of the normal distribution for various values of V

From (2.2) one obtains

F(x) = [1/√(2π)]·∫_{-∞}^{(x - μ_X)/σ_X} exp(-u²/2) du;

σ_X drops out, and the expression on the right equals (2.4) with z = (x - μ_X)/σ_X, that is,

F(x) = Φ((x - μ_X)/σ_X)                                    (2.5)
Figure 2.3. PDF and CDF of the normal distribution with mean 0 and variance 1

From (2.3) and (2.5) one gets:

P(a < X ≤ b) = F(b) - F(a) = Φ((b - μ_X)/σ_X) - Φ((a - μ_X)/σ_X)         (2.6)
In particular, when a = μ_X - σ_X and b = μ_X + σ_X, the right-hand side equals Φ(1) - Φ(-1);
to a = μ_X - 2σ_X and b = μ_X + 2σ_X there corresponds the value Φ(2) - Φ(-2), etc. Using
tabulated values of the Φ function one thus finds

(a) P(μ_X - σ_X < X ≤ μ_X + σ_X) ≈ 68%
(b) P(μ_X - 2σ_X < X ≤ μ_X + 2σ_X) ≈ 95.5%                 (2.7)
(c) P(μ_X - 3σ_X < X ≤ μ_X + 3σ_X) ≈ 99.7%

Hence one may expect that a large number of observed values of a normal random variable X
will be distributed as follows:
(a) About 2/3 of the values will lie between μ_X - σ_X and μ_X + σ_X
(b) About 95% of the values will lie between μ_X - 2σ_X and μ_X + 2σ_X
(c) About 99% of the values will lie between μ_X - 3σ_X and μ_X + 3σ_X.
Practically speaking, this means that all the values will lie between μ_X - 3σ_X and μ_X + 3σ_X;
these two numbers are called three-sigma limits.
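The probabilities in (2.7) can be verified numerically with Python's `statistics.NormalDist` (a quick check added here, not part of the original notes):

```python
from statistics import NormalDist

# Phi: CDF of the standardized normal variable (mean 0, variance 1)
phi = NormalDist().cdf

for k in (1, 2, 3):
    # P(mu - k*sigma < X <= mu + k*sigma) = Phi(k) - Phi(-k)
    p = phi(k) - phi(-k)
    print(k, round(p, 4))   # 0.6827, 0.9545, 0.9973
```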
The fractile x_p, defined as the value of the random variable X with p non-exceedance
probability (P(X ≤ x_p) = p), is computed as follows:

x_p = μ_X + k_p·σ_X                                        (2.8)

The meaning of k_p becomes clear if one refers to the reduced standard variable z = (x - μ_X)/σ_X.
Thus, x = μ_X + z·σ_X and k_p represents the value of the reduced standard variable for which
Φ(z) = p.
The most common values of k_p are given in Table 2.1.

Table 2.1. Values of k_p for different non-exceedance probabilities p

p     0.01    0.02    0.05    0.95   0.98   0.99
k_p   -2.326  -2.054  -1.645  1.645  2.054  2.326
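Since Φ(k_p) = p, the entries of Table 2.1 can be reproduced with the inverse of the standard normal CDF (an illustrative sketch using the standard library):

```python
from statistics import NormalDist

# k_p in Eq. (2.8) is the standard normal fractile: Phi(k_p) = p
inv_phi = NormalDist().inv_cdf

for p in (0.01, 0.02, 0.05, 0.95, 0.98, 0.99):
    print(p, round(inv_phi(p), 3))
```

As a usage example (with illustrative numbers, not taken from the text): the 5% fractile of a normal variable with μ_X = 350 and σ_X = 35 is x_p = 350 + (-1.645)·35 ≈ 292.4.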


2.2. Log-normal distribution
The log-normal distribution (Hahn & Shapiro, 1967) is defined by the following property: if
the random variable lnX is normally distributed with mean μ_lnX and standard deviation σ_lnX,
then the random variable X is log-normally distributed. Thus, the cumulative distribution
function CDF of the random variable lnX is of normal type:
F(ln x) = [1/(σ_lnX·√(2π))]·∫_{-∞}^{ln x} exp[-(1/2)·((ln v - μ_lnX)/σ_lnX)²] d(ln v)
        = [1/(σ_lnX·√(2π))]·∫_{0}^{x} (1/v)·exp[-(1/2)·((ln v - μ_lnX)/σ_lnX)²] dv       (2.9)
Since:

F(ln x) = ∫_{0}^{x} f(v) dv                                (2.10)
the probability density function PDF results from (2.9) and (2.10):
f(x) = [1/(x·σ_lnX·√(2π))]·exp[-(1/2)·((ln x - μ_lnX)/σ_lnX)²]           (2.11)
The lognormal distribution is asymmetric with positive asymmetry, i.e. the peak of the
distribution is shifted to the left. The skewness coefficient for the lognormal distribution is:

β₁ = 3·V_X + V_X³                                          (2.12)

where V_X is the coefficient of variation of the random variable X. The higher the variability,
the larger the shift of the lognormal distribution.
The mean and the standard deviation of the random variable lnX are related to the mean and
the standard deviation of the random variable X as follows:

μ_lnX = ln[m_X/√(1 + V_X²)]                                (2.13)

σ_lnX = √[ln(1 + V_X²)]                                    (2.14)
In the case of the lognormal distribution the following relation between the mean and the
median holds true:

μ_lnX = ln x_0.5                                           (2.15)
Combining (2.13) and (2.15), it follows that the median of the lognormal distribution is
linked to the mean value by the following relation:

x_0.5 = m_X/√(1 + V_X²)                                    (2.16)
If V_X is small enough (V_X ≤ 0.1), then:

μ_lnX ≅ ln m_X                                             (2.17)

σ_lnX ≅ V_X                                                (2.18)
The PDF and the CDF of the random variable X are presented in Figure 2.4 for different
coefficients of variation.
Figure 2.4. Probability density function, f(x) and cumulative distribution function, F(x)
of the log-normal distribution for various values of V

If one uses the reduced variable (ln v - μ_lnX)/σ_lnX = u, then du/dv = 1/(v·σ_lnX), and one
has to integrate from -∞ to z = (ln x - μ_lnX)/σ_lnX. From (2.4) one obtains:

F(x) = [1/√(2π)]·∫_{-∞}^{(ln x - μ_lnX)/σ_lnX} exp(-u²/2) du = Φ(z)      (2.19)
The fractile x_p, defined as the value of the random variable X with p non-exceedance
probability (P(X ≤ x_p) = p), is computed as follows, given that lnX is normally distributed:

ln x_p = μ_lnX + k_p·σ_lnX                                 (2.20)

From (2.20) one gets:

x_p = e^(μ_lnX + k_p·σ_lnX)                                (2.21)

where k_p represents the value of the reduced standard variable for which Φ(z) = p.
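Equations (2.20) and (2.21), combined with the parameter relations (2.13) and (2.14), can be sketched in Python; m_X, V_X and p are illustrative values:

```python
import math
from statistics import NormalDist

# Fractile of a lognormal variable via Eqs. (2.20)-(2.21)
m_x, v_x, p = 350.0, 0.2, 0.95   # illustrative values

mu_ln = math.log(m_x / math.sqrt(1 + v_x ** 2))   # Eq. (2.13)
sigma_ln = math.sqrt(math.log(1 + v_x ** 2))      # Eq. (2.14)
k_p = NormalDist().inv_cdf(p)                     # Phi(k_p) = p

x_p = math.exp(mu_ln + k_p * sigma_ln)            # Eq. (2.21)
print(round(x_p, 1))
```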

2.3. Extreme value distributions
The field of extreme value theory was pioneered by Leonard Tippett (1902-1985). Emil
Gumbel codified the theory of extreme values in his book Statistics of Extremes, published
in 1958 by Columbia University Press.
Extreme value distributions are the limiting distributions for the minimum or the maximum of
a very large collection of random observations from the same arbitrary distribution. The
extreme value distributions are of interest especially when one deals with natural hazards like
snow, wind, temperature, floods, etc. In all the previously mentioned cases one is not
interested in the distribution of all values but in the distribution of extreme values which
might be the minimum or the maximum values. Figure 2.5 shows the distribution of all values
of the random variable X, as well as the distributions of the minima and of the maxima of X.
Figure 2.5. Distribution of all values, of minima and of maxima of random variable X
2.3.1. Gumbel distribution for maxima in 1 year
The Gumbel distribution for maxima is defined by its cumulative distribution function, CDF:

F(x) = exp[-exp(-α(x - u))]                                (2.22)

where:
u = μ_x - 0.45·σ_x - the mode of the distribution (Figure 2.7)
α = 1.282/σ_x - the dispersion coefficient.
The skewness coefficient of the Gumbel distribution is a positive constant (β₁ = 1.139), i.e. the
distribution is shifted to the left. Figure 2.5 shows the CDF of the Gumbel distribution for
maxima for the random variable X with the same mean μ_x and different coefficients of
variation V_x.
The probability density function, PDF, is obtained straightforwardly from (2.22):

f(x) = dF(x)/dx = α·exp(-α(x - u))·exp[-exp(-α(x - u))]    (2.23)

The PDF of the Gumbel distribution for maxima for the random variable X with the same mean
μ_x and different coefficients of variation V_x is represented in Figure 2.6.

Figure 2.5. CDF of Gumbel distribution for maxima for the random variable X
with the same mean μ_x and different coefficients of variation V_x
Figure 2.6. PDF of Gumbel distribution for maxima for the random variable X
with the same mean μ_x and different coefficients of variation V_x

One can notice in Figure 2.6 that the higher the variability of the random variable, the more
the peak of the PDF shifts to the left.
Figure 2.7. Parameter u in Gumbel distribution for maxima

Given that X follows a Gumbel distribution for maxima, the fractile x_p, defined as the value
of the random variable X with non-exceedance probability p (P(X ≤ x_p) = p), is computed as
follows:

F(x_p) = P(X ≤ x_p) = p = e^{-e^{-α(x_p-u)}}                               (2.24)
From Equation 2.24 it follows:

x_p = u - (1/α)·ln(-ln p) = μ_x - 0.45·σ_x - (σ_x/1.282)·ln(-ln p) = μ_x + k_p^G·σ_x   (2.25)

where:

k_p^G = -0.45 - 0.78·ln(-ln p)                                             (2.26)
The values of k_p^G for different non-exceedance probabilities are given in Table 2.2.

Table 2.2. Values of k_p^G for different non-exceedance probabilities p

  p       0.50     0.90    0.95    0.98
  k_p^G  -0.164    1.305   1.866   2.593
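Equations 2.25-2.26 can be coded directly and checked against Table 2.2; a short Python
sketch (the mean and coefficient of variation in the usage line are hypothetical):

```python
import math

def gumbel_fractile_factor(p):
    # k_p^G = -0.45 - 0.78*ln(-ln p)                       (Eq. 2.26)
    return -0.45 - 0.78 * math.log(-math.log(p))

def gumbel_fractile(mean, cov, p):
    # x_p = mean + k_p^G * sigma = mean*(1 + k_p^G * V)    (Eq. 2.25)
    return mean * (1.0 + gumbel_fractile_factor(p) * cov)

x_98 = gumbel_fractile(300.0, 0.20, 0.98)   # hypothetical mean 300, V = 0.20
```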


2.3.2. Gumbel distribution for maxima in N years
All the previous developments are valid for the distribution of maximum yearly values. If one
considers the probability distribution in N (N > 1) years, the following relation holds true
(provided the occurrences of the maxima are independent events):

F(x)_N years = P(X ≤ x in N years) = [P(X ≤ x in 1 year)]^N = [F(x)_1 year]^N     (2.27)
where:
F(x)_N years   CDF of random variable X in N years
F(x)_1 year    CDF of random variable X in 1 year.
The Gumbel distribution for maxima has a very important property - the reproducibility of
the Gumbel distribution - i.e., if the annual maxima (in 1 year) follow a Gumbel distribution
for maxima, then the maxima in N years will also follow a Gumbel distribution for maxima:

F(x)_N years = [F(x)_1 year]^N = [e^{-e^{-α_1(x-u_1)}}]^N = e^{-N·e^{-α_1(x-u_1)}} =
             = e^{-e^{-α_1(x-u_1)+ln N}} = e^{-e^{-α_1(x-u_1-ln N/α_1)}}          (2.28)
where:
u_1   mode of the distribution in 1 year
α_1   dispersion coefficient in 1 year
u_N = u_1 + ln N/α_1   mode of the distribution in N years
α_N = α_1              dispersion coefficient in N years.
The PDF of the Gumbel distribution for maxima in N years is translated to the right by the
amount ln N/α_1 with respect to the PDF of the Gumbel distribution for maxima in 1 year,
Figure 2.8. Likewise, the CDF of the Gumbel distribution for maxima in N years is translated
to the right by the same amount ln N/α_1 with respect to the CDF in 1 year, Figure 2.9.
Important notice: the superior fractile x_p (p >> 0.5) calculated with the Gumbel distribution
for maxima in 1 year becomes a frequent value (sometimes even an inferior fractile if N is
large, N ≥ 50) when the Gumbel distribution for maxima in N years is employed, Figure 2.10.
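The shift u_N = u_1 + ln N/α_1 and the reproducibility property F_N = (F_1)^N can be verified
numerically; a short Python sketch with hypothetical 1-year parameters:

```python
import math

def gumbel_cdf(x, u, alpha):
    return math.exp(-math.exp(-alpha * (x - u)))

def n_year_params(u1, alpha1, n):
    # u_N = u_1 + ln(N)/alpha_1, alpha_N = alpha_1         (Section 2.3.2)
    return u1 + math.log(n) / alpha1, alpha1

u1, a1 = 273.0, 1.282 / 60.0        # hypothetical 1-year mode and dispersion
u50, a50 = n_year_params(u1, a1, 50)
```

The 50-year CDF evaluated with (u50, a50) coincides with the 1-year CDF raised to the 50th
power, which is exactly the reproducibility property of Eq. 2.28.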

Figure 2.8. PDF of Gumbel distribution for maxima in 1 year and in N years

Figure 2.9. CDF of Gumbel distribution for maxima in 1 year and in N years



Figure 2.10. Superior fractile x_p in 1 year and its significance in N years

2.4. Mean recurrence interval
The loads due to natural hazards such as earthquakes, winds, waves and floods are recognized
as having randomness in time as well as in space. The randomness in time is considered in
terms of the return period or recurrence interval. The recurrence interval, also known as the
return period, is defined as the average (or expected) time between two successive
statistically independent events and it is an estimate of the likelihood of events like an
earthquake, flood or river discharge flow of a certain intensity or size. It is a statistical
measurement denoting the average recurrence interval over an extended period of time. The
actual time T between events is a random variable.
The mean recurrence interval, MRI, of a value larger than x of the random variable X may be
defined as follows:

MRI(X > x) = 1/P_1 year(X > x) = 1/[1 - F_X(x)] = 1/p                      (2.29)
where:
p        annual probability of the event (X > x)
F_X(x)   cumulative distribution function of X.
Thus the mean recurrence interval of a value x is equal to the reciprocal of the annual
probability of exceedance of the value x. The mean recurrence interval or return period has an
inverse relationship with the probability that the event will be exceeded in any one year. For
example, a 10-year flood has a 0.1 (10%) chance of being exceeded in any one year and a
50-year flood has a 0.02 (2%) chance of being exceeded in any one year. It is commonly
assumed that a 10-year earthquake will occur, on average, once every 10 years and that a
100-year earthquake is so large that we expect it to occur only every 100 years. While this
may be statistically true over thousands of years, it is incorrect to think of the return period
in this way. The term return period is actually misleading: it does not necessarily mean that
the design earthquake of a 10-year return period will return every 10 years. It could, in fact,
never occur, or occur twice. This is why the term return period is gradually being replaced by
the term recurrence interval. Researchers have proposed to use the term return period in
relation with the effects and the term recurrence interval in relation with the causes.
The mean recurrence interval is often related to the exceedance probability in N years. The
relation among MRI, N and the exceedance probability in N years, P_exc,N, is:

P_exc,N = 1 - e^{-N/MRI}                                                   (2.30)
Usually the number of years, N, is considered equal to the lifetime of ordinary buildings, i.e.
50 years. Table 2.3 shows the results of relation (2.30) for some particular cases considering
N = 50 years.



Table 2.3. Correspondence amongst MRI, P_exc,1 year and P_exc,50 years

  Mean recurrence       Probability of           Probability of
  interval, years       exceedance in 1 year     exceedance in 50 years
  MRI                   P_exc,1 year             P_exc,50 years
    10                  0.10                     0.99
    30                  0.03                     0.81
    50                  0.02                     0.63
   100                  0.01                     0.39
   225                  0.004                    0.20
   475                  0.002                    0.10
   975                  0.001                    0.05
  2475                  0.0004                   0.02
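The entries of Table 2.3 follow directly from Eq. 2.30; a one-line Python sketch:

```python
import math

def prob_exceedance(mri, n):
    # P_exc,N = 1 - exp(-N/MRI)                            (Eq. 2.30)
    return 1.0 - math.exp(-n / mri)
```

For example, prob_exceedance(475, 50) returns approximately 0.10, the familiar "10% in
50 years" hazard level.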

Modern earthquake resistant design codes define the seismic hazard level based on the
probability of exceedance in 50 years. The seismic hazard due to ground shaking is defined as
horizontal peak ground acceleration, elastic acceleration response spectra or acceleration
time-histories. The level of seismic hazard is expressed by the mean recurrence interval
(mean return period) of the design horizontal peak ground acceleration or, alternatively, by
the probability of exceedance of the design horizontal peak ground acceleration in 50 years.
Four levels of seismic hazard are considered in FEMA 356 - Prestandard and Commentary for
the Seismic Rehabilitation of Buildings, as given in Table 2.4. The correspondence between
the mean recurrence interval and the probability of exceedance in 50 years, based on the
Poisson assumption, is also given in Table 2.4.

Table 2.4. Correspondence between mean recurrence interval and probability of exceedance
in 50 years of design horizontal peak ground acceleration as in FEMA 356

  Seismic Hazard Level   Mean recurrence interval (years)   Probability of exceedance
  SHL1                     72                               50% in 50 years
  SHL2                    225                               20% in 50 years
  SHL3                    475                               10% in 50 years
  SHL4                   2475                                2% in 50 years





2.5. Second order moment models
Let us consider a simply supported beam of span l under a uniformly distributed load q,
Figure 2.11:

Figure 2.11. Simply supported beam

The design condition (ultimate limit state condition) is:

M_max = M_cap
q·l²/8 = σ_y·W

Considering that:
- q and σ_y are described probabilistically, and
- l and W are described deterministically,
and denoting by
S - the sectional effect of the load
R - the sectional resistance,
it follows that:

S = q·l²/8 ;  R = σ_y·W
The following question arises: if q and σ_y are described probabilistically, how can one
describe S and R probabilistically? To answer the question, two cases are considered in the
following:
1. The relation between q and S (respectively σ_y and R) is linear;
2. The relation is non-linear.

Case 1: Linear relation between random variables, X, Y

Y = a·X + b
For the random variable X one knows the probability density function, PDF, the cumulative
distribution function, CDF, the mean, m, and the standard deviation, σ. The unknowns are the
PDF, CDF, m and σ for the random variable Y.
If one applies the equal probability formula, Figure 2.12:

f_X(x)·dx = f_Y(y)·dy = P(x < X ≤ x + dx) = P(y < Y ≤ y + dy)              (2.31)

with x = (y - b)/a, it follows that:

f_Y(y) = f_X(x)·|dx/dy| = (1/a)·f_X(x) = (1/a)·f_X((y - b)/a)              (2.32)

which expresses the distribution of Y through the distribution of X.
Developing further the linear relation it follows that:

m_Y = ∫ y·f_Y(y) dy = ∫ (a·x + b)·f_X(x) dx = a·∫ x·f_X(x) dx + b·∫ f_X(x) dx = a·m_X + b

σ_Y² = ∫ (y - m_Y)²·f_Y(y) dy = ∫ (a·x + b - a·m_X - b)²·f_X(x) dx =
     = a²·∫ (x - m_X)²·f_X(x) dx = a²·σ_X²

m_Y = a·m_X + b
σ_Y = a·σ_X                                                                (2.33)

V_Y = σ_Y/m_Y = a·σ_X/(a·m_X + b)
Figure 2.12. Linear relation between random variables X and Y
Observations:
1. Let the random variable Y = X_1 + X_2 + X_3 + ... + X_n. If the random variables X_i are
normally distributed, then Y is also normally distributed.
2. Let the random variable Y = X_1·X_2·X_3·...·X_n. If the random variables X_i are
log-normally distributed, then Y is also log-normally distributed.
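The exact result of Eq. 2.33 for the linear case can be verified by Monte Carlo simulation; a
short Python sketch (the values of a, b, m_X and σ_X below are hypothetical):

```python
import random
import statistics

# Monte Carlo check of Eq. 2.33 for Y = aX + b with X normal
random.seed(1)
a, b = 2.0, 10.0
m_x, s_x = 100.0, 15.0
xs = [random.gauss(m_x, s_x) for _ in range(100_000)]
ys = [a * x + b for x in xs]

m_y_pred = a * m_x + b     # predicted mean of Y
s_y_pred = a * s_x         # predicted standard deviation of Y
```

The sample mean and standard deviation of ys should match the predictions within sampling
error, and by Observation 1 the simulated Y is itself normally distributed.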

Case 2: Non-linear relation between random variables, X, Y

Let the random variables X_i, i = 1, n, with known means m_Xi and standard deviations σ_Xi.
Let the random variable Y = Y(X_1, X_2, ..., X_i, ..., X_n), the relation being non-linear. If
one develops the function Y in Taylor series around the means and keeps only the first order
term of the series, then the mean m_Y and the standard deviation σ_Y of the new random
variable Y are approximated by:
m_Y ≅ Y(m_X1, m_X2, ..., m_Xn)                                             (2.34)

σ_Y² ≅ (∂y/∂x_1)²·σ_X1² + (∂y/∂x_2)²·σ_X2² + ... + (∂y/∂x_n)²·σ_Xn² =
     = Σ_{i=1..n} (∂y/∂x_i)²·σ_Xi²                                         (2.35)

with the partial derivatives evaluated at the mean values.
Relations 2.34 and 2.35 are the basis for the so-called First Order Second Moment Models,
FOSM.
A few examples of FOSM results are provided in the following:

1. Y = a·X + b:
   m_Y = a·m_X + b ;  σ_Y² = a²·σ_X²  →  σ_Y = a·σ_X

2. Y = X_1·X_2:
   m_Y = m_X1·m_X2
   σ_Y² = (∂Y/∂X_1)²·σ_X1² + (∂Y/∂X_2)²·σ_X2² = m_X2²·σ_X1² + m_X1²·σ_X2²
   V_Y = σ_Y/m_Y = (m_X2²·σ_X1² + m_X1²·σ_X2²)^{1/2}/(m_X1·m_X2) = (V_X1² + V_X2²)^{1/2}
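The second FOSM example, V_Y ≅ (V_X1² + V_X2²)^{1/2} for the product of two independent
variables, can likewise be checked by simulation (the means and coefficients of variation
below are hypothetical):

```python
import math
import random
import statistics

# Monte Carlo check of the FOSM result for Y = X1*X2, X1 and X2 independent normals
random.seed(2)
m1, v1 = 50.0, 0.10
m2, v2 = 20.0, 0.15
ys = [random.gauss(m1, v1 * m1) * random.gauss(m2, v2 * m2) for _ in range(100_000)]

v_y_fosm = math.sqrt(v1**2 + v2**2)                  # first-order approximation
v_y_mc = statistics.stdev(ys) / statistics.mean(ys)  # simulated coefficient of variation
```

The small residual difference between v_y_mc and v_y_fosm is the higher-order term neglected
by the first-order Taylor expansion.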
3. STRUCTURAL RELIABILITY ANALYSIS

3.1. The basic reliability problem
The basic structural reliability problem considers only one load effect S resisted by one
resistance R. Each is described by a known probability density function, f_S( ) and f_R( )
respectively. It is important to notice that R and S must be expressed in the same units.
For convenience, but without loss of generality, only the safety of a structural element will
be considered here and, as usual, that structural element will be considered to have failed if
its resistance R is less than the load effect S acting on it. The probability of failure P_f of
the structural element can be stated in any of the following ways (Melchers, 1999):

P_f = P(R ≤ S)                                                             (3.1a)
    = P(R - S ≤ 0)                                                         (3.1b)
    = P(R/S ≤ 1)                                                           (3.1c)
    = P(ln R - ln S ≤ 0)                                                   (3.1d)
or, in general,
    = P(G(R, S) ≤ 0)                                                       (3.1e)

where G( ) is termed the limit state function and the probability of failure is identical with
the probability of limit state violation.
Quite general density functions f_R and f_S for R and S respectively are shown in Figure 3.1
together with the joint (bivariate) density function f_RS(r,s). For any infinitesimal element
(Δr·Δs) the latter represents the probability that R takes on a value between r and r + Δr
and S a value between s and s + Δs as Δr and Δs each approach zero. In Figure 3.1, Equations
(3.1) are represented by the hatched failure domain D, so that the probability of failure
becomes:

P_f = P(R - S ≤ 0) = ∫∫_D f_RS(r,s) dr ds                                  (3.2)

When R and S are independent, f_RS(r,s) = f_R(r)·f_S(s) and relation (3.2) becomes:

P_f = P(R - S ≤ 0) = ∫∫_{s ≥ r} f_R(r)·f_S(s) dr ds = ∫ F_R(x)·f_S(x) dx   (3.3)
Relation (3.3) is also known as a convolution integral, with meaning easily explained by
reference to Figure 3.2. F_R(x) is the probability that R ≤ x, i.e. the probability that the
actual resistance R of the member is less than some value x. Let this represent failure. The
term f_S(x) represents the probability that the load effect S acting in the member has a value
between x and x + Δx as Δx → 0. By considering all possible values of x, i.e. by taking the
integral over all x, the total probability of failure is obtained. This is also seen in Figure
3.3, where the density functions f_R(r) and f_S(s) have been drawn along the same axis.


Figure 3.1. Joint density function f_RS(r,s), marginal density functions f_R(r) and f_S(s)
and failure domain D (Melchers, 1999)

Figure 3.2. Basic R-S problem: F_R( ) and f_S( ) representation


Figure 3.3. Basic R-S problem: f_R( ) and f_S( ) representation

An alternative to expression (3.3) is:

P_f = ∫ [1 - F_S(x)]·f_R(x) dx                                             (3.4)

which is simply the sum of the failure probabilities over all cases of resistance for which the
load effect exceeds the resistance.
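The convolution integral (3.3) can be evaluated numerically; the sketch below uses
hypothetical normal R and S, for which the result can be compared with the closed-form
solution given further on for normal variables:

```python
from math import erf, exp, pi, sqrt

def norm_pdf(x, mu, sig):
    return exp(-0.5 * ((x - mu) / sig) ** 2) / (sig * sqrt(2.0 * pi))

def norm_cdf(x, mu, sig):
    return 0.5 * (1.0 + erf((x - mu) / (sig * sqrt(2.0))))

# P_f = integral of F_R(x) * f_S(x) dx over all x          (Eq. 3.3)
mu_r, sig_r = 600.0, 60.0        # hypothetical resistance statistics
mu_s, sig_s = 300.0, 45.0        # hypothetical load-effect statistics
dx = 0.25
pf = sum(norm_cdf(x, mu_r, sig_r) * norm_pdf(x, mu_s, sig_s) * dx
         for x in (i * dx for i in range(8000)))   # integrate 0..2000
```

The integration range is chosen so that f_S is negligible outside it; the step dx is small
compared with the standard deviations.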

3.2. Special case: normal random variables
For a few distributions of R and S it is possible to integrate the convolution integral (3.3)
analytically. One notable example is when both are normal random variables with means μ_R
and μ_S and variances σ_R² and σ_S² respectively. The safety margin Z = R - S then has a mean
and a variance given by the rules for addition of normal random variables:

μ_Z = μ_R - μ_S                                                            (3.5a)
σ_Z² = σ_R² + σ_S²                                                         (3.5b)

Equation (3.1b) then becomes:

P_f = P(R - S ≤ 0) = P(Z ≤ 0) = Φ((0 - μ_Z)/σ_Z)                           (3.6)
where Φ( ) is the standard normal distribution function (for the standard normal variate with
zero mean and unit variance). The random variable Z = R - S is shown in Figure 3.4, in which
the failure region Z ≤ 0 is shown shaded. Using (3.5) and (3.6) it follows that (Cornell, 1969):

P_f = Φ[-(μ_R - μ_S)/(σ_R² + σ_S²)^{1/2}] = Φ(-β)                          (3.7)

where β = μ_Z/σ_Z is defined as the reliability (safety) index.
If either of the standard deviations σ_R and σ_S (or both) is increased, the term in square
brackets in (3.7) will become smaller and hence P_f will increase. Similarly, if the difference
between the mean of the load effect and the mean of the resistance is reduced, P_f increases.
These observations may also be deduced from Figure 3.3, taking the amount of overlap of
f_R( ) and f_S( ) as a rough indicator of P_f.
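Equation 3.7 can be sketched in a few lines of Python (the means and standard deviations in
the usage comment are hypothetical):

```python
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def reliability_index_normal(mu_r, sig_r, mu_s, sig_s):
    # Eq. 3.7: beta = (mu_R - mu_S)/sqrt(sig_R^2 + sig_S^2), P_f = Phi(-beta)
    beta = (mu_r - mu_s) / sqrt(sig_r**2 + sig_s**2)
    return beta, std_normal_cdf(-beta)

# e.g. reliability_index_normal(600.0, 60.0, 300.0, 45.0) gives beta = 4.0
```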

Figure 3.4. Distribution of safety margin Z = R - S

3.3. Special case: log-normal random variables
The log-normal model for structural reliability analysis was proposed by Rosenblueth &
Esteva (1972). Both variables of the model, R and S, are considered log-normal random
variables with means μ_R and μ_S and coefficients of variation V_R and V_S respectively. The
safety margin Z = ln(R/S) then has a mean and a standard deviation given by:

μ_Z = μ_ln(R/S) ≅ ln(μ_R/μ_S)                                              (3.8a)
σ_Z = σ_ln(R/S) ≅ V_{R/S} = (V_R² + V_S²)^{1/2}                            (3.8b)
Relation (3.1d) then becomes:

P_f = P(ln(R/S) ≤ 0) = P(Z ≤ 0) = Φ((0 - μ_Z)/σ_Z)                         (3.9)

where Φ( ) is the standard normal distribution function (zero mean and unit variance). The
random variable Z = ln(R/S) is shown in Figure 3.5, in which the failure region Z ≤ 0 is
shown shaded.


Figure 3.5. Distribution of safety margin Z = ln(R/S)
Using (3.8) and (3.9) it follows that

P_f = Φ[-ln(μ_R/μ_S)/(V_R² + V_S²)^{1/2}] = Φ(-β)                          (3.10)

where β = μ_Z/σ_Z is defined as the reliability (safety) index:

β = ln(μ_R/μ_S)/(V_R² + V_S²)^{1/2}                                        (3.11)
Lind proposed the following linearization: (V_R² + V_S²)^{1/2} ≅ α·(V_R + V_S), with
α = 0.70...0.75 for 1/3 ≤ V_R/V_S ≤ 3.
Given Lind's linearization it follows that:

β ≅ ln(μ_R/μ_S)/[α·(V_R + V_S)]                                            (3.12)
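The quality of the linearization is easy to probe numerically; a short Python sketch comparing
Eq. 3.11 with Eq. 3.12 (the means and coefficients of variation in the usage line are
hypothetical):

```python
import math

def beta_lognormal(mu_r, v_r, mu_s, v_s, alpha=0.75):
    """Reliability index: exact form (Eq. 3.11) and Lind's linearized form (Eq. 3.12)."""
    exact = math.log(mu_r / mu_s) / math.sqrt(v_r**2 + v_s**2)
    lind = math.log(mu_r / mu_s) / (alpha * (v_r + v_s))
    return exact, lind

# e.g. beta_lognormal(600.0, 0.10, 300.0, 0.10) - here V_R/V_S = 1, inside [1/3, 3]
```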

3.4. Partial safety coefficients
The calibration of the partial safety coefficients used in semi-probabilistic design codes is
accomplished with the log-normal model using FOSM.
From Equations 3.11 and 3.12 one has:

ln(m_R/m_S) = β·(V_R² + V_S²)^{1/2} ≅ αβ·(V_R + V_S)  →
→  m_R/m_S = e^{αβ(V_R+V_S)} = e^{αβV_R}·e^{αβV_S}

m_R·e^{-αβV_R} = m_S·e^{αβV_S}                                             (3.13)

where e^{-αβV_R} and e^{αβV_S} are called safety coefficients, SC, with respect to the mean
values. One needs, however, the SC with respect to the characteristic values of the loads and
resistances, the so-called partial safety coefficients, PSC. To this aim, one defines the limit
state condition used in the design process:

γ_0.05·R_0.05 = R_design = S_design = γ_0.98·S_0.98                        (3.14)

where γ_0.05 and γ_0.98 are called partial safety coefficients, PSC.
Assuming that S and R are log-normally distributed, one has:

R_0.05 = e^{ln R_0.05} = e^{ln m_R - 1.645·σ_ln R} ≅ m_R·e^{-1.645·V_R}    (3.15)

S_0.98 = e^{ln S_0.98} = e^{ln m_S + 2.054·σ_ln S} ≅ m_S·e^{2.054·V_S}     (3.16)
R_0.05/S_0.98 = (m_R/m_S)·e^{-1.645·V_R}/e^{2.054·V_S} =
              = e^{αβV_R}·e^{αβV_S}·e^{-1.645·V_R}·e^{-2.054·V_S}          (3.17)

R_0.05·e^{(1.645-αβ)·V_R} = S_0.98·e^{(αβ-2.054)·V_S}                      (3.18)

γ_0.05 = e^{(1.645-αβ)·V_R}                                                (3.19)

γ_0.98 = e^{(αβ-2.054)·V_S}                                                (3.20)
The partial safety coefficients γ_0.05 and γ_0.98 as defined by Equations 3.19 and 3.20 depend
on the reliability index β and on the coefficients of variation of the resistances and of the
loads, respectively. If the reliability index β is increased, the partial safety coefficient
for loads γ_0.98 increases while the partial safety coefficient for resistance γ_0.05
decreases. The theory of partial safety coefficients based on the log-normal model is
incorporated in the Romanian code CR0-2005 "Cod de proiectare. Bazele proiectarii
structurilor in constructii" (Design Code. Basis of Structural Design).
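Equations 3.19-3.20 translate directly into code; a minimal Python sketch (the values of β,
V_R and V_S in the usage line are hypothetical):

```python
import math

def partial_safety_coefficients(beta, v_r, v_s, alpha=0.75):
    # gamma_0.05 = exp((1.645 - alpha*beta)*V_R)           (Eq. 3.19)
    # gamma_0.98 = exp((alpha*beta - 2.054)*V_S)           (Eq. 3.20)
    g_r = math.exp((1.645 - alpha * beta) * v_r)
    g_s = math.exp((alpha * beta - 2.054) * v_s)
    return g_r, g_s

# e.g. partial_safety_coefficients(4.0, 0.10, 0.20) gives about (0.87, 1.21),
# consistent with the ranges plotted in Figure 3.6
```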

Figure 3.6. Variation of partial safety coefficients with the coefficient of variation of load
effect (left) and of resistance (right)

4. SEISMIC HAZARD ANALYSIS

4.1. Deterministic seismic hazard analysis (DSHA)
The deterministic seismic hazard analysis involves the development of a particular seismic
scenario, i.e. the postulated occurrence of an earthquake of a specified size at a specified
location. The DSHA is developed in four steps (Reiter, 1990):
1. Identification and characterization of all earthquake sources - geometry and position
of the sources and the maximum magnitude M for each source;
2. Selection of the source-to-site distance parameter, R (epicentral, hypocentral, etc.)
for each source zone;
3. Selection of the controlling earthquake (the one expected to produce the strongest
shaking at the site); use of attenuation relations for computing the ground motion
parameters produced at the site by earthquakes of the magnitudes given in step 1
occurring in each source zone;
4. Definition of the seismic hazard at the site in terms of peak ground acceleration PGA,
spectral acceleration SA, peak ground velocity PGV, etc. (parameter Y).
The steps are represented in Figure 4.1.

Figure 4.1. Steps in DSHA, (Kramer, 1996)

4.2. Probabilistic seismic hazard analysis (PSHA)
The PSHA (Cornell, 1968) is developed in four steps (Reiter, 1990):
1. Identification and characterization of earthquake sources. Besides the information
required in step 1 of DSHA, it is necessary to obtain the probability distribution of
potential rupture locations within the source and the probability distribution of
source-to-site distance;
2. Definition of seismicity, i.e. the temporal distribution of earthquake recurrence
(average rate at which an earthquake of some size will be exceeded);
3. Use of predictive (attenuation) relations for computing the ground motion parameters
produced at the site by earthquakes of any possible size occurring at any possible
point in each source zone; the uncertainty in the attenuation relations is considered
in PSHA;
4. The uncertainties in earthquake location, size and ground motion prediction are
combined and the outcome is the probability that the ground motion parameter will be
exceeded during a particular time period.
The steps are represented in Figure 4.2.

Figure 4.2. Steps in PSHA, (Kramer, 1996)

4.3. Earthquake source characterization
The spatial uncertainty of earthquake location is taken into account in PSHA. The earthquakes
are usually assumed to be uniformly distributed within a particular source. The uniform
distribution in the source zone does not often translate into a uniform distribution of
source-to-site distance.
Another important source of uncertainty is given by the size of the earthquake and by the
temporal occurrence of earthquakes. The recurrence law gives the distribution of earthquake
sizes in a given period of time. Gutenberg & Richter (1944) organized the seismic data in
California according to the number of earthquakes that exceeded different magnitudes during
a time period. The key parameter in Gutenberg & Richter's work was the mean annual rate of
exceedance, λ_M, of an earthquake of magnitude M, which is equal to the number of exceedances
of magnitude M divided by the length of the period of time. The Gutenberg & Richter law is
(Figure 4.3):

lg λ_M = a - b·M                                                           (4.1)

where:
λ_M    - mean annual rate of earthquakes exceeding magnitude M;
M      - magnitude;
a, b   - numerical coefficients depending on the data set.











Figure 4.3. The Gutenberg-Richter law

The physical meaning of the a and b coefficients can be explained as follows:
- 10^a is the mean yearly number of earthquakes of magnitude greater than or equal to 0;
- b describes the relative likelihood of large versus small earthquakes; if b increases, the
number of larger magnitude earthquakes decreases compared to that of smaller earthquakes
(b is the slope of the recurrence plot).
The a and b coefficients are obtained through regression on a database of seismicity
(earthquake catalogues) from the source zone of interest or through the maximum likelihood
method. The earthquake catalogues contain dependent events (foreshocks, aftershocks) that
must be removed from the seismicity database, because PSHA is intended to evaluate the hazard
from discrete, independent releases of seismic energy.
The original Gutenberg & Richter law (4.1) is unbounded in magnitude terms. This leads to
unreliable results, especially at the higher end of the magnitude scale. In order to avoid
this inconsistency, a bounded recurrence law is used. The bounded law is obtained and defined
hereinafter.
The Gutenberg & Richter law may be reshaped as follows:

lg λ_M = ln λ_M / ln 10 = a - b·M                                          (4.2)

ln λ_M = a·ln 10 - b·ln 10·M = α - β·M                                     (4.3)

λ_M = e^{α - βM}                                                           (4.4)

where α = a·ln 10 = 2.303·a and β = b·ln 10 = 2.303·b.
The form (4.4) of the Gutenberg & Richter law shows that the magnitudes follow an exponential
distribution. If the earthquakes smaller than a lower threshold magnitude M_min are
eliminated, one gets (McGuire and Arabasz, 1990):

F_M(M) = P[Mag. ≤ M | M ≥ M_min] = 1 - P[Mag. > M | M ≥ M_min] =
       = 1 - λ_M/λ_Mmin = 1 - e^{-β(M - M_min)}                            (4.5)

f_M(M) = dF_M(M)/dM = β·e^{-β(M - M_min)}                                  (4.6)

λ_Mmin is the mean annual rate of earthquakes of magnitude M larger than or equal to M_min.
If both a lower threshold magnitude M_min and an upper threshold magnitude M_max are taken
into account, the probability distribution of magnitudes can be obtained as follows (McGuire
and Arabasz, 1990). The cumulative distribution function must reach unity for M = M_max.
This yields:

F_M(M) = P[Mag. ≤ M | M_min ≤ M ≤ M_max] = F_M(M)/F_M(M_max) =
       = [1 - e^{-β(M - M_min)}]/[1 - e^{-β(M_max - M_min)}]               (4.7)
f_M(M) = dF_M(M)/dM = β·e^{-β(M - M_min)}/[1 - e^{-β(M_max - M_min)}]      (4.8)
The mean annual rate of exceedance of an earthquake of magnitude M is:

λ_M = λ_Mmin·[1 - F_M(M)] =
    = λ_Mmin·[e^{-β(M - M_min)} - e^{-β(M_max - M_min)}]/[1 - e^{-β(M_max - M_min)}]   (4.9)
where λ_Mmin = e^{α - βM_min} is the mean annual rate of earthquakes of magnitude M larger
than or equal to M_min. Finally one gets (McGuire and Arabasz, 1990):

λ_M = e^{α - βM_min}·[e^{-β(M - M_min)} - e^{-β(M_max - M_min)}]/[1 - e^{-β(M_max - M_min)}] =
    = e^{α - βM_min}·e^{-β(M - M_min)}·[1 - e^{-β(M_max - M)}]/[1 - e^{-β(M_max - M_min)}] =
    = e^{α - βM}·[1 - e^{-β(M_max - M)}]/[1 - e^{-β(M_max - M_min)}]       (4.10)
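The bounded recurrence law (4.10) can be sketched in a few lines; the values of a, b, M_min
and M_max in the usage line are hypothetical:

```python
import math

def rate_bounded(m, a, b, m_min, m_max):
    # Bounded Gutenberg-Richter law                        (Eq. 4.10)
    alpha = a * math.log(10.0)
    beta = b * math.log(10.0)
    num = 1.0 - math.exp(-beta * (m_max - m))
    den = 1.0 - math.exp(-beta * (m_max - m_min))
    return math.exp(alpha - beta * m) * num / den

# e.g. with a = 4, b = 1, M_min = 4, M_max = 8 the rate equals
# lambda_Mmin = 10^(a - b*M_min) = 1.0 at M = M_min and drops to zero at M = M_max
```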

4.4. Predictive relationships (attenuation relations)
The predictive relationships usually take the form Y = f(M, R, P_i), where Y is a ground
motion parameter, M is the magnitude of the earthquake, R is the source-to-site distance and
P_i are other parameters taking into account the earthquake source, the wave propagation path
and the site conditions. The predictive relationships are used to determine the value of a
ground motion parameter for a site, given the occurrence of an earthquake of a certain
magnitude at a given distance. The coefficients of the predictive relationships are obtained
through multi-variate regression on a particular set of strong motion data for a given seismic
source. The uncertainty in the evaluation of the ground motion parameters is incorporated in
the predictive relationships through the standard deviation of the logarithm of the predicted
parameter. One can compute the probability that the ground motion parameter Y exceeds a
certain value y* for an earthquake of magnitude m at a given distance r (Figure 4.4):

P(Y > y* | m, r) = 1 - F_Y(y* | m, r)                                      (4.11)

where F_Y is the CDF of the ground motion parameter, usually assumed log-normal.

Figure 4.4. Incorporation of uncertainties in the predictive relationships
4.5. Temporal uncertainty
The distribution of earthquake occurrence with respect to time is considered to have a random
character. The temporal occurrence of earthquakes is considered to follow, in most cases, a
Poisson model, the values of the random variable of interest describing the number of
occurrences of a particular event during a given time interval.
The properties of the Poisson process are:
- the number of occurrences in one time interval is independent of the number of occurrences
in any other time interval;
- the probability of occurrence during a very short time interval is proportional to the
length of the time interval;
- the probability of more than one occurrence during a very short time interval is negligible.
If N is the number of occurrences of a particular event during a given time interval, the
probability of having n occurrences in that time interval is:

P[N = n] = μ^n·e^{-μ}/n!                                                   (4.12)

where μ is the average number of occurrences of the event in that time interval.
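Equation 4.12 also closes the loop with the mean recurrence interval of Section 2.4: the
probability of at least one occurrence is 1 - P[N = 0]. A minimal Python sketch:

```python
import math

def poisson_pmf(n, mu):
    # P[N = n] = mu^n * exp(-mu) / n!                      (Eq. 4.12)
    return mu**n * math.exp(-mu) / math.factorial(n)

# probability of at least one exceedance of a 475-year event in 50 years:
p_at_least_one = 1.0 - poisson_pmf(0, 50.0 / 475.0)   # ~0.10, as in Table 2.3
```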

4.6. Probability computations
The results of the PSHA are given as seismic hazard curves quantifying the annual probability
(or mean annual frequency) of exceedance of different values of the selected ground motion
parameter.
The probability of exceedance of a particular value y* of a ground motion parameter Y is
calculated for one possible earthquake at one possible source location and then multiplied by
the probability that an earthquake of that particular magnitude would occur at that particular
location. The process is then repeated for all possible magnitudes and locations, with the
probabilities of each summed (Kramer, 1996):

P(Y > y*) = ∫ P(Y > y* | X)·f_X(x) dx                                      (4.13)

where X is a vector of random variables that influence Y (usually the magnitude M and the
source-to-site distance R). Assuming M and R independent, for a given earthquake recurrence,
the probability of exceeding a particular value y* is calculated using the total probability
theorem (Cornell, 1968, Kramer, 1996):

P(Y > y*) = ∫∫ P(Y > y* | m, r)·f_M(m)·f_R(r) dm dr                        (4.14)

where:
- P(Y > y* | m, r) is the probability of exceedance of y* given the occurrence of an
earthquake of magnitude m at source-to-site distance r;
- f_M(m) is the probability density function for magnitude;
- f_R(r) is the probability density function for source-to-site distance.
For a given earthquake recurrence, the mean annual rate of exceedance of a particular value
of peak ground acceleration, PGA*, is calculated using the total probability theorem (Cornell,
1968, Kramer, 1996):

λ(PGA > PGA*) = λ_Mmin·∫∫ P(PGA > PGA* | m, r)·f_M(m)·f_R(r) dm dr         (4.15)

where:
- λ(PGA > PGA*) is the mean annual rate of exceedance of PGA*;
- λ_Mmin is the mean annual rate of earthquakes of magnitude M larger than or equal to M_min;
- P(PGA > PGA* | m, r) is the probability of exceedance of PGA* given the occurrence of an
earthquake of magnitude m at source-to-site distance r.
The mean annual rate of exceedance of PGA - the hazard curve - for the Bucharest site and the
Vrancea seismic source is represented in Figure 4.5.
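A discretized version of Eq. 4.15 for a single source can be sketched as below. The magnitude
and distance grids, their probability masses, the rate λ_Mmin and the attenuation law are all
hypothetical toy values, used only to show the structure of the double sum:

```python
import math
from math import erf, sqrt

def std_normal_cdf(z):
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def hazard_rate(pga_star, lam_mmin, mags, p_m, dists, p_r, sigma_ln=0.6):
    # Discrete Eq. 4.15: sum P(PGA > pga* | m, r) * f_M(m)dm * f_R(r)dr, scaled by lam_Mmin
    total = 0.0
    for m, pm in zip(mags, p_m):
        for r, pr in zip(dists, p_r):
            ln_median = 0.5 + 0.9 * m - 1.0 * math.log(r)   # toy attenuation law
            p_exc = 1.0 - std_normal_cdf((math.log(pga_star) - ln_median) / sigma_ln)
            total += p_exc * pm * pr
    return lam_mmin * total

mags, p_m = [5.0, 6.0, 7.0], [0.6, 0.3, 0.1]    # f_M(m)*dm, masses sum to 1
dists, p_r = [20.0, 40.0], [0.5, 0.5]           # f_R(r)*dr, masses sum to 1
```

Repeating the computation for increasing values of pga_star traces out a decreasing hazard
curve of the kind shown in Figure 4.5.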
Figure 4.5. Hazard curve for Bucharest from Vrancea seismic source
5. INTRODUCTION TO RANDOM PROCESSES THEORY

5.1. Background
Physical phenomena of common interest in engineering are usually measured in terms of
amplitude versus time function, referred to as a time history record. There are certain types of
physical phenomena where specific time history records of future measurements can be
predicted with reasonable accuracy based on ones knowledge of physics and/or prior
observations of experimental results, for example, the force generated by an unbalanced
rotating wheel, the position of a satellite in orbit about the earth, the response of a structure to
a step load. Such phenomena are referred to as deterministic, and methods for analyzing their
time history records are well known. Many physical phenomena of engineering interest,
however, are not deterministic, that is, each experiment produces a unique time history record
which is not likely to be repeated and cannot be accurately predicted in detail. Such processes
and the physical phenomena they represent are called random or stochastic. In this case a
single record is not as meaningful as a statistical description of the totality of possible
records.
As just mentioned, a physical phenomenon and the data representing it are considered random
when a future time history record from an experiment cannot be predicted within reasonable
experimental error. In such cases, the resulting time history from a given experiment
represents only one physical realization of what might have occurred. To fully understand the
data, one should conceptually think in terms of all the time history records that could have
occurred, as illustrated in Figure 5.1.






The ensemble given by all $x_i(t)$ defines the stochastic process, while $x_1(t), x_2(t), \ldots, x_n(t)$ are the samples.





Figure 5.1. Ensemble of time history records defining a random process
This collection of all time history records $x_i(t)$, i = 1, 2, 3, …, which might have been produced
by the experiment, is called the ensemble that defines a random process {x(t)} describing the
phenomenon. Any single individual time history belonging to the ensemble is called a sample
phenomenon. Any single individual time history belonging to the ensemble is called a sample
function. Each sample function is sketched as a function of time. The time interval involved is
the same for each sample. There is a continuous infinity of different possible sample
functions, of which only a few are shown. All of these represent possible outcomes of
experiments which the experimenter considers to be performed under identical conditions.
Because of variables beyond the experimenter's control the samples are actually different. Some samples are
more probable than others and to describe the random process further it is necessary to give
probability information.

5.2. Average properties for describing internal structure of a random process
If one makes a section at $t_j$, one obtains the values $x_i(t_j)$ of a random variable. Given
an ensemble of time history records {x(t)} describing a phenomenon of interest, the average
properties of the data can be readily computed at any specific time $t_j$ by averaging over the
ensemble (theoretically for $n \to \infty$).
The statistical indicators are:
- the mean: $m_x(t_j) = \frac{1}{n}\sum_{i=1}^{n} x_i(t_j)$  (5.1); $m_x(t)$ is a deterministic function of time;
- the mean square value $\overline{x^2}(t_j)$: $\overline{x^2}(t_j) = \frac{1}{n}\sum_{i=1}^{n} x_i^2(t_j)$  (5.2)
- the variance $\sigma_x^2(t_j)$: $\sigma_x^2(t_j) = \frac{1}{n}\sum_{i=1}^{n} \left[x_i(t_j) - m_x(t_j)\right]^2$  (5.3); $\sigma_x(t_j) = \sqrt{\sigma_x^2(t_j)}$

Considering two different random processes $x(t)$ and $x^*(t)$, one may get the same mean and
the same variance for both processes (Figure 5.2).
Obviously, the mean and the variance cannot describe completely the internal structure of a
random (stochastic) process and additional higher order average values are needed for a
complete description.
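The ensemble averages of Equations 5.1 through 5.4 can be illustrated numerically. The process below, $x_i(t) = A_i \sin(2\pi t) + B_i$ with random amplitudes $A_i$ and offsets $B_i$, is an invented example:

```python
import random
import math

# Numerical illustration of the ensemble averages (Eqs. 5.1-5.4) for a
# synthetic random process x_i(t) = A_i*sin(2*pi*t) + B_i; the process and
# its parameter values are invented for illustration only.

random.seed(1)
n = 20000                       # number of samples in the ensemble
t_j, t_k = 0.15, 0.40           # two sections through the ensemble

samples = [(random.gauss(1.0, 0.3), random.gauss(0.0, 0.5)) for _ in range(n)]

def x(a, b, t):
    return a * math.sin(2 * math.pi * t) + b

xj = [x(a, b, t_j) for a, b in samples]
xk = [x(a, b, t_k) for a, b in samples]

m_j = sum(xj) / n                               # mean, Eq. 5.1
msq_j = sum(v * v for v in xj) / n              # mean square, Eq. 5.2
var_j = sum((v - m_j) ** 2 for v in xj) / n     # variance, Eq. 5.3
r_jk = sum(u * v for u, v in zip(xj, xk)) / n   # correlation function, Eq. 5.4

# the identity variance = mean square - (mean)^2 holds for the estimates
print(abs(var_j - (msq_j - m_j ** 2)) < 1e-9)
```

The estimate `r_jk` of Equation 5.4 depends on both section times, which is the time dependence noted for the correlation function.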
The average product of the data values at times $t_j$ and $t_k$, called the correlation function, is given by:

$R_x(t_j, t_k) = \frac{1}{n}\sum_{i=1}^{n} x_i(t_j)\, x_i(t_k)$  (5.4)

which is time dependent.



[Figure annotations: the bands $m_x(t) \pm \sigma_x(t)$ and $m_{x^*}(t) \pm \sigma_{x^*}(t)$ are shown; $m_x(t) = m_{x^*}(t)$ and $\sigma_x(t) = \sigma_{x^*}(t)$, but $x(t)$ and $x^*(t)$ are different.]


Figure 5.2. Two different processes with same mean and same variance

Furthermore, the average product of the data values at times $t$ and $t+\tau$ is called the
autocorrelation at time delay $\tau$:

$R_x(t, t+\tau) = \frac{1}{n}\sum_{i=1}^{n} x_i(t)\, x_i(t+\tau)$  (5.5)

If $\tau = 0$, then $R_x(t, t) = \overline{x^2}(t)$.  (5.6)
The covariance function is defined by:

$K_x(t_j, t_k) = \frac{1}{n}\sum_{i=1}^{n} \left[x_i(t_j) - m_x(t_j)\right]\left[x_i(t_k) - m_x(t_k)\right]$  (5.7)
An important property of the covariance function is easily obtained by developing the right hand
side member of Equation (5.7):

$K_x(t_j, t_k) = \frac{1}{n}\sum_{i=1}^{n} x_i(t_j) x_i(t_k) - m_x(t_k)\frac{1}{n}\sum_{i=1}^{n} x_i(t_j) - m_x(t_j)\frac{1}{n}\sum_{i=1}^{n} x_i(t_k) + m_x(t_j)\, m_x(t_k)$

$K_x(t_j, t_k) = R_x(t_j, t_k) - m_x(t_j)\, m_x(t_k)$  (5.8)

If $t_j = t_k = t$:

$\sigma_x^2(t) = \overline{x^2}(t) - m_x^2(t)$  (5.9)

5.3. Main simplifying assumptions
The following assumptions greatly reduce the computational effort and enable the
development of analytical solutions in random processes theory.

Stationarity: A random process is said to be stationary if its probability distributions are
invariant under a shift of the time scale; i.e., the family of probability densities applicable now
also applies 10 minutes from now or 3 weeks from now. This implies that all the averages are
constants, independent of time. Then, the autocorrelation function depends only on the time lag
between $t_j$ and $t_k$.
For stationary data, the average values at all times can be computed from appropriate
ensemble averages at a single time t (Figure 5.3).

Figure 5.3. Statistical properties of stationary random processes

It is possible to partially verify the stationary assumption experimentally by obtaining a large
family of sample functions and then calculating averages such as the mean and
autocorrelation for many different times. If the stationary hypothesis is warranted there should
be substantial agreement among the results at different times.
For a process to be strictly stationary it can have no beginning and no end. Each sample must
extend from $t = -\infty$ to $t = +\infty$. Real processes do in fact start and stop and thus cannot be truly
stationary. The nonstationary effects associated with starting and stopping are often neglected
in practice if the period of stationary operation is long compared with the starting and
stopping intervals. If changes in the statistical properties of a process occur slowly with time,
it is sometimes possible to subdivide the process in time into several processes of shorter
duration, each of which may be considered as reasonably stationary.
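The partial experimental verification described above can be sketched as follows, using the classical stationary process $x_i(t) = \sin(2\pi t + \varphi_i)$ with uniformly random phases (an assumed example):

```python
import random
import math

# Partial experimental check of stationarity: compute ensemble averages at
# several different times and verify that they agree. The random-phase sine
# process used here is a textbook stationary example, chosen for illustration.

random.seed(3)
n = 50000
phases = [random.uniform(0.0, 2 * math.pi) for _ in range(n)]

def ensemble_stats(t):
    vals = [math.sin(2 * math.pi * t + p) for p in phases]
    m = sum(vals) / n                       # ensemble mean at time t
    msq = sum(v * v for v in vals) / n      # ensemble mean square at time t
    return m, msq

stats = [ensemble_stats(t) for t in (0.0, 0.33, 1.7, 5.2)]
# for a stationary process the averages should agree at all times
# (here mean close to 0 and mean square close to 1/2)
for m, s in stats:
    print(abs(m) < 0.02, abs(s - 0.5) < 0.02)
```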

Ergodicity: All the averages discussed so far have been ensemble averages. Given a single
sample x(t) of duration T it is, however, possible to obtain averages by averaging with respect
to time along the sample. Such an average is called a temporal average, in contrast to the
ensemble averages described previously.
Within the subclass of stationary random processes there exists a further subclass known as
ergodic processes. An ergodic process is one for which ensemble averages are equal to the
corresponding temporal averages taken along any representative sample function.
For almost all stationary data, the average values computed over the ensemble at time t will
equal the corresponding average values computed over time from a single time history record
(theoretically infinitely long, $T \to \infty$; practically, long enough). For example, the average
values may be computed as:

$m_x = \frac{1}{T}\int_0^T x(t)\, dt$  (5.10)

$\overline{x^2} = \frac{1}{T}\int_0^T x^2(t)\, dt$  (5.11)

$\sigma_x^2 = \frac{1}{T}\int_0^T \left[x(t) - m_x\right]^2 dt = \frac{1}{T}\int_0^T \left[x^2(t) - 2 m_x x(t) + m_x^2\right] dt = \overline{x^2} - 2 m_x^2 + m_x^2$

$\sigma_x^2 = \overline{x^2} - m_x^2$  (5.12)
where x(t) is any arbitrary record from the ensemble {x(t)}.
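Equations 5.10 through 5.12 can be checked on a single long record. The record below is an AR(1) recursion with an added constant mean, chosen only as a convenient stationary example:

```python
import random

# Temporal averages (Eqs. 5.10-5.12) estimated on one long record of a
# presumably ergodic process. The AR(1) recursion and its parameter values
# are invented for illustration only.

random.seed(2)
n_steps = 200000
mean_true = 2.0
rho = 0.9                                  # AR(1) coefficient
x, rec = 0.0, []
for _ in range(n_steps):
    x = rho * x + random.gauss(0.0, 1.0)   # zero-mean AR(1) fluctuation
    rec.append(mean_true + x)

m_x = sum(rec) / n_steps                           # Eq. 5.10
msq = sum(v * v for v in rec) / n_steps            # Eq. 5.11
var = sum((v - m_x) ** 2 for v in rec) / n_steps   # Eq. 5.12

# Eq. 5.12: variance = mean square - (mean)^2
print(abs(var - (msq - m_x ** 2)) < 1e-6)
print(abs(m_x - mean_true) < 0.2)
```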
The autocorrelation function becomes (Figure 5.4):

$R_x(\tau) = \frac{1}{T}\int_0^T x(t)\, x(t+\tau)\, dt$  (5.13)


Figure 5.4. Autocorrelation function for a stationary random process

For $\tau = 0$:

$R_x(0) = \overline{x^2}$  (5.14)

The autocorrelation function is a symmetric function with respect to the time origin, with its
maximum at the origin:

$R_x(\tau) = R_x(-\tau)$; $\left|R_x(\tau)\right| \le R_x(0) = \overline{x^2}$  (5.15)
Sometimes one uses, instead of $R_x(\tau)$, the normalized autocorrelation function

$\rho_x(\tau) = \frac{R_x(\tau)}{\overline{x^2}}$  (5.16)

for which $\rho_x(0) = 1$.
The autocovariance function is defined as:

$K_x(\tau) = \frac{1}{T}\int_0^T \left[x(t) - m_x\right]\left[x(t+\tau) - m_x\right] dt$  (5.17)
Developing (5.17) one gets:

$K_x(\tau) = R_x(\tau) - \frac{m_x}{T}\int_0^T x(t)\, dt - \frac{m_x}{T}\int_0^T x(t+\tau)\, dt + \frac{m_x^2}{T}\int_0^T dt$

$K_x(\tau) = R_x(\tau) - m_x^2$  (5.18)
If $\tau = 0$:

$K_x(0) = \overline{x^2} - m_x^2 = \sigma_x^2$  (5.19)
For stationary data, the properties computed from time averages over individual records of the
ensemble will be the same from one record to the next and will equal the corresponding
properties computed from an ensemble average over the records at any time t if (Papoulis,
1991):
$\frac{1}{2T}\int_{-T}^{T} K_x(\tau)\, d\tau \to 0 \quad \text{as } T \to \infty$  (5.20)
In practice, violations of Equation 5.20 are usually associated with the presence of periodic
components in the data.
In conclusion, for stationary ergodic processes, the ensemble is replaced by one representative
function, the ensemble averages thus being replaced by temporal averages, Figure 5.5.
An ergodic process is necessarily stationary, since otherwise the temporal average (a constant)
could not equal the ensemble average, which is generally a function of the time at which the
ensemble average is performed, except in the case of stationary processes. A random process can,
however, be stationary without being ergodic. Each sample of an ergodic process must be
completely representative of the entire process.
It is possible to verify experimentally whether a particular process is or is not ergodic by
processing a large number of samples, but this is a very time-consuming task. On the other
hand, a great simplification results if it can be assumed ahead of time that a particular process
is ergodic. All statistical information can be obtained from a single sufficiently long sample.
In situations where statistical estimates are desired but only one sample of a stationary process
is available, it is common practice to proceed on the assumption that the process is ergodic.
Zero mean: It is very convenient to perform the computations for zero mean random
processes. Even if the stationary random process is not zero mean, one can perform a
translation of the time axis by the mean value of the process (Figure 5.6).
In the particular case of zero mean random processes, the following relations hold true:

$\sigma_x^2 = \overline{x^2} = R_x(0)$ (since $m_x = 0$)  (5.21)

$K_x(\tau) = R_x(\tau) - m_x^2 = R_x(\tau)$  (5.22)


t
j
t
j
t
j
t
t
t
) (
1
t x
) (t x
i
) (t x
n
representative sample
x
m ) (t x
t

Figure 5.5. Representative sample for a stationary ergodic random process



Figure 5.6. Zero mean stationary ergodic random process

Normality (Gaussian): Before stating the normality assumption, some considerations will be
given to probability distribution of stochastic processes.

5.4. Probability distribution
Referring again to the ensemble of measurements, assume there is a special interest in a
measured value at some time $t_1$ that is $\xi$ units or less, that is, $x(t_1) \le \xi$. It follows that the
probability of this occurrence is

$\text{Prob}\left[x(t_1) \le \xi\right] = \lim_{N \to \infty} \frac{N\left[x(t_1) \le \xi\right]}{N}$  (5.23)

where $N[x(t_1) \le \xi]$ is the number of measurements with an amplitude less than or equal to $\xi$ at
time $t_1$. The probability statement in Equation 5.23 can be generalized by letting the amplitude $\xi$
take on arbitrary values, as illustrated in Figure 5.7. The resulting function of $\xi$ is the
cumulative distribution function, CDF, of the random process {x(t)} at time $t_1$, and
is given by

$F_x(\xi, t_1) = \text{Prob}\left[x(t_1) \le \xi\right]$  (5.24)

Figure 5.7. General probability distribution function
The probability distribution function then defines the probability that the instantaneous value
of {x(t)} from a future experiment at time $t_1$ will be less than or equal to the amplitude $\xi$ of
interest. For the general case of nonstationary data, this probability will vary with the time $t_1$.
For the special case of stationary ergodic data, the probability distribution function will be the
same at all times and can be determined from a single measurement x(t) by

$F_X(\xi) = \text{Prob}\left[x(t) \le \xi\right] = \lim_{T \to \infty} \frac{T\left[x(t) \le \xi\right]}{T}$  (5.25)

where $T[x(t) \le \xi]$ is the total time that x(t) is less than or equal to the amplitude $\xi$, as illustrated
in Figure 5.8.
In this case, the probability distribution function defines the probability that the instantaneous
value of x(t) from a future experiment at an arbitrary time will be less than or equal to any
amplitude of interest.
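A minimal sketch of Equation 5.25: the distribution function of a stationary ergodic record is estimated by the fraction of time (here, of samples) spent at or below an amplitude. The record is synthetic standard Gaussian noise, an assumed example, so the estimate can be compared with the exact normal CDF:

```python
import random
import math

# Eq. 5.25 on a single record: Prob[x(t) <= xi] is approximated by the
# fraction of the record at or below xi. The Gaussian record is illustrative.

random.seed(4)
rec = [random.gauss(0.0, 1.0) for _ in range(100000)]

def F_hat(xi):
    """Fraction of the record with amplitude <= xi (time-average estimate)."""
    return sum(1 for v in rec if v <= xi) / len(rec)

def Phi(z):
    """Exact standard normal CDF, for comparison."""
    return 0.5 * math.erfc(-z / math.sqrt(2.0))

for xi in (-1.0, 0.0, 1.0):
    print(round(F_hat(xi), 2), round(Phi(xi), 2))
```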


Figure 5.8. Probability distribution function of stationary data

There is a very special theoretical process with a number of remarkable properties one of the
most remarkable being that in the stationary case the spectral density (or autocorrelation
function) does provide enough information to construct the probability distributions. This
special random process is called the normal random process.
Many random processes in nature which play the role of excitations to vibratory systems are
at least approximately normal. The normal (Gaussian) assumption implies that the probability
distribution of all ordinates of the stochastic process follows a normal (Gaussian) distribution
(Figure 5.9).
A very important property of the normal or Gaussian process is its behavior with respect to
linear systems. When the excitation of a linear system is a normal process the response will be
in general a very different random process but it will still be normal.

[Figure annotations: $f_X\, dx = P\left(x < X(t) \le x + dx\right)$; normal distribution of the ordinates of x(t).]
Figure 5.9. Normal zero mean stationary ergodic stochastic process

5.5. Other practical considerations
The number of time history records that might be available for analysis by ensemble
averaging procedures, or the length of a given sample record available for analysis by time
averaging procedures, will always be finite; that is, the limiting operations $n \to \infty$ and $T \to \infty$
can never be realized in practice. It follows that the average values of the data can only be
estimated and never computed exactly.
In most laboratory experiments, one can usually force the results of the experiment to be
stationary by simply maintaining constant experimental conditions. In many field experiments
there is no difficulty in performing the experiments under constant conditions to obtain
stationary data. There are, however, some exceptions. A class of exceptions is where the basic
parameters of the mechanisms producing the data are acts of nature that cannot be controlled
by the experimenter. Examples include time history data for seismic ground motions,
atmospheric gust velocities and ocean wave heights. In these cases, one cannot even design
repeated experiments that would produce a meaningful ensemble. You simply take what you
get. The usual approach in analyzing such data is to select from the available records quasi-
stationary segments that are sufficiently long to provide statistically meaningful results for the
existing conditions.
6. POWER SPECTRAL DENSITY OF STATIONARY RANDOM FUNCTIONS

6.1. Background and definitions
One recalls that for stationary processes the autocorrelation function $R_x(\tau)$ is a function of the
time lag $\tau = t_2 - t_1$. A frequency decomposition of $R_x(\tau)$ can be made in the following way:

$R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega)\, e^{i\omega\tau}\, d\omega$  (6.1)

where $S_x(\omega)$ is essentially (except for the factor $2\pi$) the Fourier transform of $R_x(\tau)$:

$S_x(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} R_x(\tau)\, e^{-i\omega\tau}\, d\tau$  (6.2)

$S_x(\omega)$ is a non-negative, even function of $\omega$.
A physical meaning can be given to $S_x(\omega)$ by considering the limiting case of Equation 6.1 in
which $\tau = 0$:

$R_x(0) = \overline{x^2} = \int_{-\infty}^{+\infty} S_x(\omega)\, d\omega$  (6.3)

The mean square of the process equals the sum over all frequencies of $S_x(\omega)\, d\omega$, so that $S_x(\omega)$
can be interpreted as a mean square spectral density (or power spectral density, PSD). If the
process has zero mean, the mean square of the process is equal to the variance of the process:

$\sigma_x^2 = \overline{x^2} = R_x(0) = \int_{-\infty}^{+\infty} S_x(\omega)\, d\omega$  (6.4)
Equations 6.1 and 6.2, used to define the spectral density, are usually called the Wiener-
Khintchine relations and point out that $R_x(\tau)$ and $S_x(\omega)$ are a pair of Fourier transforms.
Note that the dimensions of $S_x(\omega)$ are mean square per unit of circular frequency. Note also
that, according to Equation 6.3, both negative and positive frequencies are counted.
It is as if the random process were decomposed into a sum of harmonic oscillations of random
amplitudes at different frequencies and the variance of the random amplitudes were plotted
against frequency. Thus the power spectral density (PSD) is obtained (Figure 6.1). The power
spectral density function gives the image of the frequency content of the stochastic process.
As one can notice from Equation 6.3 and from Figure 6.1, the area enclosed by the PSD
function in the range $[\omega_1, \omega_2]$ represents the variance of the amplitudes of the process in that
particular range. The predominant frequency of the process, $\omega_p$, indicates the frequency
around which most of the power of the process is concentrated.
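Equations 6.1 and 6.3 can be verified numerically for a simple bilateral PSD that is constant $S_0$ on $[-\omega_2, -\omega_1] \cup [\omega_1, \omega_2]$ and zero elsewhere (the band-limited spectrum discussed in Section 6.4; the numerical values are illustrative):

```python
import math

# Numerical check of Eqs. 6.1/6.7 and 6.3 for a bilateral band-limited PSD
# that equals S0 on [-w2,-w1] U [w1,w2]; all parameter values are illustrative.

S0, w1, w2 = 0.5, 2.0, 6.0

def R(tau, n=20000):
    """R(tau) = integral of S(w) cos(w*tau) dw over both bands (midpoint rule)."""
    dw = (w2 - w1) / n
    acc = 0.0
    for i in range(n):
        w = w1 + (i + 0.5) * dw
        acc += S0 * math.cos(w * tau) * dw
    return 2.0 * acc          # the two symmetric bands contribute equally

var_exact = 2.0 * S0 * (w2 - w1)    # area under the PSD
print(round(R(0.0), 6), var_exact)  # R(0) equals the variance (Eq. 6.3)
print(abs(R(0.7) - R(-0.7)) < 1e-12)  # R is an even function of tau
```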


Figure 6.1 Power spectral density function

The PSD so defined is convenient in analytical investigations. In experimental work a
different unit is widely used. The differences arise owing to the use of cycles per unit time in
place of radians per unit time and owing to counting only positive frequencies. The
experimental spectral density will be denoted by $G_x(f)$, where f is the frequency in Hz. The
relation between $S_x(\omega)$ and $G_x(f)$ is simply

$G_x(f) = 4\pi\, S_x(\omega)$  (6.5)

The factor $4\pi$ is made up of a factor of $2\pi$ accounting for the change in frequency units and a
factor of 2 accounting for the consideration of positive frequencies only, instead of both
positive and negative frequencies for an even function of frequency. Instead of relation (6.3)
one has

$\overline{x^2} = R_x(0) = \int_{-\infty}^{+\infty} S_x(\omega)\, d\omega = \int_0^{+\infty} G_x(\omega)\, d\omega = 2\int_0^{+\infty} S_x(\omega)\, d\omega = \int_0^{+\infty} G_x(f)\, df$  (6.6)

for the relation between the variance of the zero-mean process and the experimental PSD.

The following notations are used (Figure 6.2):
- $S_x(\omega)$ is the bilateral PSD;
- $G_x(\omega) = 2 S_x(\omega)$ is the unilateral PSD;
- $\omega = 2\pi f$;
- $G_x(f) = 2\pi\, G_x(\omega)$ is the unilateral PSD expressed in terms of the frequency f;
- $g_x(f) = G_x(f) / \sigma_x^2$ is the normalized unilateral PSD (the area under $g_x(f)$ equals 1).
Figure 6.2. Various representations of PSD

The PSD is the Fourier transform of a temporal average (the autocorrelation function) and
plays the role of a density distribution of the variance along the frequency axis.
For real processes the autocorrelation function is an even function of $\tau$ and it assumes its
peak value at the origin. If the process contains no periodic components, then the
autocorrelation function approaches zero as $\tau \to \infty$. If the process has a sinusoidal component
of frequency $\omega_0$, then the autocorrelation function will have a sinusoidal component of the
same frequency as $\tau \to \infty$.
If the process contains no periodic components, then the PSD is finite everywhere. If, however,
the process has a sinusoidal component of frequency $\omega_0$, then the sinusoid contributes a finite
amount to the total mean square and the PSD must be infinite at $\omega = \omega_0$; i.e., the spectrum has
an infinitely high peak of zero width enclosing a finite area at $\omega_0$. In particular, if the process
does not have zero mean, then there is a finite zero frequency component and the spectrum
will have a spike at $\omega = 0$.
For practical computations, Equations 6.1 and 6.2 become:

$R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) \cos \omega\tau\, d\omega$  (6.7)

$S_x(\omega) = \frac{1}{2\pi}\int_{-\infty}^{+\infty} R_x(\tau) \cos \omega\tau\, d\tau$  (6.8)

if one uses $e^{-i\omega t} = \cos \omega t - i \sin \omega t$
and keeps just the real parts of the integrands in relations (6.1) and (6.2).

6.2. Properties of first and second time derivatives
Given a stationary random process x(t), the random process $\dot{x}(t) = \frac{d}{dt}\left[x(t)\right]$, each of
whose sample functions is the temporal derivative of the corresponding sample x(t), and
$\ddot{x}(t) = \frac{d^2}{dt^2}\left[x(t)\right]$, each of whose sample functions is the temporal second derivative of the
corresponding sample x(t), are also stationary random processes. Furthermore, the following
relations hold true:

$R_{\dot{x}}(\tau) = -R_x''(\tau) = -\frac{d^2}{d\tau^2}\left[R_x(\tau)\right]$  (6.9)

$R_{\ddot{x}}(\tau) = R_x''''(\tau) = \frac{d^4}{d\tau^4}\left[R_x(\tau)\right]$  (6.10)
Using Equation 6.3 in 6.9 and 6.10, for a zero mean process it follows:

$\sigma_{\dot{x}}^2 = \int_{-\infty}^{+\infty} S_{\dot{x}}(\omega)\, d\omega = \int_{-\infty}^{+\infty} \omega^2 S_x(\omega)\, d\omega = -R_x''(0) \ge 0$

$\sigma_{\ddot{x}}^2 = \int_{-\infty}^{+\infty} S_{\ddot{x}}(\omega)\, d\omega = \int_{-\infty}^{+\infty} \omega^4 S_x(\omega)\, d\omega = R_x''''(0) \ge 0$  (6.11)

6.3. Frequency content indicators
The frequency content of the input random process is the most important feature for the
understanding of the output structural response. The maximum values of the structural response
occur when the structure frequency and the major part of the frequency content of the input
process fall in the same frequency band.
The frequency content can be described by the power spectral density function (PSD),
obtained from stochastic modelling of the random process.
The stochastic measures of frequency content are related to the power spectral density
function of a stationary segment of the random process. The frequency content indicators are:
i. the dimensionless indicators q and $\varepsilon$;
ii. the Kennedy-Shinozuka indicators $f_{10}$, $f_{50}$ and $f_{90}$, which are fractile frequencies
below which 10%, 50% and 90% of the total cumulative power of the PSD occur;
iii. the frequencies $f_1$, $f_2$ and $f_3$ corresponding to the highest 1, 2, 3 peaks of the PSD.
To define the frequency content indicators, one has to introduce first the spectral moment of
order i:

$\lambda_i = \int_0^{+\infty} \omega^i\, G_x(\omega)\, d\omega$  (6.12)
It follows from relations (6.4), (6.11) and (6.12) that:

$\lambda_0 = \int_0^{+\infty} G_x(\omega)\, d\omega = \sigma_x^2$  (6.13)

$\lambda_2 = \int_0^{+\infty} \omega^2 G_x(\omega)\, d\omega = \sigma_{\dot{x}}^2$  (6.14)

$\lambda_4 = \int_0^{+\infty} \omega^4 G_x(\omega)\, d\omega = \sigma_{\ddot{x}}^2$  (6.15)
The spectral moments are computed using the unilateral PSD; otherwise the odd-order spectral
moments would be equal to zero if the bilateral PSD were used in the calculations.
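A numerical sketch of the spectral moments (Equation 6.12) for a unilateral PSD that is constant $G_0$ on $[\omega_1, \omega_2]$, together with the indicators $\varepsilon$ and q introduced below (Equations 6.16 and 6.17). With $\omega_1 \approx 0$ the process is nearly band-limited white noise, for which $q \approx 1/2$ and $\varepsilon \approx 2/3$ are expected. All parameter values are illustrative:

```python
import math

# Spectral moments (Eq. 6.12) of a constant unilateral PSD G0 on [w1, w2],
# integrated by the midpoint rule, and the indicators eps (Eq. 6.16) and
# q (Eq. 6.17). Parameter values are illustrative.

def lam(i, G0, w1, w2, n=50000):
    """Spectral moment of order i for a constant unilateral PSD on [w1, w2]."""
    dw = (w2 - w1) / n
    return sum(G0 * (w1 + (k + 0.5) * dw) ** i * dw for k in range(n))

G0, w1, w2 = 1.0, 1e-6, 10.0          # nearly white noise within [0, 10]
l0, l1, l2, l4 = (lam(i, G0, w1, w2) for i in (0, 1, 2, 4))

q = math.sqrt(1.0 - l1 ** 2 / (l0 * l2))
eps = math.sqrt(1.0 - l2 ** 2 / (l0 * l4))
print(round(q, 3), round(eps, 3))     # close to 1/2 and 2/3
```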
The Cartwright & Longuet-Higgins indicator is:

$\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}}$  (6.16)
Wide frequency band processes have $\varepsilon$ values close to 2/3 and smaller than 0.85. Narrow
frequency band seismic processes of long predominant period (i.e. the superposition of a single
harmonic process at a low predominant frequency, $f_p$, and a wide band process) are
characterized by $\varepsilon$ values greater than 0.95. The processes with $\varepsilon$ values in between 0.85 and
0.95 have an intermediate frequency band.
The relation for computing the q indicator is:

$q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}}$  (6.17)
The Kennedy-Shinozuka indicators $f_{10}$, $f_{50}$, $f_{90}$ are defined as:

$\int_0^{f_{10}} g_x(f)\, df = 0.1$  (6.18)

$\int_0^{f_{50}} g_x(f)\, df = 0.5$  (6.19)

$\int_0^{f_{90}} g_x(f)\, df = 0.9$  (6.20)

The difference $f_{90} - f_{10}$ gives an indication of the bandwidth of the process.
The physical meaning of the above indicators $f_{10}$, $f_{50}$ and $f_{90}$ is that the area enclosed by the
normalized unilateral spectral density situated to the left of $f_{10}$, $f_{50}$ and $f_{90}$ is equal to 10%,
50% and 90% of the total area, respectively.
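The fractile frequencies of Equations 6.18 through 6.20 can be computed from the cumulative area of the normalized unilateral PSD. The PSD below is constant on [1 Hz, 11 Hz], an assumed example already normalized to unit area, for which $f_{10} = 2$ Hz, $f_{50} = 6$ Hz and $f_{90} = 10$ Hz can be checked by hand:

```python
# Kennedy-Shinozuka fractiles (Eqs. 6.18-6.20) from the cumulative area of a
# normalized unilateral PSD; the uniform g(f) below is an illustrative example.

def fractile(g, f_lo, f_hi, p, n=100000):
    """Smallest f with integral_0^f g df >= p, by cumulative midpoint sums."""
    df = (f_hi - f_lo) / n
    acc = 0.0
    for k in range(n):
        f = f_lo + (k + 0.5) * df
        acc += g(f) * df
        if acc >= p:
            return f
    return f_hi

g = lambda f: 0.1 if 1.0 <= f <= 11.0 else 0.0   # normalized: total area is 1

f10 = fractile(g, 0.0, 12.0, 0.10)
f50 = fractile(g, 0.0, 12.0, 0.50)
f90 = fractile(g, 0.0, 12.0, 0.90)
print(round(f10, 2), round(f50, 2), round(f90, 2))  # bandwidth measure: f90 - f10
```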

Figure 6.3. Kennedy-Shinozuka indicators

6.4. Wide-band and narrow-band random process
The PSD of a stationary random process constitutes a partial description of the process. Here
this description is examined for two extreme cases: a wide-band spectrum and a narrow-band
spectrum.

6.4.1. Wide-band processes. White noise
A wide-band process is a stationary random process whose PSD has significant values over a
band or range of frequencies which is of roughly the same order of magnitude as the center
frequency of the band. A wide range of frequencies appears in representative sample
functions of such a process. An example of a wide-band spectrum is displayed in Figure 6.4.

Figure 6.4. Example of wide band spectrum

In analytical investigations a common idealization for the spectrum of a wide-band process is
the assumption of a uniform PSD, $S_0$, as shown in Figure 6.5. A process with such a spectrum
is called white noise, in analogy with white light, which spans the visible spectrum more or less
uniformly. Ideal white noise is supposed to have a uniform density over all frequencies. This
is a physically unrealizable concept, since the mean square value (equal to the variance for
zero-mean processes) of such a process would be infinite, because there is an infinite area
under the spectrum. Nevertheless, the ideal white noise model can sometimes be used to
provide physically meaningful results in a simple manner.

Figure 6.5. Theoretical white noise ($-\infty < \omega < +\infty$), for which $\sigma_x^2 \to \infty$

The band-limited white noise spectrum shown in Figure 6.6 is a close approximation of many
physically realizable random processes.

Figure 6.6. Band-limited white noise, $\sigma_x^2 = 2 S_0 (\omega_2 - \omega_1)$

For band-limited white noise the autocorrelation function is (Figure 6.7):

$R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) e^{i\omega\tau}\, d\omega = 2\int_0^{+\infty} S_x(\omega) \cos \omega\tau\, d\omega = 2 S_0 \int_{\omega_1}^{\omega_2} \cos \omega\tau\, d\omega = \frac{2 S_0}{\tau}\left(\sin \omega_2 \tau - \sin \omega_1 \tau\right)$  (6.21)
For a wide band process the autocorrelation function vanishes quickly (within about two cycles);
there is a finite variance $\sigma_x^2 = 2 S_0 (\omega_2 - \omega_1)$ and a nonzero correlation with the past and the
future, at least for short intervals.
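The closed form of Equation 6.21 can be checked against direct numerical integration of the band-limited PSD (parameter values illustrative):

```python
import math

# Check of Eq. 6.21: closed-form autocorrelation of band-limited white noise
# versus direct numerical integration of the PSD. Parameters are illustrative.

S0, w1, w2 = 0.5, 2.0, 6.0

def R_closed(tau):
    """Eq. 6.21; at tau = 0 it reduces to the variance 2*S0*(w2 - w1)."""
    if tau == 0.0:
        return 2.0 * S0 * (w2 - w1)
    return 2.0 * S0 * (math.sin(w2 * tau) - math.sin(w1 * tau)) / tau

def R_numeric(tau, n=20000):
    """Midpoint-rule evaluation of 2 * integral_{w1}^{w2} S0 cos(w*tau) dw."""
    dw = (w2 - w1) / n
    return 2.0 * sum(S0 * math.cos((w1 + (k + 0.5) * dw) * tau) * dw
                     for k in range(n))

for tau in (0.3, 1.0, 2.5):
    print(abs(R_closed(tau) - R_numeric(tau)) < 1e-4)
```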

Figure 6.7. Autocorrelation function corresponding to the band-limited white noise spectra

6.4.2. Narrow band processes
A narrow-band process is a stationary random process whose PSD has significant values only
in a band or range of frequencies whose width is small compared with the magnitude of the
center frequency of the band, Figure 6.8. Only a narrow range of frequencies appears in the
representative samples of such a process. Narrow-band processes are typically encountered as
response variables in strongly resonant vibratory systems when the excitation variables are
wide-band processes.


Figure 6.8. Narrow band spectrum

If it can be assumed that the process is normal, then it is possible to compute the average or
expected frequency of the cycles. When a stationary normal random process with zero mean
has a narrow-band spectrum, the statistical average frequency or expected frequency is $\omega_0$,
where:
( )
$\omega_0^2 = \frac{\int_{-\infty}^{+\infty} \omega^2 S_x(\omega)\, d\omega}{\int_{-\infty}^{+\infty} S_x(\omega)\, d\omega} = -\frac{R_x''(0)}{R_x(0)} = \frac{\sigma_{\dot{x}}^2}{\sigma_x^2}$  (6.22)
This result is simply stated and has a simple interpretation; i.e., $\omega_0^2$ is simply a weighted
average of $\omega^2$ in which the PSD is the weighting function. The establishment of this result, by
S. O. Rice, represented a major advance in the theory of random processes.
Structural Reliability and Risk Analysis Lecture Notes

62
If one assumes that x(t) is a stationary normal zero-mean process, one finds the expected
number of crossings of the level x = a with positive slope:

$\nu_a^+ = \frac{1}{2\pi} \frac{\sigma_{\dot{x}}}{\sigma_x} \exp\left(-\frac{a^2}{2\sigma_x^2}\right)$  (6.23)
If one sets a = 0 in relation (6.23), the expected frequency, in crossings per unit time, of zero
crossings with positive slope is obtained. Finally, if the process is narrow-band, the probability is
very high that each such crossing implies a complete cycle and thus the expected frequency,
in cycles per unit time, is $\nu_0^+$. It should be emphasized that this result is restricted to normal
processes with zero mean.
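Equation 6.23 can be evaluated from the spectral moments of Equations 6.13 and 6.14; for a = 0 the crossing rate reduces to the expected frequency $\omega_0 / 2\pi$ of Equation 6.22. The band-limited unilateral PSD below is an assumed example:

```python
import math

# Rice's crossing rate (Eq. 6.23) from the spectral moments of a constant
# unilateral PSD G0 on [w1, w2]; parameter values are illustrative.

G0, w1, w2 = 1.0, 8.0, 12.0                  # relatively narrow band
l0 = G0 * (w2 - w1)                          # lambda_0 = sigma_x^2  (Eq. 6.13)
l2 = G0 * (w2 ** 3 - w1 ** 3) / 3.0          # lambda_2 = sigma_xdot^2 (Eq. 6.14)
sx, sxd = math.sqrt(l0), math.sqrt(l2)

def nu_plus(a):
    """Expected number of upcrossings of level a per unit time (Eq. 6.23)."""
    return (sxd / sx) / (2.0 * math.pi) * math.exp(-a * a / (2.0 * sx * sx))

w0 = math.sqrt(l2 / l0)                      # expected circular frequency, Eq. 6.22
print(abs(nu_plus(0.0) - w0 / (2.0 * math.pi)) < 1e-12)
print(nu_plus(2.0) < nu_plus(1.0) < nu_plus(0.0))  # higher levels crossed less often
```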
For band-limited white noise, Figure 6.9, the autocorrelation function is:

$R_x(\tau) = \int_{-\infty}^{+\infty} S_x(\omega) e^{i\omega\tau}\, d\omega = 2\int_0^{+\infty} S_x(\omega) \cos \omega\tau\, d\omega = 2 S_0 \int_{\omega_1}^{\omega_1 + \Delta\omega} \cos \omega\tau\, d\omega = \frac{2 S_0}{\tau}\left[\sin(\omega_1 + \Delta\omega)\tau - \sin \omega_1 \tau\right]$  (6.24)

Figure 6.9. Band-limited white noise, $\sigma_x^2 = 2 S_0 \Delta\omega$

For narrow-band processes, the autocorrelation function is periodic and vanishes slowly,
Figure 6.10.
Figure 6.10. Autocorrelation function of narrow-band processes
6.5. Note on the values of frequency content indicators
If one considers a random process with constant power spectral density $G_0$ within the range
$[\omega_1, \omega_2]$, the spectral moment of order i is:
$\lambda_i = \int_{\omega_1}^{\omega_2} \omega^i G_0\, d\omega = G_0 \frac{\omega_2^{i+1} - \omega_1^{i+1}}{i+1} = G_0 \frac{\omega_2^{i+1}\left(1 - \alpha^{i+1}\right)}{i+1}$  (6.25)

where $\alpha = \omega_1 / \omega_2$.
Using relation (6.25) in the definition of the q indicator, one gets:

$q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}} = \sqrt{1 - \frac{3}{4} \cdot \frac{\left(1 - \alpha^2\right)^2}{\left(1 - \alpha\right)\left(1 - \alpha^3\right)}} = \sqrt{1 - \frac{3}{4} \cdot \frac{\left(1 + \alpha\right)^2}{1 + \alpha + \alpha^2}}$  (6.26)
One can notice that:
- for $\alpha \to 0$ (ideal white noise), $q \to 1/2$;
- for $\alpha = 1$ (pure sinusoid - harmonic), $q = 0$;
- since the q indicator is a measure of the scattering of the PSD values with respect to the central frequency, the q value increases as the frequency band of the PSD increases;
- for narrow band random processes, the q values are in the range [0, 0.25];
- for wide band random processes, the q values are in the range (0.25, 0.50].
The variation of q values with respect to $\alpha$ is represented in Figure 6.11.
Using relation (6.25) in the definition of the $\varepsilon$ indicator, one gets:

$\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}} = \sqrt{1 - \frac{5}{9} \cdot \frac{\left(1 - \alpha^3\right)^2}{\left(1 - \alpha\right)\left(1 - \alpha^5\right)}} = \sqrt{1 - \frac{5}{9} \cdot \frac{\left(1 + \alpha + \alpha^2\right)^2}{1 + \alpha + \alpha^2 + \alpha^3 + \alpha^4}}$  (6.27)
One can notice that:
- for $\alpha \to 0$ (ideal white noise), $\varepsilon \to 2/3$;
- for $\alpha = 1$ (pure sinusoid - harmonic), $\varepsilon = 0$;
- for wide band random signals, the $\varepsilon$ values are in the vicinity of 2/3.
The variation of $\varepsilon$ values with respect to $\alpha$ is represented in Figure 6.12.
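The closed forms (6.26) and (6.27) can be checked against the two limit cases listed above, $\alpha \to 0$ (band-limited white noise) and $\alpha = 1$ (pure harmonic):

```python
import math

# Evaluation of the closed forms (6.26) and (6.27) at the two limit cases:
# alpha -> 0 (white-noise limit) and alpha = 1 (pure harmonic).

def q_of(alpha):
    """Eq. 6.26."""
    return math.sqrt(1.0 - (3.0 * (1.0 + alpha) ** 2) / (4.0 * (1.0 + alpha + alpha ** 2)))

def eps_of(alpha):
    """Eq. 6.27."""
    num = (1.0 + alpha + alpha ** 2) ** 2
    den = 1.0 + alpha + alpha ** 2 + alpha ** 3 + alpha ** 4
    return math.sqrt(1.0 - (5.0 * num) / (9.0 * den))

print(round(q_of(0.0), 3), round(eps_of(0.0), 3))   # white-noise limit: 0.5, 0.667
print(round(q_of(1.0), 3), round(eps_of(1.0), 3))   # harmonic limit: 0.0, 0.0
```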

Figure 6.11. Variation of q values with respect to $\alpha = \omega_1/\omega_2$ (q from 0.5 at $\alpha = 0$ down to 0 at $\alpha = 1$)
Figure 6.12. Variation of $\varepsilon$ values with respect to $\alpha = \omega_1/\omega_2$ ($\varepsilon$ from about 2/3 at $\alpha = 0$ down to 0 at $\alpha = 1$)

Let us consider a random process expressed as a sum of two random signals, one with a PSD
characterized by a constant value $G_A$ within a narrow frequency band in the range $[\omega_1, \omega_2]$
and one with a PSD characterized by a constant value $G_B$ within a wide frequency band in the
range $[\omega_2, \omega_3]$, Figure 6.13.

Figure 6.13. PSD of a random process - situation 1

The spectral moment of order i is:

$\lambda_i = \int_{\omega_1}^{\omega_2} \omega^i G_A\, d\omega + \int_{\omega_2}^{\omega_3} \omega^i G_B\, d\omega = G_A \frac{\omega_2^{i+1} - \omega_1^{i+1}}{i+1} + G_B \frac{\omega_3^{i+1} - \omega_2^{i+1}}{i+1} = \lambda_{iA} + \lambda_{iB}$  (6.28)

where $\lambda_{iA} = G_A \frac{\omega_2^{i+1}\left(1 - \alpha_A^{i+1}\right)}{i+1}$, $\lambda_{iB} = G_B \frac{\omega_3^{i+1}\left(1 - \alpha_B^{i+1}\right)}{i+1}$, $\alpha_A = \omega_1 / \omega_2$ and $\alpha_B = \omega_2 / \omega_3$.
Using relation (6.28) in the definition of the q indicator, one gets:

$q = \sqrt{1 - \frac{\lambda_1^2}{\lambda_0 \lambda_2}} = \sqrt{1 - \frac{\left(\lambda_{1A} + \lambda_{1B}\right)^2}{\left(\lambda_{0A} + \lambda_{0B}\right)\left(\lambda_{2A} + \lambda_{2B}\right)}}$  (6.29)
Using relation (6.28) in the definition of the $\varepsilon$ indicator, one gets:

$\varepsilon = \sqrt{1 - \frac{\lambda_2^2}{\lambda_0 \lambda_4}} = \sqrt{1 - \frac{\left(\lambda_{2A} + \lambda_{2B}\right)^2}{\left(\lambda_{0A} + \lambda_{0B}\right)\left(\lambda_{4A} + \lambda_{4B}\right)}}$  (6.30)
In the following developments, two situations are considered:
- Situation 1, of Figure 6.13, where the narrow band random signal is of high frequency;
- Situation 2, of Figure 6.14, where the narrow band random signal is of low frequency.

Figure 6.14. PSD of a random process - situation 2
In order to investigate the influence of the frequency band on the values of the frequency
content indicators, several cases for the $\alpha_A$ and $\alpha_B$ values are considered. For situation 1, the
following case is considered: $\alpha_A = 0.05$ and $\alpha_B = 0.95$. For situation 2, the following case is
considered: $\alpha_A = 0.95$ and $\alpha_B = 0.05$. The values of the frequency content indicators are
presented in the following. The results are represented as a function of the ratio $G_A$ to $G_B$.
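The parametric study can be sketched directly from Equation 6.28: the spectral moments of the two bands are summed and q follows from the combined moments. The frequency values below are illustrative; $\alpha_A = 0.05$ and $\alpha_B = 0.95$ reproduce situation 1:

```python
import math

# Two-band parametric study: spectral moments from Eq. 6.28 (band A plus
# band B) and q from the combined moments (Eq. 6.29). Frequencies are
# illustrative; alpha_A = 0.05 and alpha_B = 0.95 correspond to situation 1.

def band_moment(i, G, w_lo, w_hi):
    """Spectral moment of order i of a constant PSD G on [w_lo, w_hi]."""
    return G * (w_hi ** (i + 1) - w_lo ** (i + 1)) / (i + 1)

def q_two_bands(GA, GB, w1, w2, w3):
    l = [band_moment(i, GA, w1, w2) + band_moment(i, GB, w2, w3) for i in (0, 1, 2)]
    return math.sqrt(1.0 - l[1] ** 2 / (l[0] * l[2]))

w2 = 10.0
w1, w3 = 0.05 * w2, w2 / 0.95        # alpha_A = 0.05, alpha_B = 0.95
for ratio in (1e-4, 1e-2, 1.0):      # GA/GB spanning the left part of Fig. 6.15 a
    print(round(q_two_bands(ratio, 1.0, w1, w2, w3), 3))
```

For small $G_A/G_B$ the narrow high-frequency band dominates and q is close to 0, as for a pure sinusoid; as the ratio grows, q rises toward the wide-band value, in agreement with the observations below.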

[Figures: q versus the ratio $G_A/G_B$; situation 1 ($\alpha_A = 0.05$, $\alpha_B = 0.95$): $G_A/G_B$ from 1.E-04 to 1.E+00, q from 0.00 to 0.50; situation 2 ($\alpha_A = 0.95$, $\alpha_B = 0.05$): $G_A/G_B$ from 1.E+00 to 1.E+04, q from 0 to 1.]
Fig. 6.15 a. Values of q for situation 1 Fig. 6.15 b. Values of q for situation 2

From Figure 6.15 one can notice that:
- for situation 1:
- for small $G_A/G_B$ values, the wide band random signal has little influence, the value of q approaching 0, as for a pure sinusoid. As expected, the approach towards zero is more significant as the narrow band random signal has a narrower PSD;
- for large $G_A/G_B$ values, the value of q approaches 0.50, the value corresponding to a white noise random signal. The value only approaches 0.50, but never reaches it, since this is a band limited white noise random signal;
- a superposition of a wide band random signal with a high frequency narrow band random signal with strong contrast in terms of PSD ordinates ($G_A/G_B \ll 1$) will produce a random signal very close to a pure sinusoid. In other words, only the narrow band random signal will influence the value of q;
- for situation 2:
- for small $G_A/G_B$ values, the value of q approaches 0.50, the value corresponding to a white noise random signal. The value only approaches 0.50, but never reaches it, since this is a band limited white noise random signal;
- for large $G_A/G_B$ values, the value of q increases above 0.50, up to a $G_A$ to $G_B$ ratio of 100 for case 1, up to 1000 for case 2 and up to 10000 for case 3. The maximum q value is 0.60 for case 1, 0.75 for case 2 and 0.85 for case 3;
- a superposition of a wide band random signal with a low frequency narrow band random signal with strong contrast in terms of PSD ordinates ($G_A/G_B \gg 1$) will produce a random signal with q values larger than 0.50. The increase in q values is steadier as the PSD of the narrow band random signal is narrower.

Fig. 6.16 a. Values of ε for situation 1 (ωA = 0.05, ωB = 0.95); Fig. 6.16 b. Values of ε for situation 2 (ωA = 0.95, ωB = 0.05); ε is plotted versus the ratio GA/GB

From Figure 6.16 one can notice that:
- for situation 1:
- for small GA/GB values, the wide band random signal has little influence, the value of ε approaching 0, as for a pure sinusoid. As expected, the approach towards zero is more pronounced as the narrow band random signal has a narrower PSD;
- for large GA/GB values, the value of ε approaches 2/3, the value corresponding to a white noise random signal. The value only approaches 2/3 and never reaches it, since the signal is a band limited white noise random signal;
- a superposition of a wide band random signal with a high frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB << 1) will produce a random signal very close to a pure sinusoid. In other words, only the narrow band random signal will influence the value of ε;
- for situation 2:
- for small GA/GB values, the value of ε approaches 2/3, the value corresponding to a white noise random signal. The value only approaches 2/3 and never reaches it, since the signal is a band limited white noise random signal;
- for large GA/GB values, the value of ε increases above 2/3, up to a GA/GB ratio of 100 for case 1; for cases 2 and 3 the value is steadily increasing. The maximum ε value is 0.90 for case 1, 0.95 for case 2 and 0.99 for case 3;
- a superposition of a wide band random signal with a low frequency narrow band random signal with strong contrast in terms of PSD ordinates (GA/GB >> 1) will produce a random signal with ε values larger than 2/3. The increase in ε values is steadier and larger as the PSD of the narrow band random signal is narrower.
Wide frequency band processes have ε values close to 2/3 and smaller than 0.85. Narrow frequency band seismic processes of long predominant period (i.e. the superposition of a single harmonic process at a low predominant frequency, fp, and a wide band process) are characterized by ε values greater than 0.95.
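The two indicators can be evaluated numerically for any discretized PSD through the spectral moments λi = ∫ωⁱG(ω)dω. A minimal sketch (Python/NumPy assumed; the band values are illustrative, not taken from the notes) that also reproduces the white-noise limits q → 0.5 and ε → 2/3 quoted above:

```python
import numpy as np

def spectral_moment(omega, G, i):
    """i-th spectral moment of a one-sided PSD, lambda_i = integral of w^i G(w) dw."""
    return np.trapz(omega**i * G, omega)

def q_indicator(omega, G):
    l0, l1, l2 = (spectral_moment(omega, G, i) for i in (0, 1, 2))
    return np.sqrt(1.0 - l1**2 / (l0 * l2))  # relation (6.29)

def eps_indicator(omega, G):
    l0, l2, l4 = (spectral_moment(omega, G, i) for i in (0, 2, 4))
    return np.sqrt(1.0 - l2**2 / (l0 * l4))  # relation (6.30)

# Band-limited white noise over [0, 20] rad/s (illustrative): q -> 1/2, eps -> 2/3
w = np.linspace(0.0, 20.0, 20001)
G = np.ones_like(w)
print(round(q_indicator(w, G), 3))    # -> 0.5
print(round(eps_indicator(w, G), 3))  # -> 0.667
```

Any measured PSD sampled on a frequency grid can be passed in the same way; narrowing the band drives both indicators towards 0.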

7. DYNAMIC RESPONSE OF SDOF SYSTEMS TO RANDOM PROCESSES

7.1. Introduction
When attention is focused on the response of structural systems to random vibration it is
generally possible to identify an excitation or input and a response or output. The excitation
may be a motion (i.e., acceleration, velocity, or displacement), or a force history. The
response may be a desired motion history or a desired stress history. When the excitation is a
random process, the response quantity will also be a random process. The central problem of
this chapter is the determination of information regarding the response random process from
corresponding information regarding the excitation process.
The excitation time-history is x(t) and the response time-history is y(t). In preparation for the
case where input and output are both random vibrations we review the case where they are
individual particular functions. If x(t) is a specified deterministic time history then it is
possible to obtain a specific answer for y(t) by integrating the differential equations of motion,
subject to initial conditions.
It is a property of linear time-invariant systems that when the excitation is steady state simple harmonic motion (without beginning or end) then the response is also steady state simple harmonic motion at the same frequency. The amplitude and phase of the response generally depend on the frequency. A concise method of describing the frequency dependence of the amplitude and phase is to give the complex frequency response or transfer function H(ω). This has the property that when the excitation is the real part of e^{iωt} then the response is the real part of H(ω)e^{iωt}. The complex transfer function H(ω) is obtained analytically by substituting

$$x=e^{i\omega t}\qquad(7.1)$$

$$y=H(\omega)\,e^{i\omega t}\qquad(7.2)$$

in the equations of motion, cancelling the e^{iωt} terms, and solving algebraically for H(ω). Since H(ω) is essentially an output measure for unit input, its dimensions will be the dimensions of the ratio y/x.
Knowledge of the transfer function H(ω) for all frequencies contains all the information necessary to obtain the response y(t) to an arbitrary known excitation x(t). The basis for this remark is the principle of superposition, which applies to linear systems. The superposition here is performed in the frequency domain using Fourier's method. When x(t) is periodic it can be decomposed into sinusoids forming a Fourier series. The response to each sinusoid, separately, is provided by H(ω) and these responses form a new Fourier series representing the response y(t). When x(t) is not periodic but has a Fourier transform

$$X(\omega)=\int_{-\infty}^{+\infty}x(t)\,e^{-i\omega t}\,dt\qquad(7.3)$$

then an analogous superposition is valid. For each frequency component separately, Equations (7.1) and (7.2) yield:

$$Y(\omega)=H(\omega)\,X(\omega)\qquad(7.4)$$

as the Fourier transform of the response y(t). The response itself is given by the inverse Fourier transform representation

$$y(t)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}Y(\omega)\,e^{i\omega t}\,d\omega\qquad(7.5)$$

7.2. Single degree of freedom (SDOF) systems
To illustrate the previous ideas one considers the case of seismic action on a SDOF system;
the system is shown in Figure 7.1 where a single mass is attached to a moving support by
means of a linear rod in parallel with a linear dashpot. One supposes that the motion of the
support is known in terms of its acceleration a(t) and that one is interested in the relative
displacement y(t) between the mass and the support.

Figure 7.1. SDOF system (mass attached to a moving support by a linear rod of stiffness k in parallel with a linear dashpot c; support motion a(t), v(t), d(t) - action; relative displacement y(t) - response) and excitation-response diagram associated with this system (input a(t) with Fourier transform A(iω); system h(t), H(iω); output y(t) with Fourier transform Y(iω))

The law of motion provides the following differential equation

$$F_{inertia}(t)+F_{damping}(t)+F_{restoring}(t)=0\qquad(7.6)$$

which simplifies to

$$\ddot{y}(t)+2\xi\omega_0\,\dot{y}(t)+\omega_0^2\,y(t)=a(t)\qquad(7.7)$$

where:
ξ - damping ratio
ω0 - natural circular frequency of the SDOF system.
Equation 7.7 can be solved in the time domain or in the frequency domain.
7.2.1. Time domain
The solution of Equation 7.7 is given by Duhamel's integral

$$y(t)=\int_0^t a(\tau)\,h(t-\tau)\,d\tau\qquad(7.8)$$

where

$$h(t)=\frac{1}{\omega_d}\,e^{-\xi\omega_0 t}\sin\omega_d t\qquad(7.9)$$

is the response function to a unit impulse and the damped circular frequency is

$$\omega_d=\omega_0\sqrt{1-\xi^2}\qquad(7.10).$$
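Duhamel's integral (7.8) can be evaluated as a discrete convolution of the sampled excitation with the unit impulse response (7.9). The sketch below (NumPy; the period and damping values are illustrative assumptions, not values from the notes) checks the result against the static limit y = a/ω0² for a constant acceleration:

```python
import numpy as np

# Hypothetical SDOF parameters: T0 = 0.5 s, xi = 5% (illustrative only)
omega0 = 2 * np.pi / 0.5
xi = 0.05
omega_d = omega0 * np.sqrt(1 - xi**2)   # relation (7.10)

dt = 0.001
t = np.arange(0.0, 10.0, dt)
h = np.exp(-xi * omega0 * t) * np.sin(omega_d * t) / omega_d  # relation (7.9)

a = np.ones_like(t)  # constant unit acceleration, a simple check case

# Discrete Duhamel integral y(t) = integral of a(tau) h(t - tau) d tau, relation (7.8)
y = np.convolve(a, h)[: len(t)] * dt

# After the transient dies out, y -> a / omega0**2 for constant a
print(round(y[-1] * omega0**2, 2))  # -> 1.0
```

The same convolution works for any recorded accelerogram sampled at step dt.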

7.2.2. Frequency domain
The solution of the problem in the frequency domain is given by adapting Equation 7.4

$$Y(\omega)=H(\omega)\,A(\omega)\qquad(7.11)$$

where:
A(ω) - Fourier transform of the excitation (input acceleration) given by

$$A(\omega)=\int_{-\infty}^{+\infty}a(t)\,e^{-i\omega t}\,dt\qquad(7.12)$$

H(ω) - complex transfer function of the system
Y(ω) - Fourier transform of the response (considered as relative displacement).
If the excitation is the real part of the complex function e^{iωt}, then the response is the real part of the complex function H(ω)e^{iωt}. Consequently, the transfer function H(ω) is obtained by making the substitutions a(t) = e^{iωt} and y(t) = H(ω)e^{iωt} in Equation 7.7, cancelling the e^{iωt} terms, and solving algebraically for H(ω):

$$-\omega^2 H(\omega)\,e^{i\omega t}+2i\xi\omega_0\omega\,H(\omega)\,e^{i\omega t}+\omega_0^2\,H(\omega)\,e^{i\omega t}=e^{i\omega t}$$

$$-\omega^2 H(\omega)+2i\xi\omega_0\omega\,H(\omega)+\omega_0^2\,H(\omega)=1$$

$$H(\omega)\left[\omega_0^2-\omega^2+2i\xi\omega_0\omega\right]=1$$

$$H(\omega)=\frac{1}{\omega_0^2\left[1-\left(\dfrac{\omega}{\omega_0}\right)^2+2i\xi\dfrac{\omega}{\omega_0}\right]}\qquad(7.13)$$
Note that here H(ω) has the dimensions of displacement per unit acceleration.
The square of the modulus of the complex transfer function is given by

$$\left|H(\omega)\right|^2=\frac{1}{\omega_0^4\left\{\left[1-\left(\dfrac{\omega}{\omega_0}\right)^2\right]^2+4\xi^2\left(\dfrac{\omega}{\omega_0}\right)^2\right\}}\qquad(7.14)$$

The modulus of the non-dimensional complex transfer function is given by, Figure 7.2

$$\left|H\!\left(\frac{\omega}{\omega_0}\right)\right|=\frac{1}{\sqrt{\left[1-\left(\dfrac{\omega}{\omega_0}\right)^2\right]^2+4\xi^2\left(\dfrac{\omega}{\omega_0}\right)^2}}\qquad(7.15)$$

which is related to the modulus of the complex transfer function by

$$\left|H(i\omega)\right|=\frac{1}{\omega_0^2}\left|H\!\left(i\frac{\omega}{\omega_0}\right)\right|\qquad(7.16)$$

The maximum value of the response modulus is reached for ω = ω0 and is given by, Figure 7.2

$$\max\left|H(\omega)\right|=\frac{1}{\omega_0^2}\cdot\frac{1}{2\xi}=\frac{m}{k}\cdot\frac{1}{2\xi}\qquad(7.17)$$

where m is the mass of the SDOF system and k is the stiffness of the SDOF system (k = mω0²).
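Relations (7.14)-(7.17) are easy to verify numerically. The sketch below (illustrative parameter values, not taken from the notes) checks that the non-dimensional modulus at resonance equals 1/(2ξ):

```python
import numpy as np

omega0, xi = 12.57, 0.05  # illustrative values (also used later for the INCERC example)

def H2(omega):
    """Square of the modulus of the transfer function, relation (7.14)."""
    r = omega / omega0
    return 1.0 / (omega0**4 * ((1 - r**2) ** 2 + 4 * xi**2 * r**2))

# Non-dimensional modulus at resonance equals 1/(2*xi), cf. relations (7.15)-(7.17)
print(round(omega0**2 * np.sqrt(H2(omega0)), 1))  # -> 10.0
```

With ξ = 0.05, the resonant amplification is 1/(2·0.05) = 10, as read from Figure 7.2.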


Figure 7.2. Modulus of the complex transfer function and of the non-dimensional transfer function (peak value 1/(2ξ) reached at ω/ω0 = 1)
Once the solution of Equation 7.11 is computed, one can go back into the time domain by using the inverse Fourier transform

$$y(t)=\frac{1}{2\pi}\int_{-\infty}^{+\infty}Y(i\omega)\,e^{i\omega t}\,d\omega\qquad(7.18).$$
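The frequency-domain path of relations (7.11), (7.13) and (7.18) maps directly onto the FFT. A sketch follows (NumPy assumed; the input burst and the system parameters are illustrative assumptions, not data from the notes):

```python
import numpy as np

# Frequency-domain solution sketch: Y(w) = H(w) A(w), then inverse FFT.
omega0, xi = 12.57, 0.05
dt = 0.005
t = np.arange(0.0, 40.96, dt)          # 8192 samples
a = np.sin(3.0 * t) * (t < 10.0)       # hypothetical excitation a(t): a sine burst

A = np.fft.rfft(a)                     # discrete analogue of relation (7.12)
omega = 2 * np.pi * np.fft.rfftfreq(len(t), dt)
H = 1.0 / (omega0**2 - omega**2 + 2j * xi * omega0 * omega)  # relation (7.13)
y = np.fft.irfft(A * H, n=len(t))      # relations (7.11) and (7.18)

print(len(y) == len(t))                # -> True
```

During the forced phase the response amplitude tends to |H(3)| times the input amplitude, which can be used as a quick check of the implementation.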

7.3. Excitation-response relations for stationary random processes
The main assumptions used in the following are, Figure 7.3:
o SDOF system
o stationary, ergodic, zero-mean, normal (Gaussian) action
o system with elastic behaviour
The previous section has dealt with the excitation-response or input-output problem for linear
time-invariant systems in terms of particular responses to particular excitations. In this section
the theory is extended to the case where the excitation is no longer an individual time history
but is an ensemble of possible time histories; i.e., a random process. When the excitation is a
stationary random process then the response is also a stationary random process. The
statistical properties of the response process can be deduced from knowledge of the system
and of the statistical properties of the excitation.

Figure 7.3. Excitation-response processes (stationary, ergodic, zero-mean, normal input a(t) and response y(t))

7.3.1. Mean value of the response
The relation between the mean of the output random process, m_y(t), or the expected value E[y(t)], and the mean of the input random process, E[x(t)], is:

$$E\left[y(t)\right]=E\left[x(t)\right]H(0)\qquad(7.19)$$

Note that E[y(t)] is actually independent of t.
In particular, if the input process has zero mean then so does the output process. If the input is a zero-mean acceleration process,

$$m_a(t)=E\left[a(t)\right]=const=0$$

the response process is also a zero-mean process:

$$m_y(t)=E\left[y(t)\right]=H(0)\,m_a(t)=\frac{1}{\omega_0^2}\,m_a(t)=0\qquad(7.20).$$

7.3.2. Input-output relation for spectral densities
The relation between the PSD of the excitation x(t) and the spectral density of the response y(t) is simply

$$S_y(\omega)=H(\omega)\,H^{*}(\omega)\,S_x(\omega)\qquad(7.21)$$

A slightly more compact form is achieved by noting that the product of H(ω) and its complex conjugate may be written as the square of the modulus of H(ω); i.e.,

$$S_y(\omega)=\left|H(\omega)\right|^2 S_x(\omega)\qquad(7.22).$$

The power spectral density of the response is equal to the square of the modulus of the transfer function multiplied by the power spectral density of the excitation. Note that this is an algebraic relation and only the modulus of the complex transfer function H(ω) is needed.

7.3.3. Mean square response
The mean square value E[y²] of the stationary response process y(t) can be obtained, when the spectral density S_y(ω) of the response is known, according to relation (7.22). Thus, if the input PSD S_x(ω) is known, the mean square value of the stationary response process y(t) is given by:

$$E\left[y^2\right]=\int_{-\infty}^{+\infty}S_y(\omega)\,d\omega=\int_{-\infty}^{+\infty}\left|H(\omega)\right|^2 S_x(\omega)\,d\omega\qquad(7.23)$$

In particular, if the output process has zero mean, the mean square value is equal to the variance of the stationary response process y(t) and relation (7.23) turns into:

$$\sigma_y^2=\int_{-\infty}^{+\infty}S_y(\omega)\,d\omega=\int_{-\infty}^{+\infty}\left|H(\omega)\right|^2 S_x(\omega)\,d\omega\qquad(7.24)$$

The procedure involved in relation (7.24) is depicted in Figure 7.4 for a stationary ergodic zero-mean acceleration input process.

Figure 7.4. PSD of the output process obtained from the PSD of the input process and the transfer function (ωp1 - the first predominant circular frequency of the input; the area under Sa(ω) equals σa² and the area under Sy(ω) equals σy²)

7.4. Response of a SDOF system to stationary random excitation
As an application of the general results of the preceding section one studies the simple vibratory system illustrated in Figure 7.1. The excitation for this system is the seismic acceleration a(t) at the foundation level and the response y(t) is the relative displacement between the foundation and the suspended mass. One shall consider two cases: first, when the excitation is white noise; and, second, when the vibratory system is lightly damped.

7.4.1. Response to band limited white noise
The excitation a(t) is taken to be a stationary random process with band limited white spectrum of uniform spectral density S0. The response y(t) will also be a random process. Because the excitation is stationary, the response process will also be stationary. If, in addition, the input is known to be ergodic then so also will be the output. Similarly, if it is known that the input is a normal or Gaussian process then one knows that the output will also be a normal process. Aside from these general considerations, the results of the previous sections permit us to make quantitative predictions of output statistical averages (which are independent of the ergodic or Gaussian nature of the processes).
Thus, let us suppose that the excitation has zero mean. Then according to relation (7.19) the
response also has zero mean.
The spectrum of the input is simply the constant S0. The spectrum of the output follows from Equation 7.22, Figure 7.5:

$$S_y(\omega)=\left|H(\omega)\right|^2 S_0=\frac{S_0}{\omega_0^4\left\{\left[1-\left(\dfrac{\omega}{\omega_0}\right)^2\right]^2+4\xi^2\left(\dfrac{\omega}{\omega_0}\right)^2\right\}}\qquad(7.25)$$
The mean square of the response (equal to the variance of the process in the case of zero-mean processes) can be deduced from the spectral density of the response using:

$$\sigma_y^2=\int_{-\omega_L}^{+\omega_L}\left|H(i\omega)\right|^2 S_0\,d\omega\cong\frac{\pi S_0}{2\xi\omega_0^3}\qquad(7.26)$$

In case the process is normal or Gaussian it is completely described statistically by its spectral density. The first-order probability distribution is characterized by the variance alone in this case where the mean is zero.
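Relation (7.26) can be checked by numerical integration of |H(ω)|²S0 over a wide band (a sketch with illustrative parameter values; the finite integration band plays the role of ±ωL):

```python
import numpy as np

omega0, xi, S0 = 12.57, 0.05, 1.0  # hypothetical white-noise level and system values

omega = np.linspace(-200.0, 200.0, 400_001)  # wide band-limited support
r = omega / omega0
H2 = 1.0 / (omega0**4 * ((1 - r**2) ** 2 + 4 * xi**2 * r**2))  # relation (7.14)

var_numeric = np.trapz(H2 * S0, omega)          # relation (7.24)
var_closed = np.pi * S0 / (2 * xi * omega0**3)  # relation (7.26)
print(round(var_numeric / var_closed, 3))       # -> 1.0
```

The agreement confirms that almost the entire response variance comes from the resonant spike of |H(ω)|² around ω0.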


Figure 7.5. Band limited white noise (Sa(ω) = S0 for -ωL ≤ ω ≤ ωL) and transfer function of the system

7.4.2. SDOF systems with low damping
For systems with low damping (ξ < 0.1), the transfer function of the system has a very sharp spike at ω = ω0 and the amplitudes of the function decrease rapidly moving away from ω0. At a distance of ξω0 from ω0 the amplitude of the transfer function is half the maximum amplitude. Because of this particular aspect of the transfer function of SDOF systems with low damping, the product under the integral in relation (7.24) takes significant values only in the interval ω0 ± ξω0, and one can assume that the PSD of the input excitation is constant in this interval and equal to Sa(ω0), Figure 7.6. Thus, relation (7.26), valid for band-limited white noise, is also applicable here and the variance of the response process is equal to

$$\sigma_y^2\cong\frac{\pi\,S_a(\omega_0)}{2\xi\omega_0^3}.$$


Figure 7.6. Computation of the response process variance for SDOF systems with low damping: the input PSD is taken as constant, Sa(ω) ≅ Sa(ω0), around the resonant peak; the response variance is split into a non-resonant part (neglected) and a resonant part, σy² = σ²NON-RESONANCE + σ²RESONANCE ≅ σ²RESONANCE

The previous analytical developments are illustrated below considering as input the PSD of the March 4, 1977 seismic ground motion recorded at INCERC station on the N-S direction, and two SDOF systems with damping ratio ξ = 0.05 and natural circular frequency ω0 of 12.57 rad/s and 4.19 rad/s, respectively. Figure 7.7 presents the input PSD, the transfer function of the SDOF system with ξ = 0.05 and ω0 = 12.57 rad/s and the response PSD; Figure 7.8 presents the input PSD, the transfer function of the SDOF system with ξ = 0.05 and ω0 = 4.19 rad/s and the response PSD.


Figure 7.7. PSD of March 4, 1977 INCERC N-S accelerogram (upper), transfer function of SDOF system with ξ = 0.05 and ω0 = 12.57 rad/s (middle) and PSD of response (lower)
Figure 7.8. PSD of March 4, 1977 INCERC N-S accelerogram (upper), transfer function of SDOF system with ξ = 0.05 and ω0 = 4.19 rad/s (middle) and PSD of response (lower)

Note: If the mean and the variance of the displacement response process are known, the mean and the variance of the velocity and acceleration response processes can be found using the relations:

$$\sigma_{\dot y}=\omega_0\,\sigma_y\qquad\qquad\sigma_{\ddot y}=\omega_0^2\,\sigma_y$$

$$m_{\dot y}=\omega_0\,m_y=0\qquad\qquad m_{\ddot y}=\omega_0^2\,m_y=0$$

$$\sigma_{\dot y}^2=\omega_0^2\,\sigma_y^2\cong\frac{\pi\,S_a(\omega_0)}{2\xi\omega_0}\qquad\qquad\sigma_{\ddot y}^2=\omega_0^4\,\sigma_y^2\cong\frac{\pi\,S_a(\omega_0)\,\omega_0}{2\xi}.$$

7.4.3. Distribution of the maximum (peak) response values
Let y(t) be a stationary ergodic normal process, Figure 7.9. The distribution of all values of the process is known to be Gaussian. One is interested in the distribution of the maximum response values.
Let νb⁺ be the expected frequency of crossing the level y = b (b is called barrier or threshold value) with positive slope (upcrossing). Basically, one counts how many times the process intersects the barrier in unit time with positive slope, y(t) > b and ẏ(t) > 0. The number of crossings over threshold b with positive slope in unit time, ν⁺(b), is:

$$\nu^{+}_{(b)}=\frac{1}{2\pi}\sqrt{\frac{\int_{-\infty}^{+\infty}\omega^{2}S_{y}(\omega)\,d\omega}{\int_{-\infty}^{+\infty}S_{y}(\omega)\,d\omega}}\,\exp\!\left(-\frac{b^{2}}{2\sigma_{y}^{2}}\right)=\frac{1}{2\pi}\,\frac{\sigma_{\dot y}}{\sigma_{y}}\,\exp\!\left(-\frac{b^{2}}{2\sigma_{y}^{2}}\right)\qquad(7.27)$$

Figure 7.9. Distribution of all values (Gaussian) and distribution of the maximum values of the stationary ergodic normal response process y(t)
If one uses the spectral moments $\lambda_i=\int_{-\infty}^{+\infty}\omega^i S_y(\omega)\,d\omega$, the variance of the response is equal to the spectral moment of order zero, λ0 = σy², and the variance of the first derivative of the response is equal to the spectral moment of second order, λ2 = σẏ², and relation (7.27) becomes:

$$\nu^{+}_{(b)}=\frac{1}{2\pi}\sqrt{\frac{\lambda_2}{\lambda_0}}\,\exp\!\left(-\frac{b^{2}}{2\sigma_{y}^{2}}\right)\qquad(7.28)$$

The expected frequency, in crossings per unit time, of zero crossings with positive slope (the average number of zero crossings with positive slope in unit time) is obtained setting b = 0 in relation (7.28):

$$\nu_0^{+}=\frac{1}{2\pi}\sqrt{\frac{\lambda_2}{\lambda_0}}\qquad(7.29)$$

Finally, combining relations (7.28) and (7.29), one gets the expected number of crossings of the level b with positive slope:

$$\nu^{+}_{(b)}=\nu_0^{+}\,\exp\!\left(-\frac{b^{2}}{2\sigma_{y}^{2}}\right)\qquad(7.30)$$
For getting the distribution of the peak values y_max of the response process one uses Davenport's approach. The maximum (peak) response is normalized as:

$$\eta=\frac{y_{max}}{\sigma_y}\qquad(7.31)$$

The normalized response process as well as the distribution of the peak values of the response are represented in Figure 7.10.

Figure 7.10. Normalized response and distribution of peak values
Let us consider the issue of up-crossing the barrier y_max, Figure 7.11. Setting the barrier b = y_max in relation (7.30) one gets the expected number of crossings of the level y_max with positive slope:

$$\nu^{+}_{y_{max}}=\nu_0^{+}\,\exp\!\left(-\frac{y_{max}^{2}}{2\sigma_{y}^{2}}\right)\qquad(7.32).$$

Figure 7.11. Upcrossing of the level y_max (barrier b = y_max) by the response process

Combining relations (7.31) and (7.32) one obtains:

$$\nu^{+}_{y_{max}}=\nu_0^{+}\,\exp\!\left(-\frac{\eta^{2}\sigma_y^{2}}{2\sigma_{y}^{2}}\right)=\nu_0^{+}\,\exp\!\left(-\frac{\eta^{2}}{2}\right)\qquad(7.33).$$
If the barrier b = y_max is high enough, the upcrossings can be considered rare independent events that follow a Poisson type of probability distribution. If the expected number of upcrossings of the level y_max in time t is ν⁺_{ymax}t, the probability of having n upcrossings in that time interval is:

$$P\left[N=n\right]=\frac{\left(\nu^{+}_{y_{max}}t\right)^{n}\exp\!\left(-\nu^{+}_{y_{max}}t\right)}{n!}\qquad(7.34).$$

Setting n = 0 in relation (7.34) one gets the probability of having zero upcrossings in time t, thus the probability of non-exceedance of the level y_max in time t, that is, the cumulative distribution function of the random variable y_max:

$$F_{y_{max}}=\exp\!\left(-\nu^{+}_{y_{max}}t\right)\qquad(7.35).$$
Since the relation between y_max and η is linear, the cumulative distribution function of y_max is also valid for η. Moreover, combining relations (7.33) and (7.35) one gets the double exponential form of the cumulative distribution function of η:

$$F_{\eta}=\exp\!\left(-\nu_0^{+}t\,e^{-\eta^{2}/2}\right)\qquad(7.36)$$
The probability density function, PDF, is obtained by differentiating the CDF with respect to η:

$$f_{\eta}=\frac{dF_{\eta}}{d\eta}\qquad(7.37).$$

The probability density function and the cumulative distribution function of the normalized peak response are represented in Figure 7.12 and Figure 7.13, respectively.
The mean value of the distribution, m_η (mean of the peak values of the normalized response), is

$$m_{\eta}\cong\sqrt{2\ln\left(\nu_0^{+}t\right)}+\frac{0.577}{\sqrt{2\ln\left(\nu_0^{+}t\right)}}\qquad(7.38)$$

while the standard deviation of the peak values of the normalized response is

$$\sigma_{\eta}\cong\frac{\pi}{\sqrt{6}}\cdot\frac{1}{\sqrt{2\ln\left(\nu_0^{+}t\right)}}\qquad(7.39)$$
Recalling that the relation between y_max and η is linear and combining relations (7.31) and (7.38), one gets the mean of the peak values of the response:

$$m_{y_{max}}=m_{\eta}\,\sigma_y\cong\left[\sqrt{2\ln\left(\nu_0^{+}t\right)}+\frac{0.577}{\sqrt{2\ln\left(\nu_0^{+}t\right)}}\right]\sigma_y\qquad(7.40).$$
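Relations (7.38) and (7.39) can be wrapped in a small helper (Python sketch; the ν0⁺ and t values below are illustrative assumptions):

```python
import numpy as np

def davenport_peak_stats(nu0, t):
    """Mean and standard deviation of the normalized peak, relations (7.38)-(7.39)."""
    s = np.sqrt(2.0 * np.log(nu0 * t))
    m_eta = s + 0.577 / s
    sigma_eta = np.pi / (np.sqrt(6.0) * s)
    return m_eta, sigma_eta

# Example: nu0 * t = 1000 (e.g. nu0 = 2 crossings/s over t = 500 s, illustrative)
m_eta, sigma_eta = davenport_peak_stats(2.0, 500.0)
print(round(m_eta, 2), round(sigma_eta, 2))  # -> 3.87 0.35
```

The mean peak factor grows only with the logarithm of ν0⁺t, which is why peak factors of common loading problems cluster in the narrow range of roughly 3 to 5.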
Figure 7.12. Probability density function of the normalized peak response for various ν0⁺t values (ν0⁺t = 100, 1000, 10000)
Figure 7.13. Cumulative distribution function of the normalized peak response for various ν0⁺t values (ν0⁺t = 100, 1000, 10000)

Finally, one has to get the peak response with p non-exceedance probability in the time interval t, Figure 7.14, that is given by the following relation:

$$y_{max}(p,t)=\eta_{p,t}\,\sigma_y\qquad(7.41).$$

Figure 7.14. Distribution of peak normalized values of the response (exceedance probability: 1 - p)

Returning to relation (7.36), the CDF gives the non-exceedance probability p of η_{p,t}:

$$F_{\eta}\left(\eta_{p,t}\right)=p=\exp\!\left(-\nu_0^{+}t\,e^{-\eta_{p,t}^{2}/2}\right)\qquad(7.42).$$
It follows from relation (7.42) that the peak normalized response with p non-exceedance probability in the time interval t is:

$$\ln p=-\nu_0^{+}t\,e^{-\eta_{p,t}^{2}/2}$$

$$\ln\left(-\ln p\right)=\ln\left(\nu_0^{+}t\right)-\frac{\eta_{p,t}^{2}}{2}$$

$$\eta_{p,t}^{2}=2\ln\left(\nu_0^{+}t\right)-2\ln\left(-\ln p\right)$$

$$\eta_{p,t}=\sqrt{2\ln\left(\nu_0^{+}t\right)-2\ln\left(-\ln p\right)}\qquad(7.43).$$
The peak normalized response with p non-exceedance probability in the time interval t, η_{p,t}, is represented in Figure 7.15.

Figure 7.15. Peak normalized response with p non-exceedance probability in the time interval t (curves for p = 0.5 and p = 0.99, plotted versus ν0⁺T from 1 to 10⁷)

The peak response with p non-exceedance probability in the time interval t, y_max(p,t), is:

$$y_{max}(p,t)=\eta_{p,t}\,\sigma_y=\sqrt{2\ln\left(\nu_0^{+}t\right)-2\ln\left(-\ln p\right)}\;\sigma_y\qquad(7.44).$$

In the case of the seismic response of SDOF systems one has to consider the issue of out-crossing the barrier |y_max|, Figure 7.16; since it is a double-barrier problem, one has to replace in the previous relations (7.33, 7.36, 7.38-7.40, 7.42-7.44) ν0⁺t with 2ν0⁺t.

Figure 7.16. Outcrossing of the level |y_max| (barriers b = y_max and b = -y_max) by the response process
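Relations (7.43) and (7.44), with the double-barrier substitution ν0⁺t → 2ν0⁺t, can be sketched as follows (the ν0⁺, t and p values are illustrative assumptions):

```python
import numpy as np

def eta_p(nu0, t, p, double_barrier=True):
    """Normalized peak with non-exceedance probability p, relation (7.43);
    for the seismic (double-barrier) problem nu0*t is replaced by 2*nu0*t."""
    n = 2.0 * nu0 * t if double_barrier else nu0 * t
    return np.sqrt(2.0 * np.log(n) - 2.0 * np.log(-np.log(p)))

# Median (p = 0.5) and p = 0.99 peak factors for nu0*t = 1000 (illustrative)
print(round(eta_p(1.0, 1000.0, 0.5), 2))
print(round(eta_p(1.0, 1000.0, 0.99), 2))
```

Multiplying η_{p,t} by σy (relation 7.44) then gives the peak displacement demand with the chosen confidence level.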
8. STOCHASTIC MODELLING OF WIND ACTION

The stochastic modelling of wind action on buildings and structures is presented according to the Romanian technical regulation NP 082-04 "Cod de proiectare. Bazele proiectarii si actiuni asupra constructiilor. Actiunea vantului" (Design Code. Basis of Design and Loads on Buildings and Structures. Wind Loads), which is in line with and compatible to EUROCODE 1: Actions on structures, Part 1-4: General actions. Wind actions.

8.1. General
Wind effects on buildings and structures depend on the exposure of buildings, structures and
their elements to the natural wind, the dynamic properties, the shape and dimensions of the
building (structure).
The random field of natural wind velocity is decomposed into a mean wind in the direction of
air flow (x-direction) averaged over a specified time interval and a fluctuating and turbulent
part with zero mean and components in the longitudinal (x-) direction, the transversal (y-)
direction and the vertical (z-) direction.
The sequence of maximum annual mean wind velocities can be assumed to be a Gumbel
distributed sequence with possibly direction dependent parameters. The turbulent velocity
fluctuations can be modelled by a zero mean stationary and ergodic Gaussian process.

8.2 Reference wind velocity and reference velocity pressure
The reference wind velocity U_ref^10min is the wind velocity at 10 m above ground, in open and horizontal terrain exposure (category II), averaged over a 10 min time interval (EUROCODE 1 and NP 082-04). For averaging intervals other than 10 min, in open terrain exposure, the following relationships may be used:

$$1.05\,U_{ref}^{1h}=U_{ref}^{10min}=0.84\,U_{ref}^{1min}=0.67\,U_{ref}^{3s}\qquad(8.1)$$

The distribution of the maximum annual mean wind velocity fits a Gumbel distribution for maxima:

$$F_U(x)=\exp\left\{-\exp\left[-\alpha_1\left(x-u_1\right)\right]\right\}\qquad(8.2)$$
The mode u1 and the parameter α1 of the distribution are determined from the mean m1 and the standard deviation σ1 of the set of maximum annual mean wind velocities:

$$u_1=m_1-\frac{0.577}{\alpha_1},\qquad\alpha_1=\frac{1.282}{\sigma_1}.\qquad(8.3)$$
The coefficient of variation of the maximum annual wind velocity, V1 = σ1/m1, depends on the climate and is normally between 0.10 and 0.40; the mean of the maximum annual wind velocities is usually between 10 m/s and 50 m/s.
The lifetime (N years) maxima of wind velocity are also Gumbel distributed. The mean and the standard deviation of the lifetime maxima are functions of the mean and of the standard deviation of the annual maxima:

$$F_U^N(x)=\exp\left\{-\exp\left[-\alpha_N\left(x-u_N\right)\right]\right\}\qquad(8.4)$$

$$m_N=m_1+\frac{\ln N}{1.282}\,\sigma_1,\qquad\sigma_N=\sigma_1.\qquad(8.5)$$
The reference wind velocity having the probability of non-exceedance during one year p = 0.98 is the so-called characteristic velocity, U_0.98. The mean recurrence interval (MRI) of the characteristic velocity is T = 50 yr. For any probability of non-exceedance p, the fractile U_p of the Gumbel distributed random variable can be computed as follows:

$$U_p=m_1\left[1-V_1\left(0.45+\frac{1}{1.282}\ln\left(-\ln p\right)\right)\right]\qquad(8.6)$$

The characteristic velocity can be determined as follows:

$$U_{MRI=50\,yr}=U_{0.98}=m_1+2.593\,\sigma_1\qquad(8.7)$$
Generally, it is not possible to infer the maxima over dozens of years from observations covering only a few years. For reliable results, the number of years of available records must be of the same order of magnitude as the required mean recurrence interval.
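Relations (8.6) and (8.7) can be sketched as follows (the climate parameters m1 and V1 are illustrative assumptions, not values from the code):

```python
import numpy as np

def gumbel_fractile(m1, sigma1, p):
    """Fractile of the annual-maximum Gumbel distribution, relation (8.6)."""
    return m1 - sigma1 * (0.45 + np.log(-np.log(p)) / 1.282)

# Illustrative climate: m1 = 25 m/s, V1 = 0.23 -> sigma1 = 5.75 m/s
m1, sigma1 = 25.0, 5.75
u_char = gumbel_fractile(m1, sigma1, 0.98)  # characteristic velocity, MRI = 50 yr
print(round(u_char, 1))
print(round((u_char - m1) / sigma1, 3))     # ~2.593, cf. relation (8.7)
```

The 0.98 fractile reproduces the m1 + 2.593σ1 form of relation (8.7), confirming the consistency of the two expressions.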
The reference wind velocity pressure can be determined from the reference wind velocity (standard air density ρ = 1.25 kg/m³) as follows:

$$Q_{ref}\,[\mathrm{Pa}]=\frac{1}{2}\,\rho\,U_{ref}^2=0.625\,U_{ref}^2\qquad\left(U_{ref}\ \mathrm{in\ m/s}\right)\qquad(8.8)$$

The conversion of the velocity pressure averaged over 10 min into the velocity pressure averaged over another time interval can be computed from relation (8.1):

$$1.1\,Q_{ref}^{1h}=Q_{ref}^{10min}=0.7\,Q_{ref}^{1min}=0.44\,Q_{ref}^{3s}.\qquad(8.9)$$
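The averaging-interval conversions (8.1), (8.8) and (8.9) can be sketched as follows (the 30 m/s input is an assumed illustrative value):

```python
# Conversions between averaging intervals, relations (8.1) and (8.9).
def u_ref_convert(u_10min):
    """From the 10 min reference velocity to other averaging intervals, relation (8.1)."""
    return {"1h": u_10min / 1.05, "10min": u_10min,
            "1min": u_10min / 0.84, "3s": u_10min / 0.67}

u = u_ref_convert(30.0)  # hypothetical 30 m/s 10-min reference velocity
q = {k: 0.625 * v**2 / 1000.0 for k, v in u.items()}  # relation (8.8), in kPa
print(u["3s"], q["10min"])
```

Note that the pressure ratios of relation (8.9) are simply the squares of the velocity ratios of relation (8.1), rounded.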

8.3 Probabilistic assessment of wind hazard for buildings and structures
The existing meteorological database comprises the maximum annual wind velocities (at 10 m, averaged over 1 min) at more than 120 National Institute of Meteorology and Hydrology (INMH) locations in Romania at which wind measurements are made. The period of records for those locations is between 20 and 60 years, up through 2005.
The extreme value distribution used for the estimation of the characteristic wind velocity is the Gumbel distribution for maxima. The distribution is recommended by the American Standard for Minimum Design Loads for Buildings, ASCE 7, and fits very well the available Romanian data.
The coefficient of variation of the maximum annual wind velocity is in the range V_U = 0.15-0.3, with mean 0.23 and standard deviation 0.06. The coefficient of variation of the velocity pressure (square of the velocity) is approximately double the coefficient of variation of the velocity:

$$V_Q\cong 2\,V_U.\qquad(8.10)$$
The Romanian climate is milder in the intra-Carpathian area, in Transylvania, than in Moldavia and central Walachia. The North-eastern part of Moldova has the highest reference wind velocity in Romania. Locally, in the mountainous areas of the South-western part of Romania, there are very high velocities, too.
The reference wind velocity pressure Q_ref (in kPa), with mean recurrence interval MRI = 50 yr, for wind velocity averaged over 10 min at 10 m above ground in open terrain, is presented in Figure 8.1.

Figure 8.1. The reference wind velocity pressure [kPa] with mean recurrence interval MRI = 50 yr; wind velocity averaged over 10 min at 10 m above ground in open terrain. The mapped values are valid for sites situated below 1000 m altitude.
8.4 Terrain roughness and variation of the mean wind with height
The roughness of the ground surface is aerodynamically described by the roughness length z0 (in meters), which is a measure of the size of the eddies close to the ground level, Table 8.1.
Alternatively, the terrain roughness can be described by the surface drag coefficient κ corresponding to the roughness length z0:

$$\kappa=\left[\frac{k}{\ln\left(\dfrac{z_{ref}}{z_0}\right)}\right]^2=\left[\frac{1}{2.5\ln\left(\dfrac{10}{z_0}\right)}\right]^2\qquad(8.11)$$

where k ≅ 0.4 is von Karman's constant and z_ref = 10 m.
Various terrain categories are classified in Table 8.1 according to their approximate roughness lengths. The distribution of the surface roughness with wind direction must be considered.

Table 8.1. Roughness length z_0, in meters, for various terrain categories 1) 2)

Terrain category | Terrain description | z_0 [m] | z_min [m]
0   | Sea or coastal area exposed to the open sea | 0.003 | 1
I   | Lakes or flat and horizontal area with negligible vegetation and without obstacles | 0.01 | 1
II  | Area with low vegetation such as grass and isolated obstacles (trees, buildings) with separations of at least 20 obstacle heights | 0.05 | 2
III | Area with regular cover of vegetation or buildings, or with isolated obstacles with separations of maximum 20 obstacle heights (such as villages, suburban terrain, permanent forest) | 0.3 | 5
IV  | Area in which at least 15% of the surface is covered with buildings and their average height exceeds 15 m | 1.0 | 10

1) Smaller values of z_0 produce higher mean wind velocities.
2) For the full development of the roughness category, terrain of types 0 to III must prevail in the
upwind direction for a distance of at least 500 m to 1000 m, respectively. For category IV this
distance is more than 1 km.

The variation of the mean wind velocity with height over horizontal terrain of homogeneous
roughness can be described by the logarithmic law:

U(z) = (1/k) · u_*(z_0) · ln(z / z_0)    for z > z_min >> z_0    (8.12)
where:
U(z) - the mean velocity of the wind at height z above ground, m/s
z_0 - the roughness length, m
k - von Karman's constant (k ≈ 0.4)
z_min - the lowest height of validity of relation (8.12)
u_*(z_0) - friction (shear) velocity, defined as a function of the surface frictional shear
stress and expressed as:

u_*(z_0) = U(z) / [2.5 · ln(z / z_0)]    (8.13)
The logarithmic profile is valid for moderate and strong winds (mean velocity > 10 m/s) in
neutral atmosphere (where the vertical thermal convection of the air may be neglected).
Although relation (8.12) holds true throughout the whole atmospheric boundary layer, its use is
recommended only in the lowest 200 m, or 0.1δ, where δ is the depth of the boundary layer.
Relation (8.12) is more precisely valid as:

U(z) = (1/k) · u_*(z_0) · ln[(z − d_0) / z_0] ;  z ≥ d_0    (8.14)

where d_0 is close to the average height of the dominant roughness elements. In engineering
practice, relation (8.14) is very conservatively used with d_0 = 0.
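Relations (8.12) and (8.13) combine into a short calculation: from a mean velocity measured at the reference height, estimate the friction velocity and then the whole profile. A sketch, assuming an illustrative input of U(10 m) = 30 m/s over open terrain (z_0 = 0.05 m); the numbers are examples, not design values:

```python
import math

def friction_velocity(U, z, z0):
    """u_*(z0) from relation (8.13): U(z) / [2.5 ln(z/z0)]."""
    return U / (2.5 * math.log(z / z0))

def mean_velocity(z, u_star, z0, k=0.4):
    """Logarithmic profile, relation (8.12); valid for z > z_min >> z0."""
    return (1.0 / k) * u_star * math.log(z / z0)

u_star = friction_velocity(30.0, 10.0, 0.05)   # assumed U(10 m) = 30 m/s
for z in (10.0, 50.0, 100.0, 200.0):
    print(f"z = {z:5.0f} m: U = {mean_velocity(z, u_star, 0.05):5.1f} m/s")
```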
With respect to the reference height, z_ref = 10 m, and the reference (open terrain) exposure, z_0,ref, the
relation between wind velocities in two different roughness categories at two different heights
can be approximated as:

U(z) / U_ref(z_ref) = [(1/k) · u_*(z_0) · ln(z / z_0)] / [(1/k) · u_*(z_0,ref) · ln(z_ref / z_0,ref)] = (z_0 / z_0,ref)^0.07 · ln(z / z_0) / ln(z_ref / z_0,ref)    (8.15)
The mean wind velocity pressure at height z is defined by:

Q(z) = (1/2) · ρ · U²(z)    (8.16)

where ρ is the air density (ρ = 1.25 kg/m³ for standard air).
The roughness factor describes the variation of the mean velocity pressure with height above
ground and terrain roughness as function of the reference velocity pressure. From relations
(8.15) and (8.16) one gets:
c_r(z) = Q(z) / Q_ref = U²(z) / U²_ref = (z_0 / z_0,ref)^0.14 · [ln(z / z_0) / ln(z_ref / z_0,ref)]² = k_r²(z_0) · ln²(z / z_0)    for z > z_min    (8.17a)

c_r(z) = c_r(z_min)    for z ≤ z_min    (8.17b)
The roughness factor is represented in Figure 8.2.
Figure 8.2. Roughness factor, c_r(z) = k_r²(z_0) · ln²(z / z_0), for terrain categories 0 to IV (height above ground up to 200 m)

8.5 Stochastic modelling of wind turbulence
The stochastic model for along-wind turbulence is represented in Figure 8.3. The wind
turbulence has a spatial character. In the neutral atmospheric surface layer, for z << δ, the root
mean square values of the three-dimensional velocity fluctuations in the airflow deviating from
the longitudinal mean velocity can be assumed independent of the height above ground (u_* -
friction velocity):

σ_u² = β_u · u_*²    Longitudinal    (8.18)

σ_v² = β_v · u_*²    Transversal    (8.19)

σ_w² = β_w · u_*²    Vertical    (8.20)
For z < 0.1δ, the ratios σ_v/σ_u and σ_w/σ_u near the ground are constant irrespective of the
roughness of the terrain (ESDU, 1993):

σ_v / σ_u ≅ 0.78    (8.21)

σ_w / σ_u ≅ 0.55    (8.22)











Figure 8.3. Stochastic process of the wind velocity at height z above ground: U(z,t) = U(z) + u(z,t), where U(z) is the mean wind velocity (averaged over a 10 min. interval) and u(z,t) are the gusts, i.e. the velocity fluctuations from the mean

The variance of the longitudinal velocity fluctuations can be expressed, from non-linear
regression of measurement data, as a function of the terrain roughness (Solari, 1987):

4.5 ≤ β_u = 4.5 − 0.856 · ln z_0 ≤ 7.5    (8.23)
The intensity of longitudinal turbulence is the ratio of the root mean square value of the
longitudinal velocity fluctuations to the mean wind velocity at height z (i.e. the coefficient of
variation of the velocity fluctuations at height z):

I_u(z) = {mean[u²(z,t)]}^(1/2) / U(z) = σ_u / U(z)    (8.24)
The longitudinal turbulence intensity at height z can be written in the form:

I_u(z) = σ_u / U(z) = (√β_u · u_*) / [2.5 · u_* · ln(z / z_0)] = √β_u / [2.5 · ln(z / z_0)]    for z > z_min    (8.25a)

I_u(z) = I_u(z_min)    for z ≤ z_min    (8.25b)
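Relations (8.23) and (8.25a)-(8.25b) reduce to a few lines of code; a sketch (names ours; the clipping of β_u to the interval [4.5, 7.5] follows relation (8.23)):

```python
import math

def beta_u(z0):
    """Relation (8.23), Solari (1987): 4.5 <= beta_u = 4.5 - 0.856 ln(z0) <= 7.5."""
    return min(max(4.5 - 0.856 * math.log(z0), 4.5), 7.5)

def I_u(z, z0, z_min):
    """Longitudinal turbulence intensity, relations (8.25a)-(8.25b)."""
    z = max(z, z_min)                 # constant below z_min, relation (8.25b)
    return math.sqrt(beta_u(z0)) / (2.5 * math.log(z / z0))

print(f"I_u(10 m, open terrain) = {I_u(10.0, 0.05, 2.0):.3f}")   # about 0.20
```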

Representative values of the intensity of turbulence are shown in Figure 8.4.
The transversal and the vertical intensities of turbulence can be determined by multiplying
the longitudinal intensity I_u(z) by the ratios σ_v/σ_u and σ_w/σ_u.

Figure 8.4. Intensity of longitudinal turbulence, I_u(z) = √β_u / [2.5 · ln(z / z_0)], for terrain categories 0 to IV (height above ground up to 200 m)

8.6 Gust factor for velocity pressure
The gust factor for velocity pressure is the ratio of the peak velocity pressure to the mean
wind velocity pressure:

c_g(z) = q_peak(z) / Q(z) = [Q(z) + g · σ_q(z)] / Q(z) = 1 + g · V_Q = 1 + g · [2 · I_u(z)]    (8.26)

where:
Q(z) - the mean velocity pressure of the wind
σ_q = {mean[q²(z,t)]}^(1/2) - root mean square value of the longitudinal velocity pressure
fluctuations from the mean
V_Q - coefficient of variation of the velocity pressure fluctuations (approximately equal
to double the coefficient of variation of the velocity fluctuations):

V_Q ≅ 2 · I_u(z)    (8.27)

g - the peak factor for velocity pressure (equal to 3.5 in EN 1991-1-4).
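Relations (8.26)-(8.27) reduce to a one-line function; a sketch, with the EN 1991-1-4 peak factor g = 3.5 as the default:

```python
def c_g(I_u, g=3.5):
    """Gust factor for velocity pressure, c_g = 1 + g * (2 * I_u), relations (8.26)-(8.27)."""
    return 1.0 + g * 2.0 * I_u

print(f"c_g = {c_g(0.20):.2f}")   # I_u = 0.20 (about 10 m over open terrain) gives 2.40
```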
The gust factor for velocity pressure is represented in Figure 8.5.

Structural Reliability and Risk Analysis Lecture Notes
97
Figure 8.5. Gust factor for velocity pressure, c_g(z) = 1 + g · [2 · I_u(z)], for terrain categories 0 to IV (height above ground up to 200 m)

8.7 Exposure factor for peak velocity pressure
The peak velocity pressure at the height z above ground is the product of the gust factor, the
roughness factor and the reference velocity pressure:

Q_g(z) = c_g(z) · c_r(z) · Q_ref    (8.28)

The exposure factor is defined as the product of the gust and roughness factors:

c_e(z) = c_g(z) · c_r(z)    (8.29)
The exposure factor is plotted in Figure 8.6.
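Pulling the pieces together, relations (8.17), (8.23), (8.25), (8.26) and (8.28)-(8.29) give the peak pressure from the reference pressure. A self-contained sketch (reference exposure z_0,ref = 0.05 m; the site value Q_ref = 0.5 kPa is an assumed illustration, not a mapped value from Figure 8.1):

```python
import math

def c_r(z, z0, z_min, z_ref=10.0, z0_ref=0.05):
    """Roughness factor for velocity pressure, relations (8.17a)-(8.17b)."""
    z = max(z, z_min)
    return (z0 / z0_ref) ** 0.14 / math.log(z_ref / z0_ref) ** 2 * math.log(z / z0) ** 2

def I_u(z, z0, z_min):
    """Longitudinal turbulence intensity, relations (8.25a)-(8.25b)."""
    z = max(z, z_min)
    beta_u = min(max(4.5 - 0.856 * math.log(z0), 4.5), 7.5)   # relation (8.23)
    return math.sqrt(beta_u) / (2.5 * math.log(z / z0))

def c_e(z, z0, z_min, g=3.5):
    """Exposure factor, relation (8.29): c_e = c_g * c_r."""
    return (1.0 + g * 2.0 * I_u(z, z0, z_min)) * c_r(z, z0, z_min)

Q_ref = 0.5                        # kPa, assumed site reference pressure
z, z0, z_min = 40.0, 0.3, 5.0      # 40 m above terrain category III
print(f"c_e = {c_e(z, z0, z_min):.2f}; Q_g = {c_e(z, z0, z_min) * Q_ref:.2f} kPa")   # relation (8.28)
```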


Figure 8.6. Exposure factor, c_e(z) = c_g(z) · c_r(z), for terrain categories 0 to IV (height above ground up to 200 m)




References

- Aldea, A., Arion, C., Ciutina, A., Cornea, T., Dinu, F., Fulop, L., Grecea, D., Stratan,
A., Vacareanu, R., Constructii amplasate in zone cu miscari seismice puternice,
coordonatori Dubina, D., Lungu, D., Ed. Orizonturi Universitare, Timisoara, 2003,
ISBN 973-8391-90-3, 479 p.
- Benjamin, J.R. & Cornell, C.A., Probability, Statistics and Decisions for Civil Engineers,
John Wiley, New York, 1970
- Cornell, C.A., A Probability-Based Structural Code, ACI Journal, Vol. 66, pp. 974-985,
1969
- Ditlevsen, O. & Madsen, H.O., Structural Reliability Methods. Monograph (first
edition published by John Wiley & Sons Ltd, Chichester, 1996, ISBN 0 471 96086 1),
Internet edition 2.2.5, http://www.mek.dtu.dk/staff/od/books.htm, 2005
- EN 1991-1-4, Eurocode 1: Actions on Structures, Part 1-4: General Actions - Wind
Actions, CEN, 2005
- FEMA 356, Prestandard and Commentary for the Seismic Rehabilitation of Buildings,
FEMA & ASCE, 2000
- Ferry Borges, J. & Castanheta, M., Siguranta structurilor (traducere din limba
engleza), Editura Tehnica, 1974
- FEMA, HAZUS Technical Manual 1999. Earthquake Loss Estimation Methodology,
3 Vol.
- Hahn, G.J. & Shapiro, S.S., Statistical Models in Engineering, John Wiley & Sons,
1967
- Kreyszig, E., Advanced Engineering Mathematics, fourth edition, John Wiley &
Sons, 1979
- Kramer, S.L., Geotechnical Earthquake Engineering, Prentice Hall, 1996
- Lungu, D. & Ghiocel, D., Metode probabilistice in calculul constructiilor, Editura
Tehnica, 1982
- Lungu, D., Vacareanu, R., Aldea, A., Arion, C., Advanced Structural Analysis,
Conspress, 2000
- Madsen, H.O., Krenk, S., Lind, N.C., Methods of Structural Safety, Prentice-Hall,
1986
- Melchers, R.E., Structural Reliability Analysis and Prediction, John Wiley & Sons,
2nd Edition, 1999
- MTCT, CR0-2005 Cod de proiectare. Bazele proiectarii structurilor in constructii,
2005

- MTCT, NP 082-04 Cod de proiectare. Bazele proiectarii si actiuni asupra
constructiilor. Actiunea vantului, 2005
- Papoulis, A., Probability, Random Variables, and Stochastic Processes, McGraw-Hill,
New York, 1991
- Rosenblueth, E., Esteva, L., Reliability Bases for some Mexican Codes, Probabilistic
Design of Reinforced Concrete Buildings, ACI Publication SP-31, pp. 1-41, 1972
