
WHY YULE-WALKER SHOULD NOT BE USED FOR AUTOREGRESSIVE MODELLING

M.J.L. DE HOON, T.H.J.J. VAN DER HAGEN, H. SCHOONEWELLE, AND H. VAN DAM
Interfaculty Reactor Institute, Delft University of Technology
Mekelweg 15, 2629 JB Delft, The Netherlands

Abstract - Autoregressive modelling of noise data is widely used for system identification, surveillance, malfunction detection and diagnosis. Several methods are available to estimate an autoregressive model. Usually, the so-called Yule-Walker method is employed. The various estimation methods generally yield comparable parameter estimates. In some special cases, however, involving nearly periodic signals, the Yule-Walker approach may lead to incorrect parameter estimates. Burg's method offers the best alternative to Yule-Walker. In this paper a theoretical explanation of this phenomenon is given, and the 1994 IAEA Benchmark test is presented as a practical example of Yule-Walker yielding poor parameter estimates.

I. INTRODUCTION
Autoregressive modelling of noise data was introduced in nuclear engineering in the mid-seventies and gained popularity during the decades thereafter. A historical survey of its gradual acceptance and the diversity of its applications can be found in the so-called SMORN proceedings (SMORN-III, SMORN-IV, SMORN-V). Nowadays, autoregressive modelling is a widely used means of performing system identification, surveillance, malfunction detection and diagnosis. Its attractiveness stems, among other things, from the fact that the numerical algorithms involved are rather simple.
An autoregressive model depends on a limited number of parameters, which are estimated from measured noise data. Several methods exist to estimate the autoregressive parameters, such as least squares, Yule-Walker and Burg's method. It can be shown that for large data samples these estimation techniques should lead to approximately the same parameter estimates. Mainly for historical reasons, most people use either the Yule-Walker or the least-squares method. This paper will show, however, that in some special cases the Yule-Walker estimation method leads to poor parameter estimates, even for moderately sized data samples. Least squares should not be used either, as it may lead to an unstable model. Burg's method is preferable.
In section II, we present an overview of the basics of autoregressive modelling. The mathematical circumstances causing poor parameter estimates in case of the Yule-Walker technique are described in section III, together with simulations of autoregressive processes that support our hypotheses. Finally, in section IV, we illustrate our findings with the application of autoregressive modelling to anomaly detection in the 1994 IAEA Benchmark noise data (Journeau, 1994).

II. THEORY OF AUTOREGRESSIVE MODELLING


The successive samples $y_t$ of an autoregressive process linearly depend on their predecessors:

$$y_t + a_1 y_{t-1} + a_2 y_{t-2} + \cdots + a_p y_{t-p} = \eta_t, \qquad (1)$$

in which the $a_i$ are the autoregressive parameters and the innovations $\eta_t$ form a stationary, purely random process with zero mean. It can be shown that the autocovariance function $R_\tau$ for delays 0 to p is related to the autoregressive parameters $a_i$ through the Yule-Walker equation for the autoregressive process (Priestley, 1994):
$$\begin{pmatrix} R_0 & R_1 & \cdots & R_{p-1} \\ R_1 & R_0 & \cdots & R_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ R_{p-1} & R_{p-2} & \cdots & R_0 \end{pmatrix} \begin{pmatrix} a_1 \\ a_2 \\ \vdots \\ a_p \end{pmatrix} = - \begin{pmatrix} R_1 \\ R_2 \\ \vdots \\ R_p \end{pmatrix}. \qquad (2)$$

An estimated autoregressive model of the same order p can be written as

$$y_t + \hat{a}_1 y_{t-1} + \hat{a}_2 y_{t-2} + \cdots + \hat{a}_p y_{t-p} = \hat{\eta}_t, \qquad (3)$$

in which the $\hat{a}_i$ are the autoregressive-parameter estimates and the $\hat{\eta}_t$ are the estimated innovations. A clear distinction should be made between the autoregressive process (Eq. (1)) and the corresponding autoregressive model (Eq. (3)) (Broersen and Wensink, 1993). Using Eq. (3), each data sample can be predicted from its predecessors:
$$\hat{y}_t = -\sum_{i=1}^{p} \hat{a}_i \, y_{t-i}. \qquad (4)$$

As the samples $y_t$ cannot be predicted exactly, a residue is introduced, which is defined as the difference between the measured value and the estimated value:

$$\text{residue} \equiv y_t - \hat{y}_t = \hat{\eta}_t, \qquad (5)$$

which means that the residue is equal to the estimated innovation, as introduced in Eq. (3).
It is assumed in these equations that the autoregressive model order p is known. In practice, the model order has to be estimated as well, which is usually done using Akaike's criterion (Priestley, 1994).
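For illustration, the standard form of Akaike's criterion for autoregressive order selection can be evaluated as follows; this minimal Python sketch assumes the residual variance has already been estimated for each candidate order, and the helper name `aic` is ours, not from the paper.

```python
import numpy as np

def aic(residual_variance, N, p):
    """Akaike's information criterion for an AR(p) model fitted to N samples;
    the candidate order with the smallest AIC is selected."""
    return N * np.log(residual_variance) + 2.0 * p
```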
Suppose that the estimation realisation y consists of N data points (an estimation realisation contains those data points that are used for parameter estimation). Three methods of autoregressive-parameter estimation from these data samples will be considered here: the least-squares approach (LS), the Yule-Walker approach (YW) and Burg's method (Burg).
LS: The total squared residue over the data samples p + 1 to N is minimised, leading to a system of linear equations:

$$\begin{pmatrix} c_{11} & c_{12} & \cdots & c_{1p} \\ c_{21} & c_{22} & \cdots & c_{2p} \\ \vdots & \vdots & \ddots & \vdots \\ c_{p1} & c_{p2} & \cdots & c_{pp} \end{pmatrix} \begin{pmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \vdots \\ \hat{a}_p \end{pmatrix} = - \begin{pmatrix} c_{01} \\ c_{02} \\ \vdots \\ c_{0p} \end{pmatrix}, \qquad (6)$$

in which the matrix elements

$$c_{ij} \equiv \frac{1}{N - p} \sum_{t=p+1}^{N} y_{t-i} \, y_{t-j} \qquad (7)$$

form an unbiased estimate of the autocovariance function for delay i - j.

YW: The first and last p data points are also included in the summation of Eq. (7), resulting in

$$\begin{pmatrix} \hat{R}_0 & \hat{R}_1 & \cdots & \hat{R}_{p-1} \\ \hat{R}_1 & \hat{R}_0 & \cdots & \hat{R}_{p-2} \\ \vdots & \vdots & \ddots & \vdots \\ \hat{R}_{p-1} & \hat{R}_{p-2} & \cdots & \hat{R}_0 \end{pmatrix} \begin{pmatrix} \hat{a}_1 \\ \hat{a}_2 \\ \vdots \\ \hat{a}_p \end{pmatrix} = - \begin{pmatrix} \hat{R}_1 \\ \hat{R}_2 \\ \vdots \\ \hat{R}_p \end{pmatrix}, \qquad (8)$$

in which the matrix elements $\hat{R}_\tau$ constitute a biased estimate of the autocovariance function (Parzen, 1961):

$$\hat{R}_\tau \equiv \frac{1}{N} \sum_{t=\tau+1}^{N} y_t \, y_{t-\tau}. \qquad (9)$$

The Levinson-Durbin algorithm provides a fast solution of a system of linear equations containing a Toeplitz matrix as in Eq. (8). Both Eqs. (6) and (8) are in fact approximations to the Yule-Walker equation of the underlying process (Eq. (2)).
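As a minimal sketch of the YW route just described (assuming zero-mean data; the function name `yule_walker_estimate` is our own, with scipy's `solve_toeplitz` standing in for the Levinson-Durbin solver):

```python
import numpy as np
from scipy.linalg import solve_toeplitz

def yule_walker_estimate(y, p):
    """AR(p) parameter estimates from the Yule-Walker equations (Eq. (8)),
    built on the biased autocovariance estimates of Eq. (9)."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    # Biased estimates R_0 ... R_p: divide by N, not by the number of terms.
    R = np.array([y[tau:] @ y[:N - tau] for tau in range(p + 1)]) / N
    # Solve the Toeplitz system with right-hand side -(R_1, ..., R_p)^T.
    return solve_toeplitz(R[:p], -R[1:p + 1])
```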
Burg: The parameter estimation approach that is nowadays regarded as the most appropriate is known as Burg's method. In contrast to the least-squares and Yule-Walker methods, which estimate the autoregressive parameters directly, Burg's method first estimates the reflection coefficients, which are defined as the last autoregressive-parameter estimate for each model order p. From these, the parameter estimates are determined using the Levinson-Durbin algorithm. The reflection coefficients constitute unbiased estimates of the partial correlation coefficients.
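A minimal sketch of Burg's recursion, assuming zero-mean data (`burg_estimate` is our own illustrative helper, not a library routine):

```python
import numpy as np

def burg_estimate(x, p):
    """AR(p) estimation with Burg's method: each reflection coefficient is
    chosen to minimise the summed forward and backward prediction error power,
    and the parameter estimates follow from the Levinson-Durbin recursion."""
    x = np.asarray(x, dtype=float)
    f = x[1:].copy()    # forward prediction errors
    b = x[:-1].copy()   # backward prediction errors
    a = np.zeros(0)
    for _ in range(p):
        # Reflection coefficient; |k| <= 1 by construction, so the estimated
        # model is guaranteed to be stable.
        k = -2.0 * (f @ b) / (f @ f + b @ b)
        # Levinson-Durbin update of the parameter estimates.
        a = np.concatenate((a + k * a[::-1], [k]))
        # Update the prediction errors and drop the boundary samples.
        f, b = f[1:] + k * b[1:], b[:-1] + k * f[:-1]
    return a
```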
Usually, these estimation methods lead to approximately the same results for the autoregressive
parameters. Once these have been estimated from the time series y, the autoregressive model can be
applied to an independent prediction realisation x of the same stochastic process. In terms of x, the
autoregressive process (Eq. (1)) can be written as

$$x_t + a_1 x_{t-1} + a_2 x_{t-2} + \cdots + a_p x_{t-p} = \epsilon_t, \qquad (10)$$

in which the innovation process $\epsilon_t$ is statistically identical to the innovation process $\eta_t$. The corresponding autoregressive model can be written as in Eq. (3):

$$x_t + \hat{a}_1 x_{t-1} + \hat{a}_2 x_{t-2} + \cdots + \hat{a}_p x_{t-p} = \hat{\epsilon}_t, \qquad (11)$$

in which the $\hat{a}_i$ are the autoregressive parameters estimated from realisation y and the $\hat{\epsilon}_t$ are the estimated innovations. As in Eq. (4), each data sample can be estimated from its predecessors:
$$\hat{x}_t = -\sum_{i=1}^{p} \hat{a}_i \, x_{t-i}. \qquad (12)$$

The difference between the measured value and the estimated value is now defined as the prediction error:

$$\text{prediction error} \equiv x_t - \hat{x}_t = \hat{\epsilon}_t. \qquad (13)$$

The prediction error is therefore equal to the estimated innovation, as introduced in Eq. (11). Each
prediction error can be calculated once the actual value of the data point is measured.
A clear distinction should be made between the residue and the prediction error and their variances (Broersen and Wensink, 1993). The residual variance $\text{var}(\hat{\eta}_t)$ is a measure of the fit of the autoregressive model to those data that have been used for the estimation of the autoregressive parameters, and can be estimated from the realisation y, which is used for the parameter estimation:
$$\hat{\text{var}}(\hat{\eta}_t) = \frac{1}{N - p} \sum_{t=p+1}^{N} \left( y_t - \hat{y}_t \right)^2. \qquad (14)$$

For the prediction of future data, the variance of the prediction error $\text{var}(\hat{\epsilon}_t)$ is essential, rather than the residual variance. If the independent prediction realisation x contains N' data samples, the prediction error variance can be estimated from the sample variance:

$$\hat{\text{var}}(\hat{\epsilon}_t) = \frac{1}{N' - p} \sum_{t=p+1}^{N'} \left( x_t - \hat{x}_t \right)^2. \qquad (15)$$

The LS parameter estimation is based on the minimisation of the residual variance. Such a minimisation, however, does not imply that the variance of the prediction error is minimised as well. As the minimisation of the prediction error variance is usually our goal, the LS estimation of the autoregressive parameters is not necessarily superior to YW or Burg's method.
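To make the distinction between Eqs. (14) and (15) concrete, a small sketch, assuming an estimated parameter vector `a_hat` and using scipy's `lfilter`; the helper names are ours:

```python
import numpy as np
from scipy.signal import lfilter

def innovation_estimates(a_hat, x):
    """Estimated innovations: filter the realisation with the polynomial
    (1, a_1, ..., a_p), i.e. e_t = x_t + a_1 x_{t-1} + ... + a_p x_{t-p}."""
    return lfilter(np.concatenate(([1.0], a_hat)), [1.0], x)

def error_variance(a_hat, x):
    """Sample variance of Eq. (14)/(15): average the squared errors over
    t = p+1 ... N, discarding the first p samples."""
    p = len(a_hat)
    e = innovation_estimates(a_hat, x)[p:]
    return e @ e / len(e)

# Applied to the estimation realisation y this gives the residual variance
# (Eq. (14)); applied to an independent prediction realisation x it gives
# the prediction error variance (Eq. (15)).
```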
III. STABILITY, POLE LOCATION AND PARAMETER ESTIMATION
For large data samples, the difference between the estimates obtained by the various methods will be
small (Priestley, 1994). In some special cases however, substantial differences may arise between these
approaches even for data samples of moderate size. In the present paper it will be shown that YW should
always be avoided. The behaviour of the autoregressive process as described by its pole locations is
essential in this respect.
Using the backward shift operator $z^{-1}$, which is defined as $z^{-1} x_t = x_{t-1}$, and defining $a_0 \equiv 1$, a realisation of an autoregressive process can be expressed in terms of the innovations sequence $\epsilon_t$ as

$$x_t = \left[ z^{-p} \sum_{i=0}^{p} a_i z^{p-i} \right]^{-1} \epsilon_t, \qquad (16)$$

ignoring the so-called complementary function (Priestley, 1994). The autoregressive operator $\left[ z^{-p} \sum_{i=0}^{p} a_i z^{p-i} \right]^{-1}$ obviously contains a p-fold zero at z = 0, as well as p poles determined by the characteristic equation of the autoregressive process

$$\sum_{i=0}^{p} a_i z^{p-i} = 0. \qquad (17)$$

The roots of the characteristic equation should lie inside the unit circle to ensure that the autoregressive process is stable. If the roots lie on the unit circle, the autoregressive process will only be stationary if $\epsilon_t$ is identically zero. In that case a harmonic process results, consisting of a sum of cosine functions.
One might wonder what will happen if the autoregressive process has poles in the neighbourhood of the unit circle. As poles on the unit circle represent a harmonic process, an autoregressive process with poles near the unit circle can be expected to demonstrate some kind of pseudo-periodic behaviour (Priestley, 1994). In this case the autocovariance function can be described as a sum of weakly damped periodic functions. Furthermore, as the noise terms $\epsilon_t$ are still present, the autoregressive process may exhibit a kind of almost non-stationary behaviour. Finally, the partial autocorrelation coefficients will be close to unity in absolute value. In the context of linear filtering theory, this means that the transfer function relating $x_t$ to $\epsilon_t$ will be close to instability in the filtering sense (Oppenheim, 1978).
The pole locations also affect the reliability of the various parameter estimation techniques. It is claimed by Priestley (1994) that YW may lead to poor parameter estimates, even for moderately large data samples, if the autoregressive operator has a pole near the unit circle. This is all the more remarkable since LS and YW only differ in their treatment of the first and last p data points. Since this limited number of data points is relatively small compared to the total number of data points used for parameter estimation, one would expect LS and YW to lead to almost the same results.
In this paper, instead of the pole locations, the poor condition of the autocovariance matrix in Eq. (2) is regarded as the cause of poor YW estimates. Side effects of a poor autocovariance matrix condition are an almost non-stationary, pseudo-periodic behaviour of the autoregressive process, poles located close to the unit circle, and partial autocorrelation coefficients close to unity. These features can be used to detect the possibility of poor YW estimates, but should not be regarded as its cause. It should be noted that autoregressive processes exist that have poles near the unit circle while the autocovariance matrix is well conditioned. In those cases YW will still provide correct results for the autoregressive parameters.
To introduce the concept of matrix conditioning, we consider a general system of linear equations:
$$Ax = B, \qquad (18)$$

in which A is a matrix of order p and B is a vector of dimension p. A well-known result from linear algebra states that Eq. (18) does not have a unique solution if the matrix A is singular:

$$\det(A) = 0. \qquad (19)$$

In cases in which the matrix A is almost singular, the solution of Eq. (18) will be extremely sensitive to perturbations in either the matrix A or the vector B. The sensitivity to these perturbations can be measured with the so-called condition number, which is defined as

$$\kappa(A) = \|A\| \, \|A^{-1}\|, \qquad (20)$$
where $\|\cdot\|$ is some matrix norm (Cybenko, 1980). In our case the 1-norm will be considered:

$$\|A\|_1 = \max_{j \in \{1, 2, \ldots, p\}} \sum_{i=1}^{p} \left| A_{ij} \right|, \qquad (21)$$

in which the $A_{ij}$ denote the matrix elements. The larger the condition number, the more sensitive the solution of Eq. (18) will be to perturbations. Roughly speaking, if $\kappa(A) = 10^q$, we may expect to lose about q significant digits in inverting an approximation to A. It should be noted that in order to calculate the autoregressive parameters from Eq. (2), an inversion of the matrix on the left-hand side is required. A detailed treatment of matrix computations, norms and condition numbers is given by Stewart (1973).
The poor coefficient estimates in case of YW can be explained in terms of the condition of the autocovariance matrix in Eq. (2) (Cybenko, 1980). If the autocovariance matrix is poorly conditioned, the solution of Eq. (2) will strongly depend on perturbations in the coefficients $R_\tau$. The bias in the YW autocovariance estimates $\hat{R}_\tau$ is one of these perturbations. Although this bias is usually too small to jeopardise the parameter estimation, in case of a poorly conditioned autocovariance matrix it will be magnified, as a result of which the YW parameter estimates will be useless.
In case of a first-order autoregressive process, the condition number reduces to unity for all pole locations. Therefore, YW will always yield a correct result for the parameter estimate in case of a first-order process. In case of second-order autoregressive processes, poor YW estimates may occur depending on the exact pole locations. Two second-order autoregressive processes are considered here, one having its poles on the positive real axis (I), while the second one (II) has its poles on the positive imaginary axis. Simulations were made using LS, YW and Burg for poles approaching the unit circle. The condition number of the autocovariance matrix can be calculated theoretically as a function of the distance to the unit circle, which is shown in Fig. 1. As the condition number in case of autoregressive process (I) increases dramatically, it is expected that YW will perform poorly if the poles are located near the unit circle. In case of autoregressive process (II), the condition number equals unity for all distances of the poles to the unit circle. Therefore, we expect YW to perform well in this case.

Fig. 1 Calculated condition number of the autocovariance matrix in case of autoregressive processes (I) and (II). [Figure not reproduced; condition number plotted against the distance of the poles to the unit circle, both axes logarithmic.]
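The curve for process (I) in Fig. 1 can be reproduced along the following lines. This sketch assumes process (I) has a double pole at z = 1 - d (an assumption made for illustration; the paper only states that the poles lie on the positive real axis) and obtains the theoretical autocovariances by solving the Yule-Walker relations:

```python
import numpy as np

def ar2_autocovariances(a1, a2, sigma2=1.0):
    """Theoretical R_0, R_1, R_2 of a stable AR(2) process, from the relations
    R_tau + a1*R_{tau-1} + a2*R_{tau-2} = sigma2 * delta_{tau,0}, tau = 0, 1, 2."""
    A = np.array([[1.0, a1,       a2 ],
                  [a1,  1.0 + a2, 0.0],
                  [a2,  a1,       1.0]])
    return np.linalg.solve(A, [sigma2, 0.0, 0.0])

for d in (1e-1, 1e-2, 1e-3):
    r = 1.0 - d                       # pole radius; double pole assumed at z = r
    a1, a2 = -2.0 * r, r * r          # from the characteristic polynomial (z - r)^2
    R0, R1, _ = ar2_autocovariances(a1, a2)
    M = np.array([[R0, R1], [R1, R0]])   # autocovariance matrix of Eq. (2), p = 2
    print(d, np.linalg.cond(M, 1))       # 1-norm condition number, Eqs. (20)-(21)
    # For process (II), poles at +/- i*r give a1 = 0, a2 = r^2, hence R1 = 0
    # and a condition number of unity for every d.
```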

Fig. 2 Simulation results for the residual and the prediction error variance in case of autoregressive process (I), having poles on the positive real axis, using the various estimation techniques. [Figure not reproduced; both panels plot variance against the distance of the poles to the unit circle, with the YW curve diverging from the LS and Burg curves.]
Each simulation consisted of 1024 data points, using a normally distributed, purely random innovation process with unit variance. To prevent the occurrence of close-to-non-stationary behaviour, the recorded simulations were preceded by 10240 dummy samples. Each simulation was carried out 25 times. In each simulation, the residual and the prediction error variance as well as the first and second autoregressive parameter were estimated, and these were thereupon averaged over the number of simulations. The residual variance was estimated from Eq. (14). The prediction error variance was estimated using a second series of 1024 data samples: the first series served as the estimation realisation, while the second series provided a prediction realisation from which the prediction error variance was estimated via Eq. (15).
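A simulation of this kind can be reproduced as follows; the sketch again assumes a double pole at z = 1 - d for process (I) and uses scipy's `lfilter` to run the AR recursion of Eq. (1). The specific variable names are ours.

```python
import numpy as np
from scipy.signal import lfilter

rng = np.random.default_rng(seed=1)
d = 0.01                           # distance of the poles to the unit circle
r = 1.0 - d
a = [-2.0 * r, r * r]              # process (I): double pole assumed at z = r

# Unit-variance, normally distributed innovations; the first 10240 samples
# serve as dummy (warm-up) data, the last 1024 are recorded.
eta = rng.standard_normal(10240 + 1024)
y = lfilter([1.0], np.concatenate(([1.0], a)), eta)[10240:]

# y can now be fed to the LS, YW and Burg estimators, and the residual and
# prediction error variances evaluated as in Eqs. (14) and (15).
```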
The simulation results for the residual variance as well as the prediction error variance in case of autoregressive process (I) are shown in Fig. 2. While LS and Burg still yield a residual variance close to the actual value (unity) as the poles approach the unit circle, YW is no longer able to describe the autoregressive process correctly. Even for poles located at 0.01 from the unit circle, the residual variance in case of YW is almost twenty times as large as in case of LS. Furthermore, the autoregressive-parameter estimates were found to be inaccurate in case of YW. The first and second autoregressive-parameter estimates and their actual values for autoregressive process (I) are plotted in Fig. 3. The Yule-Walker technique actually estimates a first-order autoregressive model, since the second autoregressive-parameter estimate approaches zero, while the first autoregressive parameter is estimated at its value in a first-order model.

Fig. 3 Simulation results for the first and second autoregressive-parameter estimate in case of autoregressive process (I), having poles on the positive real axis, using the various estimation techniques (Actual = the actual value of the autoregressive parameter). [Figure not reproduced; both panels plot the estimates against the distance of the poles to the unit circle.]
In case of autoregressive process (II), no such effects were found. All of the estimation techniques,
including YW, provided correct results for the residual and prediction error variance as well as for the
estimated parameters, due to the matrix condition remaining bounded. These results agree with our
expectations.
We can conclude that YW should not be used to estimate autoregressive parameters if the
autocovariance matrix is almost singular. LS should not be used either for reasons of stability of the
estimated model. The stability of an estimated autoregressive model can be verified by substituting the
estimated autoregressive parameters into Eq. (17). If there turns out to be a root lying outside the unit
circle, the estimated autoregressive model becomes invalid as the theory of autoregressive modelling is
applicable to stationary stochastic processes only. Fortunately, YW as well as Burg guarantees the
estimated model to be stable, in contrast to LS. It can furthermore be shown that slight deviations in the autoregressive-parameter estimates can result in large deviations in the estimated pole locations if the poles of an autoregressive process are located near the unit circle (Tretter, 1976). Therefore, any slight deviation in the parameter estimates can result in an unstable autoregressive model if the LS approach is employed. Burg is the only reliable autoregressive-parameter estimation technique, yielding accurate parameter estimates as well as an autoregressive model guaranteed to be stable.
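The stability check described above amounts to a root test on the estimated characteristic polynomial; a minimal sketch (the helper name is ours):

```python
import numpy as np

def is_stable(a_hat):
    """Stability of an estimated AR model (Eq. (17)): all roots of
    z^p + a_1 z^{p-1} + ... + a_p must lie strictly inside the unit circle."""
    roots = np.roots(np.concatenate(([1.0], a_hat)))
    return bool(np.all(np.abs(roots) < 1.0))
```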
We will now turn to an actual autoregressive analysis in which poor YW parameter estimation
occurred, having detrimental effects on its conclusions.
IV. AUTOREGRESSIVE ANALYSIS OF THE 1994 IAEA BENCHMARK
One of the applications of autoregressive modelling in nuclear reactor analysis is the detection of
anomalies during the reactor operation. The basic idea is to determine an autoregressive model of
measured signals in a nuclear reactor during normal operation. It is assumed that the autoregressive model
thus found will no longer be applicable in case of a disturbance of the reactor operation. This will lead to a
large prediction error variance during the anomaly, which can then be detected.
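In practice this amounts to tracking the prediction error variance over time; the following sketch uses a sliding window whose length is an arbitrary choice of ours, not a Benchmark setting.

```python
import numpy as np

def rolling_error_variance(e, window=256):
    """Prediction error variance in a sliding window; a sudden, sustained
    increase marks the onset of an anomaly."""
    e = np.asarray(e, dtype=float)
    return np.array([e[i:i + window] @ e[i:i + window] / window
                     for i in range(len(e) - window + 1)])
```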
In this section, we will discuss the autoregressive analysis of the 1994 IAEA Benchmark test data aimed
at the detection of anomalies. A detailed description of this Benchmark test is provided by Journeau
(1994). Noise measurements during normal reactor operation were available, as well as synthesised noise
data that contained an anomaly during the reactor operation. As the sampling interval is not relevant for
our discussion of the performance of YW, we will use a discrete time axis. Hoogenboom and Schoonewelle (1994a, 1994b) employed autoregressive analysis as described previously to determine the onset of the anomaly. The noise data during normal operation were used for determining the autoregressive model of the steady-state process, while the synthesised noise data were used for anomaly detection by spotting sudden increases in the prediction error variance. Hoogenboom and Schoonewelle (1994b) concluded that increases in the prediction error variance due to anomalies occurred only if the autoregressive parameters were estimated using LS. When YW is employed, there is hardly any increase in the prediction error variance during the anomaly.
These results are due to the nature of the noise data used for parameter estimation. Figure 4 shows, in
arbitrary units, a section of 300 consecutive data points of the noise signal during normal reactor operation to illustrate its almost periodic behaviour. It shows that a strong cyclical component is present, having a period of about 60 times the sampling interval, as well as periodic components at higher frequencies.

Fig. 4 Section of the noise signal during normal reactor operation containing 300 consecutive noise data points in arbitrary units. [Figure not reproduced.]

Fig. 5 Estimated spectrum of the noise data in arbitrary units as a function of the discrete frequency ordinates. [Figure not reproduced; the spectrum is plotted semi-logarithmically.]
The almost periodic behaviour of the background noise can also be demonstrated by estimating its
power spectral density, which is shown semi-logarithmically in Fig. 5. Since the spectrum contains large
peaks at specific frequency ordinates, it can be concluded that strong harmonic components are present.
Finally, the pseudo-periodic behaviour of the noise data is shown by the estimated pole locations (Fig. 6). Since the poles are all located close to the unit circle, the autoregressive process will behave pseudo-periodically.

In this case, the pseudo-periodic behaviour of the noise data leads to a poor condition of the autocovariance matrix. Because the autocovariance function is not available theoretically, as it was in the previous section, the matrix condition can only be estimated. The best estimate can be calculated from the LS autocovariance matrix, because all of its elements are unbiased estimates. This results in

$$\kappa = 9.7620 \times 10^{6}. \qquad (22)$$

This condition number is extremely large. The matrix condition is extremely poor (compare with Fig. 1), as a result of which poor YW parameter estimates can be expected.

The autoregressive model was estimated from the first 2048 data points applying LS, YW and Burg. Using Akaike's Information Criterion (AIC), model order p = 32 was selected. The autoregressive-parameter estimates are given in the Appendix. LS and Burg lead to comparable parameter estimates, while those estimated by YW deviate strongly. The poles of the estimated models were always located inside the unit circle, thereby fulfilling the condition for stability.

Fig. 6 Poles of the estimated autoregressive model (LS). [Figure not reproduced; the poles lie in the complex plane close to the unit circle.]
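The condition estimate of Eq. (22) can be reproduced by assembling the unbiased LS autocovariance matrix of Eq. (6) and taking its 1-norm condition number; a sketch with our own helper name:

```python
import numpy as np

def ls_autocovariance_matrix(y, p):
    """Matrix of unbiased autocovariance estimates c_ij (Eq. (7)) appearing
    on the left-hand side of the LS equations (Eq. (6))."""
    y = np.asarray(y, dtype=float)
    N = len(y)
    C = np.empty((p, p))
    for i in range(1, p + 1):
        for j in range(1, p + 1):
            # c_ij = (1 / (N - p)) * sum_{t = p+1}^{N} y_{t-i} y_{t-j}
            C[i - 1, j - 1] = y[p - i:N - i] @ y[p - j:N - j] / (N - p)
    return C

# kappa = np.linalg.cond(ls_autocovariance_matrix(y, p=32), 1)
```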

The residual variance $\text{var}(\hat{\eta}_t)$ was estimated for each estimation procedure from the estimation realisation using Eq. (14), while the prediction error variance $\text{var}(\hat{\epsilon}_t)$ was estimated using Eq. (15) from a prediction realisation containing 2048 data points. Table I shows the results. As the residual variance and the prediction error variance in case of YW are about twice as large as the respective figures for LS and Burg, it can be concluded that YW does not yield a correct autoregressive model.

Table I Residual and prediction error variance in case of the various estimation procedures

        Residual variance    Prediction error variance
LS      7.3039 × 10^-4       8.7667 × 10^-4
YW      1.4799 × 10^-3       1.9120 × 10^-3
Burg    7.3100 × 10^-4       8.7720 × 10^-4
Applying the autoregressive model to the anomaly noise data, the anomaly can be detected from the prediction error variance only if the LS or Burg parameter estimates are employed. In case of YW, the anomaly cannot be detected from the prediction error variance, due to the poor parameter estimates. Fig. 7 shows the prediction error for the noise section containing the anomaly for each of LS, YW and Burg. In case of LS and Burg, visual inspection of Fig. 7 enables us to locate the start of the anomaly at the increase in the prediction error variance, as indicated by the arrow. No such conclusion can be drawn if the YW parameter estimates are used, which means that YW is unusable for anomaly detection.
Fig. 7 Prediction error for the anomaly noise data using LS, YW and Burg respectively. [Figure not reproduced; three panels showing the prediction error as a function of time.]

V. CONCLUSION
The Yule-Walker method should not be used as a means of autoregressive-parameter estimation if the
autocovariance matrix is poorly conditioned. In that case the relatively small covariance estimate bias can
lead to a large deviation in the estimated parameters, resulting in an invalid model.
A poor autocovariance matrix condition also involves pole locations near the unit circle, as a result of which the autoregressive process exhibits a kind of almost non-stationary, pseudo-periodic behaviour. The variance of the stochastic process will be large because the innovation process is not identically zero, as it would be for a harmonic process.
The least-squares approach as well as Burg's method is still able to estimate the autoregressive model correctly. Least squares should be used with caution, though, as it does not guarantee the estimated autoregressive model to be stable: a small deviation in the parameter estimates may cause the estimated poles to move outside the unit circle, in which case the estimated autoregressive model will be invalid.
This leaves Burg's method as the most reliable estimation technique, as it provides reliable parameter estimates as well as an estimated model guaranteed to be stable.
The preceding conclusions were obtained for univariate autoregressive analysis only. However, due to
the mathematical similarity between univariate and multivariate autoregressive analysis, we expect similar
results for the multivariate case.

ACKNOWLEDGEMENTS
These studies were made as a final report of student research work at the Department of Reactor Physics,
Interfaculty Reactor Institute, Delft University of Technology. Special thanks are due to Dr. Piet Broersen (Delft University of Technology, Faculty of Applied Physics), who introduced me to the topic of autoregressive modelling and its pitfalls.
REFERENCES
Broersen, P.M.T. and Wensink H.E. (1993) IEEE Transactions on Signal Processing, 41, 194-204.
Cybenko, G. (1980) Society for Industrial and Applied Mathematics, Journal on Scientific and Statistical
Computing, 1, 3, 303-319.
Hoogenboom, J.E. and Schoonewelle H. (1994a) Interfaculty Reactor Institute 131-94-020.
Hoogenboom, J.E. and Schoonewelle H. (1994b) Interfaculty Reactor Institute 131-94-020/1.
Journeau, C. (1994) International Atomic Energy Agency IWGFR Extended Co-ordinated Research Program on Acoustic Signal Processing for the Detection of Sodium Boiling or Sodium/Water Reaction in LMFBR. Data for 1994 Benchmark Test. Commissariat à l'Energie Atomique, Cadarache, France.
Oppenheim, A.V. (1978) Applications of Digital Signal Processing. Prentice-Hall, Englewood Cliffs.
Parzen, E. (1961) Technometrics, 3, 167-190.
Priestley, M.B. (1994) Spectral Analysis and Time Series. Academic Press, London.
SMORN-III (1982), Proc. of the Third Specialist Meeting on Reactor Noise, Tokyo, Japan, 26-30 October 1981,
Progress in Nuclear Energy, 9.
SMORN-IV (1985), Proc. of the Fourth Specialist Meeting on Reactor Noise, Dijon, France, 15-19 October 1984,
Progress in Nuclear Energy, 15.
SMORN-V (1988), Proc. of the Fifth Specialist Meeting on Reactor Noise, Munich, F.R.G., 12-16 October 1987,
Progress in Nuclear Energy, 21.
Stewart, G. W. (1973) Introduction to Matrix Computations. Academic Press, New York.
Tretter, S.A. (1976) Introduction to Discrete-Time Signal Processing. Wiley, New York.
APPENDIX: AUTOREGRESSIVE-PARAMETER ESTIMATES
The following table contains the autoregressive-parameter estimates for LS, YW and Burg. The parameters were
estimated using 2048 data samples from the 1994 IAEA benchmark (section IV).
Autoregressive
parameter      LS          YW         Burg
    1       -2.5239     -1.6270     -2.5241
    2        4.4626      1.8549      4.4635
    3       -6.8731     -2.0611     -6.8729
    4        8.8541      1.5264      8.8488
    5      -10.0568     -0.7230    -10.0468
    6       10.1852     -0.1009     10.1662
    7       -9.3866      0.5498     -9.3616
    8        8.0237     -0.5359      7.9984
    9       -6.4827      0.2793     -6.4615
   10        5.1613      0.0795      5.1489
   11       -4.2741     -0.3821     -4.2768
   12        3.6496      0.3384      3.6721
   13       -3.1153     -0.0799     -3.1567
   14        2.5708     -0.1456      2.6278
   15       -1.9859      0.2132     -2.0543
   16        1.3793     -0.1694      1.4607
   17       -0.8539      0.0613     -0.9391
   18        0.5770      0.1235      0.6612
   19       -0.5165     -0.2018     -0.5888
   20        0.5163      0.0929      0.5718
   21       -0.4882      0.0461     -0.5266
   22        0.3595     -0.1229      0.3799
   23       -0.1024      0.1154     -0.1079
   24       -0.2577     -0.0682     -0.2605
   25        0.6098      0.0009      0.6198
   26       -0.8109      0.1031     -0.8215
   27        0.8691     -0.0916      0.8774
   28       -0.7647      0.0246     -0.7709
   29        0.6449      0.1507      0.6481
   30       -0.2553     -0.0077     -0.2580
   31        0.1978      0.2802      0.1989
   32       -0.2009     -0.3362     -0.2011
