IEEE TRANSACTIONS ON BIOMEDICAL ENGINEERING, VOL. 44, NO. 3, MARCH 1997
Linear and Nonlinear ARMA Model Parameter
Estimation Using an Artificial Neural Network
Ki H. Chon,* Member, IEEE, and Richard J. Cohen, Member, IEEE
Abstract—This paper addresses parametric system identification of linear and nonlinear dynamic systems by analysis of the
input and output signals. Specifically, we investigate the relationship between estimation of the system using a feedforward neural
network model and estimation of the system by use of linear
and nonlinear autoregressive moving-average (ARMA) models.
By utilizing a neural network model incorporating a polynomial
activation function, we show the equivalence of the artificial
neural network to the linear and nonlinear ARMA models. We
compare the parameterization of the estimated system using
the neural network and ARMA approaches by utilizing data
generated by means of computer simulations. Specifically, we
show that the parameters of a simulated ARMA system can be
obtained from the neural network analysis of the simulated data
or by conventional least squares ARMA analysis. The feasibility
of applying neural networks with polynomial activation functions
to the analysis of experimental data is explored by application to
measurements of heart rate (HR) and instantaneous lung volume
(ILV) fluctuations.
Index Terms—ARMA model, heart rate, neural network, nonlinear, polynomial.
I. INTRODUCTION
THIS paper addresses the use of a feedforward artificial neural network (ANN) for identifying linear autoregressive moving-average (ARMA) or nonlinear ARMA (NARMA) parameters. Many
promising aspects of neural network modeling have led to its
application to diverse fields ranging from communication [1],
to seismic signal processing [2], to biomedical engineering
[3], [4]. Only recently have neural networks been applied to the field of system identification. Some authors have
suggested the use of recursive neural network models of the
form
$$y(t) = f\bigl[y(t-1), \ldots, y(t-R),\; u(t), \ldots, u(t-P)\bigr] \qquad (1)$$

(where $y(t)$ is the output, $u(t)$ is the input, and $P$ and $R$ are indexes) for the identification of nonlinear dynamic systems
[5]. A recent study by Levin and Narendra has shown that
a generically observable linear system can be realized by a
Manuscript received November 1, 1995; revised October 17, 1996. This
work was supported by NASA under Grant NAGW-3927. K. H. Chon received
support from the National Institutes of Health (NIH) postdoctoral fellowship
09029. Asterisk indicates corresponding author.
*K. H. Chon is with Harvard-MIT Division of Health Sciences and
Technology, Massachusetts Institute of Technology, Cambridge, MA 02139
USA (e-mail: kchon@mit.edu).
R. J. Cohen is with Harvard-MIT Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA 02139 USA.
Publisher Item Identifier S 0018-9294(97)01469-9.
neural network input–output model. In addition, a nonlinear
system can be identified by use of multilayer feedforward
neural networks [6].
In this vein, recent works have shown the equivalence of
ANN to the Volterra series. The efficacy of ANN in estimating
Volterra kernels was illustrated by computer simulation [7], [8]
as well as by application to experimental data [9]. In particular,
polynomial basis functions have been used to represent the
activation function of hidden units for modeling a multilayer
perceptron [10], [11] and for showing the equivalence of ANN
to the Volterra series [12]–[14]. However, this is the first study
where a polynomial function neural network is used to estimate
the parameters of autoregressive moving-average (ARMA) and
NARMA models.
Nonlinear system modeling has tended to focus on
Volterra–Wiener (VW) analysis. Although advances have been
made in algorithms for estimating more accurate VW kernels
[15], because the VW method is nonparametric it tends to
provide a noncompact model structure which is difficult
to interpret mechanistically. To overcome this limitation,
Haber and Keviczky [16], and later Billings and Leontaritis
[17], introduced parametric models of nonlinear systems
termed NARMA, that take the form of nonlinear difference
equations. Unlike VW analysis, NARMA offers compact
model representation. As a result, it has gained popularity in
recent years, most notably in the area of physiological system
modeling [18], [19].
This paper demonstrates the equivalence of ARMA and NARMA with ANN models by showing that the parameters of ARMA and NARMA models can be obtained from analysis of ANN models trained on input–output data.
We utilized feedforward ANN (FANN) models in which
polynomials were used to represent the activation function
of the hidden units. We compare the parameters obtained by
analysis of the FANN models with the actual values of these
parameters used in simulations, and with the values of these
parameters obtained from conventional least squares analysis
using experimental instantaneous lung volume (ILV) and heart
rate (HR) physiological data.
II. METHODS
A. Neural Network
In this section we demonstrate how NARMA parameters
may be obtained from three-layer neural networks (Fig. 1)
utilizing a polynomial representation of the activation function
in the hidden units. Consider a nonlinear, time-invariant,
discrete-time dynamic system represented by the following NARMA model:

$$y(t) = \sum_{i=0}^{P} a_i u(t-i) + \sum_{j=1}^{R} b_j y(t-j) + \sum_{i=0}^{P}\sum_{j=0}^{P} a_{ij} u(t-i)u(t-j) + \sum_{i=1}^{R}\sum_{j=1}^{R} b_{ij} y(t-i)y(t-j) + \sum_{i=0}^{P}\sum_{j=1}^{R} c_{ij} u(t-i)y(t-j) + e(t) \qquad (2)$$

where $P$ and $R$ represent the model orders of the moving-average (linear and nonlinear) and autoregressive (linear and nonlinear) terms, respectively; $a_i$ and $a_{ij}$ are the linear and nonlinear moving-average (MA) terms; $b_j$ and $b_{ij}$ are the linear and nonlinear autoregressive (AR) terms; $c_{ij}$ are the nonlinear cross terms; $y(t)$ is the system output signal; $u(t)$ is the input signal; $e(t)$ is the error; and $i$ and $j$ are indexes. The output signal $y(t)$ from (2) may be expressed as follows:

$$y(t) = \sum_{k=1}^{K} w_k f_k\bigl(z_k(t)\bigr) \qquad (3)$$

where $\{f_k\}$ is a set of basis functions that include past values of $y(t)$ and present and past values of $u(t)$. Referring to (3), we may identify $w_k$ as the weight of the coupling of the $k$th hidden unit to the output unit, $K$ as the number of hidden units, and $z_k(t)$ as the weighted sum of inputs to the $k$th hidden unit, written as

$$z_k(t) = \sum_{i=0}^{P} W_{ki} u(t-i) + \sum_{j=1}^{R} V_{kj} y(t-j). \qquad (4)$$

Fig. 1. A three-layer ANN topology. Note that the weights of the $u$ input neurons are $W$ and the weights of the $y$ input neurons are $V$.

If the basis function in (3) is written as a polynomial function

$$f_k(z) = \sum_{m=0}^{M} c_{km} z^m \qquad (5)$$

then combining (3) and (5) yields

$$y(t) = \sum_{k=1}^{K} w_k \sum_{m=0}^{M} c_{km} \bigl[z_k(t)\bigr]^m. \qquad (6)$$

Substituting the $z_k(t)$ in (4) into (6) and gathering like terms, the following expression is derived:

$$y(t) = \sum_{k=1}^{K} w_k c_{k0} + \sum_{i=0}^{P}\Bigl[\sum_{k=1}^{K} w_k c_{k1} W_{ki}\Bigr] u(t-i) + \sum_{j=1}^{R}\Bigl[\sum_{k=1}^{K} w_k c_{k1} V_{kj}\Bigr] y(t-j) + \sum_{i=0}^{P}\sum_{j=0}^{P}\Bigl[\sum_{k=1}^{K} w_k c_{k2} W_{ki} W_{kj}\Bigr] u(t-i)u(t-j) + \sum_{i=1}^{R}\sum_{j=1}^{R}\Bigl[\sum_{k=1}^{K} w_k c_{k2} V_{ki} V_{kj}\Bigr] y(t-i)y(t-j) + 2\sum_{i=0}^{P}\sum_{j=1}^{R}\Bigl[\sum_{k=1}^{K} w_k c_{k2} W_{ki} V_{kj}\Bigr] u(t-i)y(t-j) + \cdots \qquad (7)$$

Note that (2) and (7) are equivalent, where the linear and nonlinear coefficients in (2) are now represented by neural network weight values and polynomial coefficients in (7). The general form of the first- and second-order NARMA coefficients can be given by

$$a_i = \sum_{k=1}^{K} w_k c_{k1} W_{ki} \qquad (8)$$

$$b_j = \sum_{k=1}^{K} w_k c_{k1} V_{kj} \qquad (9)$$

$$a_{ij} = \sum_{k=1}^{K} w_k c_{k2} W_{ki} W_{kj} \qquad (10)$$

$$b_{ij} = \sum_{k=1}^{K} w_k c_{k2} V_{ki} V_{kj} \qquad (11)$$
TABLE I
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR ARMA MODEL PARAMETER ESTIMATES FOR THE NOISELESS CASE WITH THE EXACT ARMA MODEL ORDERS (AR = 3, MA = 2)
Fig. 2. Segments of (a) input and (b) output signals of (13).
$$c_{ij} = 2\sum_{k=1}^{K} w_k c_{k2} W_{ki} V_{kj} \qquad (12)$$
Given a three-layer neural network having the topology of
Fig. 1, NARMA coefficients can be obtained from the network
provided that the network is properly trained. Although more
efficient algorithms exist for training neural networks than
backpropagation, we have utilized this algorithm since it is
the most recognized. Note that the accuracy of the linear and
nonlinear parameters in the above equations depends on the
selection of the model order and the number of hidden units
in the neural network.
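As a concrete sketch of this read-out, the snippet below builds a small network with first-order (linear) polynomial activations and verifies that its output coincides with the ARMA prediction formed from coefficients computed as in (8) and (9). The weight names W, V, and w follow the Fig. 1 convention; the network here is randomly weighted rather than trained, which suffices to check the algebraic equivalence:

```python
import numpy as np

rng = np.random.default_rng(0)

K = 2          # number of hidden units
P, R = 2, 3    # MA lags u(t)..u(t-P) and AR lags y(t-1)..y(t-R)

W = rng.normal(size=(K, P + 1))   # hidden-unit weights on the u inputs
V = rng.normal(size=(K, R))       # hidden-unit weights on the y inputs
w = rng.normal(size=K)            # hidden-to-output weights
c1 = rng.normal(size=K)           # first-order polynomial coefficients c_k1

# Equations (8) and (9): linear MA and AR coefficients of the equivalent ARMA model.
a = np.einsum('k,k,ki->i', w, c1, W)   # a_i = sum_k w_k c_k1 W_ki
b = np.einsum('k,k,kj->j', w, c1, V)   # b_j = sum_k w_k c_k1 V_kj

# Check: for one input pattern the network output equals the ARMA prediction.
u_lags = rng.normal(size=P + 1)
y_lags = rng.normal(size=R)
z = W @ u_lags + V @ y_lags        # eq. (4): weighted sum per hidden unit
y_net = np.sum(w * c1 * z)         # eq. (3) with f_k(z) = c_k1 * z
y_arma = a @ u_lags + b @ y_lags   # linear terms of eq. (2)
assert np.isclose(y_net, y_arma)
```

With second-order polynomial activations the same bookkeeping yields the quadratic and cross coefficients, at the cost of expanding the squared weighted sums.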
Parameters   True Values   PFNN     Least-Squares
a0             0.700       0.700     0.700
a1            -0.400      -0.400    -0.401
a2            -0.100      -0.100    -0.100
b1             0.250       0.250     0.251
b2            -0.100      -0.100    -0.099
b3             0.400       0.400     0.401
the PFNN method, the prediction obtained using the ordinary
least-squares ARMA analysis [20] was computed with the
same model order as that of the PFNN (AR = 3, MA = 2). As shown in Table I, both methods computed all of the
coefficients in (13) correctly.
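The noiseless recovery just described can be sketched as follows. The simulated system uses the true coefficients listed in Table I; the regression setup (lag stacking, `numpy.linalg.lstsq` solver) is an illustrative assumption, not the paper's exact implementation:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 1024
u = rng.normal(size=N)   # unit-variance GWN excitation, as in the simulation

# True coefficients from Table I: a_i act on the input, b_j on past outputs.
a = np.array([0.700, -0.400, -0.100])
b = np.array([0.250, -0.100, 0.400])

y = np.zeros(N)
for t in range(N):
    for i, ai in enumerate(a):
        if t - i >= 0:
            y[t] += ai * u[t - i]
    for j, bj in enumerate(b, start=1):
        if t - j >= 0:
            y[t] += bj * y[t - j]

# Ordinary least squares: regress y(t) on current/past inputs and past outputs.
t0 = 3
X = np.column_stack([u[t0:], u[t0 - 1:-1], u[t0 - 2:-2],
                     y[t0 - 1:-1], y[t0 - 2:-2], y[t0 - 3:-3]])
theta, *_ = np.linalg.lstsq(X, y[t0:], rcond=None)
# In the noiseless case theta recovers [a0, a1, a2, b1, b2, b3] essentially exactly.
```

Because the noiseless output lies exactly in the span of the regressors, the least-squares solution reproduces the generating coefficients to machine precision, mirroring the agreement in Table I.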
To examine how strongly neural network topology depends
on the assumed model order, the ARMA model order was
increased from ARMA(3, 2) to ARMA(5, 4). Table II shows
the result of the PFNN for ARMA(5, 4). As expected, due to
the incorrect model order assumption, both least-squares and
the PFNN methods provide estimations of the true ARMA
model coefficients of (13) which are not exact. Although
the coefficient estimates are not exactly the same as the
true ARMA coefficients, the normalized mean-square error
(NMSE) for both methods is equally negligibly small.
The next simulation illustrates the case of NARMA parameter estimation using the PFNN method. Using the same
excitation as in (13), the following NARMA model was
utilized:
(14)
III. SIMULATIONS
In this section, a polynomial function neural network
(PFNN) model was trained on simulated data to identify
coefficients of ARMA/NARMA models. To compare the
effectiveness of this approach, parameter estimation via the
ordinary least-squares method [20] was also performed on the
simulated data.
The first test case, 1024 data points, was generated by a linear ARMA model with AR order R = 3 and MA order P = 2, with the MA excitation being uncorrelated Gaussian white noise (GWN) with a variance of one. The following linear ARMA model was utilized:

$$y(t) = 0.700\,u(t) - 0.400\,u(t-1) - 0.100\,u(t-2) + 0.250\,y(t-1) - 0.100\,y(t-2) + 0.400\,y(t-3). \qquad (13)$$
Fig. 2 shows segments of the input signal [Fig. 2(a)] and
the corresponding ARMA model output signal [Fig. 2(b)] of
(13). For the PFNN analysis, the input and output data pair
was segmented into two 512-point data segments. The first
half of the input–output data segment was used to train the
network, and the second half was used to test the predictive
quality of the network. All of the simulations were carried
out in this manner. Two hidden units, defined by first-order
polynomial activation functions, were used for the PFNN
analysis. To facilitate comparison with the results obtained by
To completely test the features of the general NARMA model
shown in (2), (14) includes self-nonlinear input and output
terms as well as the cross-nonlinear term. The data generated
by (14) were analyzed using a PFNN model involving six
hidden units incorporating second-order polynomials. Table III
shows the results of the PFNN and the least-squares NARMA
analyses. The model order for the least-squares was set to AR order R = 3, MA order P = 2, NAR (nonlinear autoregressive) order RR = 3, NMA (nonlinear moving average) order PP = 2,
and the cross-nonlinear model order was set to three. As with
the linear case, both methods correctly estimated the linear
and nonlinear coefficients.
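Least-squares NARMA estimation of this kind reduces to stacking nonlinear regressors (quadratic self terms and cross terms from the model class of (2)) alongside the linear ones. The sketch below uses a hypothetical low-order NARMA system of our own choosing, not the paper's (14), since that equation's lag structure is not reproduced here:

```python
import numpy as np

rng = np.random.default_rng(2)
N = 1024
u = rng.normal(size=N)

# Hypothetical NARMA system (illustrative coefficients and lags):
# y(t) = 0.5 u(t) - 0.2 y(t-1) + 0.1 u(t-1)^2 + 0.15 u(t) y(t-1)
y = np.zeros(N)
for t in range(1, N):
    y[t] = 0.5 * u[t] - 0.2 * y[t - 1] + 0.1 * u[t - 1] ** 2 + 0.15 * u[t] * y[t - 1]

t0 = 1
cols = [
    u[t0:],                   # linear MA term (a0)
    y[t0 - 1:-1],             # linear AR term (b1)
    u[t0 - 1:-1] ** 2,        # nonlinear MA self term
    u[t0:] * y[t0 - 1:-1],    # nonlinear cross term
]
X = np.column_stack(cols)
theta, *_ = np.linalg.lstsq(X, y[t0:], rcond=None)
# theta recovers [0.5, -0.2, 0.1, 0.15] in this noiseless case.
```

The model remains linear in its parameters even though it is nonlinear in the signals, which is why ordinary least squares applies directly.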
To examine the effects of assumed model order on neural
network topology, the NARMA model orders of (14) were increased to AR order R = 5, MA order P = 4, NAR order RR = 5, NMA order PP = 4, and the cross-nonlinear model order was set to five. Surprisingly, both the least-squares and the PFNN approaches provided results equally accurate to those shown in Table III. Increasing the NARMA model orders further (e.g., R = 10, P = 9, NAR = 10, NMA = 9, and the cross-nonlinear model order = 10) did not change the accuracy of
the coefficient estimates for both approaches.
To test further the effects of assumed model order on
neural network topology, a different NARMA model than that
CHON AND COHEN: LINEAR AND NONLINEAR ARMA MODEL PARAMETER ESTIMATION
171
TABLE II
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR ARMA MODEL PARAMETER ESTIMATES FOR THE NOISELESS CASE WITH INCORRECT ARMA MODEL ORDERS (AR = 5, MA = 4)

Parameters   True Values   PFNN     Least-Squares
a0             0.700       0.700     0.700
a1            -0.400      -0.236    -0.230
a2            -0.100      -0.286    -0.243
a3             0.000      -0.029     0.015
a4             0.000       0.006    -0.013
b1             0.250       0.001     0.006
b2            -0.100       0.090     0.025
b3             0.400       0.343     0.359
b4             0.000       0.107     0.104
b5             0.000      -0.053    -0.026
TABLE III
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR NARMA MODEL PARAMETER ESTIMATES FOR THE NOISELESS CASE WITH THE EXACT NARMA MODEL ORDERS (AR = 3, MA = 2, NAR = 2, NMA = 1, AND CROSS-NONLINEAR MODEL ORDER = 1)

Parameters   True Values   PFNN     Least-Squares
a0             0.800       0.800     0.800
a2            -0.130      -0.130    -0.130
b1             0.200       0.200     0.200
b3            -0.110      -0.110    -0.110
a(i,j)1       -0.110      -0.110    -0.110
b(i,j)2        0.130       0.130     0.130
c(i,j)1       -0.180      -0.180    -0.180
TABLE IV
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR NARMA MODEL PARAMETER ESTIMATES FOR THE NOISELESS CASE WITH THE EXACT NARMA MODEL ORDERS (AR = 2, MA = 1, NAR = 2, NMA = 1, AND CROSS-NONLINEAR MODEL ORDER = 1). NOTE THAT THE SAME PARAMETER ESTIMATES ARE OBTAINED WITH THE OVERDETERMINED NARMA MODEL ORDERS

Parameters   True Values   PFNN     Least-Squares
a1             0.300       0.300     0.300
b1             0.500       0.500     0.500
a(i,j)1       -0.200      -0.200    -0.200
b(i,j)2        0.100       0.100     0.100
c(i,j)1        0.250       0.250     0.250

TABLE V
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR ARMA MODEL PARAMETER ESTIMATES FOR THE NOISE-ADDED CASE WITH THE EXACT ARMA MODEL ORDERS (AR = 3, MA = 2)
described by (14) was realized
(15)
The input excitation was the same GWN signal used in (13)
and (14). Despite increasing the model order from the true model orders of AR = 2, MA = 1, NAR = 2, NMA = 1, and cross-nonlinear model order = 2 to AR = 5, MA = 4, NAR = 5, NMA = 4, and cross-nonlinear model order = 5, both model assumptions provided accurate parameter estimates; the result for the model assumption of AR = 2, MA = 1, NAR = 2, NMA = 1, and cross-nonlinear model order = 2 is shown in Table IV.
To test the effectiveness of the PFNN in the case of additive
noise, the processes described by (13) and (14) were again
simulated with additive GWN as follows:
$$y_N(t) = y(t) + \epsilon(t) \qquad (16)$$

where $\epsilon(t)$ is the additive GWN and $y(t)$ is the process generated by (13) or (14). The variance of $\epsilon(t)$ was chosen so that the signal-to-noise ratio (SNR) was three to one. The
model order selection for the least-squares as well as the
selection of the number of hidden units for the PFNN method
were the same as for the noiseless cases of (13) and (14). The
results of noise added to (13) for the PFNN and the least-squares methods are shown in Table V.

Parameters   True Values   PFNN     Least-Squares
a0             0.700       0.718     0.720
a1            -0.400      -0.178    -0.185
a2            -0.100      -0.171    -0.179
b1             0.250       0.039     0.037
b2            -0.100      -0.088    -0.087
b3             0.400       0.206     0.206

As shown in Table V, both methods provide similar parameter estimates. Although the estimated parameters a_i and b_j obtained from both the
PFNN and the least-squares deviate from the actual model
coefficients, the normalized mean-square errors (NMSE) for
both methods are reasonably small, in light of the SNR of
three to one (25.86% for the PFNN and 25.73% for the least-squares). The process of (14) with additive noise also provides similar parameter estimates and NMSE for both methods, as shown in Table VI. The NMSE for the PFNN and the least-squares methods are 28.10% and 28.02%, respectively. The
NMSE value of 28.10%, for example, indicates that 28.10%
of the output response was not accounted for with the PFNN
method.
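The paper does not spell out its NMSE formula; a common definition consistent with the interpretation above (the percentage of output power not accounted for by the prediction) is sketched here as an assumption:

```python
import numpy as np

def nmse(y, y_pred):
    """Normalized mean-square error, as a percentage: the fraction of the
    output signal's power left unexplained by the model prediction.
    (This particular normalization is an assumed, commonly used form.)"""
    y = np.asarray(y, dtype=float)
    y_pred = np.asarray(y_pred, dtype=float)
    return 100.0 * np.sum((y - y_pred) ** 2) / np.sum(y ** 2)

y = np.array([1.0, -2.0, 3.0, -4.0])
perfect = nmse(y, y)            # 0.0: perfect prediction
trivial = nmse(y, np.zeros(4))  # 100.0: predicting zero explains nothing
```

Under this definition, an NMSE of 28.10% means 28.10% of the output power was not captured, matching the reading given in the text.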
Although parameter estimation via the least-squares and
the PFNN methods generated coefficients in addition to those
represented in (14) [since the model order selected for this
process includes other parameters not represented in (14)],
we show only the estimated parameters that were originally
represented in (14). The estimates of those coefficients not
represented in the model of (14) were in all cases negligibly
small.
As shown in Tables I–VI, both methods provide similar ARMA/NARMA parameter estimates. Since the PFNN
method utilizes a backpropagation learning algorithm, the time required to compute parameter estimates is less with the least-squares method. With a backpropagation learning algorithm
the learning is affected by factors such as momentum, the
learning rate, and the initial value of the weight vectors.
The learning rate of the backpropagation can be made more
efficient, however, by utilizing different algorithms, such as
the conjugate backpropagation algorithm. Specifically, this
method does not require the selection of learning rate or
TABLE VI
COMPARISON OF THE PFNN AND THE LEAST-SQUARES METHOD FOR NARMA MODEL PARAMETER ESTIMATES FOR THE NOISE-ADDED CASE WITH THE EXACT NARMA MODEL ORDERS (AR = 3, MA = 2, NAR = 2, NMA = 1, AND CROSS-NONLINEAR MODEL ORDER = 1)

Parameters   True Values   PFNN     Least-Squares
a0             0.800       0.793     0.828
a2            -0.130      -0.128    -0.150
b1             0.200       0.029     0.025
b3            -0.110      -0.066    -0.073
a(i,j)1       -0.110      -0.190    -0.186
b(i,j)2        0.130       0.069     0.068
c(i,j)1       -0.180      -0.083    -0.088
momentum factors and exhibits no oscillatory behavior during
neural network training. Radial basis neural networks can
also be used to achieve faster learning rates than with the
backpropagation algorithm [21]. It should also be noted that
the learning rate of the neural network is affected by the
selection of the ARMA/NARMA model order (or the selection
of the memory length), that is, longer training of the network is
required with larger ARMA/NARMA model orders, and vice
versa.
IV. APPLICATION OF THE PFNN TO EXPERIMENTAL DATA
In this section we demonstrate the use of the PFNN in
analyzing experimentally obtained ILV and HR data. One
of the reasons for our interest in understanding the dynamic
relationship between ILV and HR is that HR fluctuates with
respiration. This is known as respiratory sinus arrhythmia, which has been suggested as an indicator of autonomic function [22], [23]. Over the last 20 years, various linear system analysis methods, such as the power spectrum [24], transfer function [23], and impulse response function [25], have been applied to ILV and HR data. In this paper the aim is not
to elucidate the physiological mechanisms involved in HR
fluctuation with respiration, but to examine whether the PFNN can provide impulse response functions similar to those published in [25].
A. Data Acquisition and Experimental Procedure
The data analyzed in this investigation were obtained from a previously published study [23]. The experimental methods are described in detail in [23] and are briefly summarized here. Data
collection consisted of the surface electrocardiogram (S-ECG)
and changes in ILV from five subjects. Data were collected
for 13 min in the supine position. Measurements of S-ECG
and ILV signals recorded on FM tape were sampled at 360 Hz.
Instantaneous HR at a sampling rate of 3 Hz was then obtained
using the technique described in [26]. A study has shown that
the choice of sampling rate may affect accurate detection of
the QRS complexes, especially if a low sampling rate is chosen
[27]. However, the sampling rate of 360 Hz used in this study
is high enough to allow for accurate detection of the QRS
complexes. The ILV and HR signals were decimated at the
sampling rate of 3 Hz because previous studies have shown
that dynamics of HR fluctuations are located at frequencies
below 0.5 Hz [22]–[25]. Both HR and ILV signals were
then subjected to second-degree polynomial trend removal.
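The trend-removal step described above can be sketched with a polynomial fit. The sampling rate follows the text; the synthetic drifting HR signal and its coefficients are purely illustrative, not the study's data:

```python
import numpy as np

fs = 3.0                      # Hz, the decimated sampling rate used in the study
t = np.arange(1500) / fs      # ~500 s of samples (illustrative record length)

rng = np.random.default_rng(3)
# Hypothetical HR record: slow quadratic drift plus fluctuations.
hr = 60.0 + 0.01 * t + 0.0001 * t ** 2 + rng.normal(scale=2.0, size=t.size)

# Second-degree polynomial trend removal: fit and subtract the slow trend,
# leaving the fluctuations of interest.
coeffs = np.polyfit(t, hr, deg=2)
trend = np.polyval(coeffs, t)
hr_detrended = hr - trend
```

Because the fitted polynomial basis includes a constant term, the detrended residual has essentially zero mean, which is the desired precondition for the subsequent ARMA analysis.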
For the PFNN analysis, the input and output data pair were
segmented into two 500-data-point segments. The first half of
the input–output data segment was used to train the network,
Fig. 3. Averaged impulse response functions obtained from (a) the PFNN and (b) the least-squares ARMA method.
and the last half was used to test the predictive quality of the
network.
Most physiological systems have a purely causal relationship between input and output: the system response cannot
precede the stimulus that causes the response. However, respiratory influences on HR have been shown to have a noncausal
relationship [22], [23], [25]. This interconnection is caused
by the brainstem exerting neural control over both respiration
and HR; HR changes often lead changes in lung volume. To
compensate for the noncausal relationship, an arbitrary delay
of 3.33 s was inserted into the HR signal prior to ARMA
parameter estimation using both the PFNN and the least-squares methods. Once the impulse response function was
obtained, the artificially-introduced delay was accounted for
by shifting the obtained impulse response function by 3.33 s.
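The delay-insertion bookkeeping works out as follows: at the 3-Hz sampling rate, 3.33 s corresponds to 10 samples, and the impulse response computed from the fitted coefficients is simply re-indexed so that lag 0 falls 10 samples in. The ARMA coefficients below are illustrative placeholders, not the fitted values:

```python
import numpy as np

def arma_impulse_response(a, b, n):
    """Impulse response of y(t) = sum_i a[i] u(t-i) + sum_j b[j-1] y(t-j),
    obtained by driving the fitted difference equation with a unit pulse."""
    h = np.zeros(n)
    for t in range(n):
        if t < len(a):
            h[t] += a[t]              # u is a unit impulse at t = 0
        for j, bj in enumerate(b, start=1):
            if t - j >= 0:
                h[t] += bj * h[t - j]
    return h

fs = 3.0                           # Hz
delay_samples = round(3.33 * fs)   # the inserted ~3.33-s delay = 10 samples

a = [0.5, -0.3]   # illustrative MA coefficients (hypothetical, not fitted)
b = [0.2]         # illustrative AR coefficient

h = arma_impulse_response(a, b, 40)
# Undo the artificial delay: sample k of h corresponds to lag (k - 10)/fs seconds,
# so the first 10 samples describe the noncausal (negative-lag) segment.
lags = (np.arange(40) - delay_samples) / fs
```

This is why the plotted impulse responses in Fig. 3 extend to negative time: the pre-delay samples map to lags before t = 0.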
Fig. 3 shows averaged impulse response functions (based on
five subjects) computed from the ARMA coefficients obtained
from analysis of the PFNN [Fig. 3(a)] and from the least-squares ARMA method [Fig. 3(b)]. For both the PFNN and the least-squares ARMA methods, the model order was selected by use of the Akaike information criterion (AIC) [28]. An ARMA model order of (13, 12) was used for both the PFNN and the least-squares method for all five subjects. Furthermore, four hidden
units were used for the PFNN analysis. The two estimates of
the impulse response function are nearly identical, with the
dominant features being the fast positive peak followed by the
underdamped wave. Note in addition the noncausal segment
of the impulse response functions extending to approximately 2 s before time t = 0. All three of these features as well as
the amplitude of the impulse response function obtained by both methods are similar to those published [22], [23], [25].
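The AIC-based order selection mentioned above trades fit quality against parameter count. The exact criterion form used in [28] is for autoregressive fitting; the least-squares variant below is an assumed, commonly used form shown only to illustrate the trade-off:

```python
import numpy as np

def aic(residuals, n_params):
    """Akaike information criterion in the common least-squares form
    N*ln(RSS/N) + 2k. (Assumed form; [28] states the criterion for
    autoregressive model fitting.)"""
    r = np.asarray(residuals, dtype=float)
    n = r.size
    rss = np.sum(r ** 2)
    return n * np.log(rss / n) + 2 * n_params

# A richer model is preferred only if its residual reduction outweighs
# the 2k penalty for its extra parameters.
r_few = np.full(100, 0.5)    # larger residuals, 5 parameters
r_many = np.full(100, 0.45)  # slightly smaller residuals, 25 parameters
```

Here the small fit improvement does not justify 20 extra parameters, so the lower-order model wins; the selected (13, 12) order is where this balance settled for the ILV–HR records.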
To compare the performance of the two methods quantitatively, model predictions based on the linear ARMA model
were computed for both methods using the last segment
of the input signal (ILV). Note that the first segment of
input/output data was used to estimate the coefficients of the
ARMA model for both methods. The average NMSE for the
PFNN and the least-squares methods are 38.86 and 41.04%,
respectively. Slightly better model prediction is obtained with
the PFNN than the least-squares method. Similarly, better
model prediction is also achieved with PFNN using renal blood
pressure and blood flow data [9].
V. CONCLUSIONS
We have shown that from ANN models which utilize polynomial activation functions, equivalent ARMA/NARMA models can be obtained. Computer simulations have shown that
neural networks employing polynomial activation functions
can provide accurate ARMA and NARMA model parameter
estimation. In dealing with lung volume and HR data, it
appears that a PFNN-based neural network provides slightly
better NMSE values than does the least-squares method.
Here we have dealt with single input–single output models,
but the results can be generalized to multiple input–multiple
output systems. In physiological system modeling, one is
often interested in determining separate linear and nonlinear
contributions to the overall system response. By using polynomial activation functions, linear and higher-order nonlinear
contributions to the system response can be separately identified. For example, linear contributions to the system response
can be obtained by the use of the first-order polynomial
function, and the quadratic contribution to the system response
can be obtained by the use of the second-order polynomial
function. Higher than second-order nonlinear contribution to
the system response can be easily obtained by expanding to a
higher polynomial order. Well-known nonlinear techniques for estimating nonlinear system responses, such as the cross-correlation method [29] and the Laguerre expansion technique [15], are currently limited to no more than a third-order nonlinear system model. Future research in this area may involve the
selection of the appropriate number of hidden units to provide
accurate parameter estimation. Another interesting direction
of inquiry would be to find a recursive algorithm to reduce
the computational burden associated with the neural network
so that an on-line implementation of the neural network can
be performed, for example in a clinical setting. Future research will be needed to examine the advantages/disadvantages
of identifying systems with ARMA/NARMA least squares
methods versus neural network methods.
REFERENCES
[1] B. Widrow and S. D. Stearns, Adaptive Signal Processing. Englewood
Cliffs, NJ: Prentice Hall, Inc., 1985.
[2] L. X. Wang and J. M. Mendel, “Adaptive minimum prediction error
deconvolution and source wavelet estimation using Hopfield neural
networks,” Geophys., vol. 57, pp. 670–679, 1992.
[3] S. Srinivasan, R. E. Gander, and H. C. Wood, “A movement pattern
generator model using artificial neural networks,” IEEE Trans. Biomed.
Eng., vol. 39, pp. 716–722, 1992.
[4] Q. Xue, Y. H. Hu, and W. J. Tompkins, “Neural-network-based adaptive
matched filtering for QRS detection,” IEEE Trans. Biomed. Eng., vol.
39, pp. 317–329, 1992.
[5] K. S. Narendra and K. Parthasarathy, “Identification and control of dynamical systems using neural networks,” IEEE Trans. Neural Networks,
vol. 1, pp. 4–27, 1990.
[6] A. U. Levin and K. S. Narendra, “Identification using feedforward
networks,” Neural Computation, vol. 7, pp. 349–357, 1995.
[7] J. Wray and G. G. R. Green, “Calculation of the Volterra kernels of
nonlinear dynamic systems using an artificial neural network,” Biol.
Cybern., vol. 71, pp. 187–195, 1994.
[8] V. Z. Marmarelis and X. Zhao, “On the relation between Volterra models
and feedforward artificial neural networks,” in Advanced Methods of
Physiological System Modeling, vol. III, V. Z. Marmarelis, Ed. Los
Angeles, CA: Plenum, 1994, pp. 243–259.
[9] K. H. Chon, N. H. Holstein-Rathlou, D. J. Marsh, and V. Z. Marmarelis, “On the efficacy of artificial neural network analysis of renal
autoregulation in rats,” submitted for publication.
[10] K. Rohani, M.-S. Chen, and M. T. Manry, “Neural subnet design by
direct polynomial mapping,” IEEE Trans. Neural Networks, vol. 3, pp.
1024–1026, 1992.
[11] M.-S. Chen and M. T. Manry, “Conventional modeling of the multilayer perceptron using polynomial basis functions,” IEEE Trans. Neural
Networks, vol. 4, pp. 164–166, 1993.
[12] S. Osowski and V. Q. Thanh, “Multilayer neural network structure as
Volterra filter,” IEEE Int. Symp. Circ. Syst., vol. 6, pp. 253–256, 1994.
[13] S. A. Billings and Q. M. Zhu, “Model validation tests for multivariable
nonlinear models including neural networks,” Int. J. Contr., vol. 62, pp.
749–766, 1995.
[14] S. Chen, S. A. Billings, C. F. N. Cowan, and P. M. Grant, “Practical
identification of NARMAX models using radial basis functions,” Int. J.
Contr., vol. 52, pp. 1327–1350, 1990.
[15] V. Z. Marmarelis, “Identification of nonlinear biological systems using Laguerre expansion of kernels,” Ann. Biomed. Eng., vol. 21, pp. 573–589, 1993.
[16] R. Haber and L. Keviczky, “Identification of nonlinear dynamic systems,” in IFAC Symp. Ident. Syst. Paramet. Est., 1976, pp. 62–112.
[17] S. A. Billings and I. J. Leontaritis, “Parametric estimation technique for
nonlinear systems,” in 6th IFAC Symp. Ident. Paramet. Est., 1982, pp.
427–432.
[18] K. H. Chon, N. H. Holstein-Rathlou, D. J. Marsh, and V. Z. Marmarelis,
“Parametric and nonparametric nonlinear modeling of renal autoregulation dynamics,” in Advanced Methods of Physiological System Modeling,
vol. III, V. Z. Marmarelis, Ed. Los Angeles, CA: Plenum, 1994, pp.
195–210.
[19] M. J. Korenberg, “A robust orthogonal algorithm for system identification and time series analysis,” Biol. Cybern., vol. 60, pp. 267–276,
1989.
[20] G. Strang, Linear Algebra and Its Applications. NY: Academic, 1980.
[21] S. Chen, C. F. N. Cowan, and P. M. Grant, “Orthogonal least squares learning algorithm for radial basis function networks,” IEEE Trans. Neural Networks, vol. 2, pp. 302–309, 1991.
[22] R. D. Berger, J. P. Saul, and R. J. Cohen, “Transfer function analysis of
autonomic regulation—I: Canine atrial rate response,” Amer. J. Physiol.,
vol. 256, pp. H142–H152, 1989.
[23] J. P. Saul, R. D. Berger, M. H. Chen, and R. J. Cohen, “Transfer function
analysis of autonomic regulation—II: Respiratory sinus arrhythmia,”
Amer. J. Physiol., vol. 256, pp. H153–H161, 1989.
[24] S. Akselrod, D. Gordon, F. A. Ubel, D. C. Shannon, A. C. Barger,
and R. J. Cohen, “Power spectrum analysis of heart rate fluctuation:
A quantitative probe of beat-to-beat cardiovascular control,” Sci., vol.
213, pp. 220–222, 1981.
[25] K. Yana, J. P. Saul, R. D. Berger, M. H. Perrot, and R. J. Cohen, “A
time domain approach for the fluctuation analysis of heart rate related
to instantaneous lung volume,” IEEE Trans. Biomed. Eng., vol. 40, pp.
74–81, 1993.
[26] R. D. Berger, S. Akselrod, D. Gordon, and R. J. Cohen, “An efficient
algorithm for spectral analysis of heart rate variability,” IEEE Trans.
Biomed. Eng., vol. BME-33, pp. 900–904, 1986.
[27] M. Merri, D. C. Farden, J. G. Mottley, and E. L. Titlebaum, “Sampling
frequency of the electrocardiogram for spectral analysis of the heart
rate variability,” IEEE Trans. Biomed. Eng., vol. 37, pp. 99–106,
1990.
[28] H. Akaike, “Power spectrum estimation through autoregressive model fitting,” Ann. Inst. Stat. Math., vol. 21, pp. 407–419, 1969.
[29] Y. W. Lee and M. Schetzen, “Measurement of the Wiener kernels
of a nonlinear system by cross-correlation,” Int. J. Contr., vol. 2, pp.
237–254, 1965.
Ki H. Chon (M’96) received the B.S. degree
in electrical engineering from the University of
Connecticut, Storrs, the M.S. degree in biomedical
engineering from the University of Iowa, Iowa City,
and the M.S. degree in electrical engineering and
the Ph.D. degree in biomedical engineering from
the University of Southern California, Los Angeles.
Currently, he is a Post-Doctoral Fellow at the
Harvard-Massachusetts Institute of Technology
(MIT) division of Health Sciences and Technology,
Cambridge. His current research interests include
biomedical signal processing and identification and modeling of physiological
systems.
Richard J. Cohen (M’85) was born in 1951 in
Boston, MA. He received the B.S. degree in chemistry and physics from Harvard University, Cambridge, MA, in 1971, the Ph.D. degree in physics
from the Massachusetts Institute of Technology,
Cambridge, in 1976, and the M.D. degree from
Harvard Medical School in 1976. He completed
clinical training in internal medicine at the Brigham
and Women’s Hospital in 1979.
He is currently a Professor in the Harvard Division of Health Sciences and Technology and Director of the NASA Center for Quantitative Cardiovascular Physiology,
Modeling, and Data analysis. His research interests include cardiovascular
physiology and macromolecular interactions.