
Digital Signal Processing 20 (2010) 664–677


Gradient based and least-squares based iterative identification methods for OE and OEMA systems ✩

Feng Ding a,∗, Peter X. Liu b, Guangjun Liu c

a School of Communication and Control Engineering, Jiangnan University, Wuxi 214122, PR China
b Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada K1S 5B6
c Department of Aerospace Engineering, Ryerson University, Toronto, Canada M5B 2K3

Article history: Available online 20 November 2009

Keywords: Recursive identification; Parameter estimation; Least squares; Stochastic gradient; Iterative algorithms; Missing data; Output error (OE) models; Output error moving average (OEMA) models

Abstract: Gradient based and least-squares based iterative identification algorithms are developed for output error (OE) and output error moving average (OEMA) systems. Compared with recursive approaches, the proposed iterative algorithms use all the measured input–output data at each iteration, and thus can produce highly accurate parameter estimates. The basic idea of the iterative methods is to adopt the interactive estimation theory: the parameter estimates that rely on unknown variables are computed by using the estimates of those unknown variables, which are in turn obtained from the preceding parameter estimates. The simulation results confirm the theoretical findings.

© 2009 Elsevier Inc. All rights reserved.

1. Introduction

Parameter estimation is a central theme of system identification, and many identification methods have been developed over the decades [1–4], e.g., the multi-innovation stochastic gradient algorithms for linear regression models [5] and for pseudo-linear regression models [6], the auxiliary model based recursive least squares algorithm [7] and the auxiliary model based stochastic gradient algorithm [8] for dual-rate sampled-data systems, and the hierarchical least squares algorithm [9], the hierarchical stochastic gradient algorithm [10] and the multi-innovation stochastic gradient algorithm [3] for multi-input multi-output systems.
Many estimation algorithms can identify the parameters of systems in the presence of noise, including the recursive least squares (LS) [11,12], extended LS [11,13,14], output error methods [15,16], and bias correction or bias elimination LS [17–23]. Most of these contributions, however, employ recursive identification, and few consider iterative identification with a finite data batch. (In Section 3, we clarify the slight difference between the terms iterative and recursive as used in this paper.) Therefore, a natural question is whether there exist iterative algorithms that can make sufficient use of all the measured information so that the accuracy of parameter estimation can be improved. The answer is yes.
Based on the work in [24], this paper develops highly accurate system identification algorithms using iterative techniques. We frame our study around the identification of output error (OE) and output error moving average (OEMA) systems, for which the information vector in the corresponding identification models contains unknown variables; this is the main


✩ This work was supported in part by the Natural Sciences and Engineering Research Council of Canada (NSERC), in part by the Canada Research Chair Program, and in part by the National Natural Science Foundation of China under Grant 60874020. This research was completed while the first author visited Carleton University (Ottawa, Canada) and Ryerson University (Toronto, Canada).
∗ Corresponding author.
E-mail addresses: fding@jiangnan.edu.cn (F. Ding), xpliu@sce.carleton.ca (P.X. Liu), gjliu@ryerson.ca (G. Liu).

doi:10.1016/j.dsp.2009.10.012

Fig. 1. A system described by the output error model.

difficulty of identification. The fundamental idea of the proposed methods is to use the iterative techniques to deal with
the identification problem of the OE and OEMA systems by adopting the interactive estimation theory: when computing
the parameter estimates, the unknown variables in the information vector are replaced with their corresponding estimates
at the current iteration, and these estimates of the unknown variables are again computed by the preceding parameter
estimates (this is a hierarchical computation process). Based on this idea, we present a gradient based iterative algorithm and a least-squares based iterative identification algorithm. The main advantage of such iterative algorithms is that they can produce
more accurate parameter estimation than existing recursive methods, e.g., [15,16]. Compared with the bias compensation
(bias correction or bias elimination) least squares which are also suitable for the output error systems [17,19], the iterative
algorithms in this paper can produce highly accurate estimation. In addition, the iterative algorithms in this paper can be
extended to other models such as ARMAX and Box–Jenkins models.
The paper is organized as follows. Section 2 briefly introduces the output error methods. The iterative identification algorithms for OE systems are derived in Section 3. Section 4 extends the iterative algorithms to OEMA systems and presents a gradient based iterative algorithm and a least-squares based iterative identification algorithm based on the auxiliary model idea. Section 5 gives an illustrative example. Finally, concluding remarks are given in Section 6.

2. The pseudo-linear regression or output error methods

Consider the output error system [12,15–18], depicted in Fig. 1,


y(t) = [B(z)/A(z)] u(t) + v(t),   (1)
where {u (t )} and { y (t )} are the input and output sequences of the system, respectively, { v (t )} is a white noise sequence
with zero mean, and A ( z) and B ( z) are the polynomials, of known orders (na , nb ), in the unit backward shift operator z−1
[i.e., z−1 y (t ) = y (t − 1)], defined by

A ( z) = 1 + a1 z−1 + a2 z−2 + · · · + ana z−na ,


B ( z) = b1 z−1 + b2 z−2 + · · · + bnb z−nb .
The goal is to develop iterative algorithms to estimate the unknown parameters (ai, bi) using the available input–output measurement data {u(t), y(t): t = 0, 1, 2, . . . , L}, and to improve the accuracy of parameter estimation. Without loss of generality, assume that u(t) = 0, y(t) = 0 and v(t) = 0 for t ≤ 0.
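To make the setup concrete, the model (1) with the zero initial conditions above can be simulated as in the following sketch (our own NumPy rendering; the function name and array conventions are assumptions, not from the paper):

```python
import numpy as np

def simulate_oe(a, b, u, v):
    """Simulate the OE model (1): A(z)x(t) = B(z)u(t), y(t) = x(t) + v(t).

    a = [a1, ..., a_na], b = [b1, ..., b_nb]; u[t] and v[t] hold u(t) and
    v(t) for t = 0, 1, ..., L, and all signals are zero for t <= 0.
    Returns the inner variable x and the measured output y."""
    na, nb, L = len(a), len(b), len(u) - 1
    x = np.zeros(L + 1)
    for t in range(1, L + 1):
        # x(t) = -sum_i a_i x(t-i) + sum_i b_i u(t-i)
        past_x = [x[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
        past_u = [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]
        x[t] = -np.dot(a, past_x) + np.dot(b, past_u)
    return x, x + v
```

With a white-noise input and noise sequence, the returned y plays the role of the measured data used by all the algorithms below.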
In order to show the advantages of the iterative identification methods we will propose, we briefly discuss the comparable pseudo-linear regression algorithm [12] or output error algorithm [15,16], i.e., the auxiliary model based recursive least squares algorithm [7] and the auxiliary model based stochastic gradient algorithm [8] for output error models.
Let
x(t) := [B(z)/A(z)] u(t),   or   A(z)x(t) = B(z)u(t).   (2)
Define the parameter vector θ and information vector ϕ(t) as

θ := [a1, a2, . . . , a_{na}, b1, b2, . . . , b_{nb}]^T ∈ R^{na+nb},

ϕ(t) := [−x(t − 1), −x(t − 2), . . . , −x(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T ∈ R^{na+nb}.   (3)
Then (2) can be written as

x(t ) = ϕ T (t )θ . (4)
Using (2) and (4), Eq. (1) gives

y (t ) = x(t ) + v (t ) = ϕ T (t )θ + v (t ). (5)
The available measurement data are {u (t ), y (t ): t = 0, 1, 2, . . . , L }. From Fig. 1, x(t ) is the inner variable and thus unmea-
sured. That is, x(t − i ) in the information vector ϕ (t ) is unknown, so the standard least squares or stochastic gradient
methods cannot estimate directly the parameter vector θ in (5).

Let θ̂ (t ) be the recursive estimate of θ at time t, x̂(t ) be the estimate of x(t ) (i.e., the output of the auxiliary model) and
define
ϕ̂(t) := [−x̂(t − 1), −x̂(t − 2), . . . , −x̂(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T ∈ R^{na+nb}.
Then minimizing the cost function

J_1(θ) := E{[y(t) − ϕ̂^T(t)θ]^2}

yields the following auxiliary model based stochastic gradient (AM-SG) algorithm [8,11]:

θ̂(t) = θ̂(t − 1) + [ϕ̂(t)/r(t)][y(t) − ϕ̂^T(t)θ̂(t − 1)],   (6)
r(t) = r(t − 1) + ‖ϕ̂(t)‖^2,   r(0) = 1,   (7)
ϕ̂(t) = [−x̂(t − 1), −x̂(t − 2), . . . , −x̂(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T,   (8)
x̂(t) = ϕ̂^T(t)θ̂(t),   t = 1, 2, . . . , L.   (9)
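The AM-SG recursion (6)–(9) can be sketched as follows (a minimal NumPy rendering under our own array conventions; `p0` mirrors the large constant used for initialization):

```python
import numpy as np

def am_sg(u, y, na, nb, p0=1e6):
    """Auxiliary model based stochastic gradient (AM-SG), eqs. (6)-(9).
    u[t], y[t] hold u(t), y(t) for t = 0..L; the unknown x(t-i) in phi(t)
    is replaced by the auxiliary-model output xhat(t-i)."""
    L = len(y) - 1
    theta = np.ones(na + nb) / p0        # small initial estimate
    xhat = np.zeros(L + 1)               # auxiliary-model output estimates
    r = 1.0                              # r(0) = 1
    for t in range(1, L + 1):
        phi = np.array(
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)])
        r = r + phi @ phi                                  # eq. (7)
        theta = theta + phi / r * (y[t] - phi @ theta)     # eq. (6)
        xhat[t] = phi @ theta                              # eq. (9)
    return theta
```

As the text notes, the 1/r(t) gain decays, so the estimates move slowly; this is the behavior the iterative algorithms of Section 3 aim to improve on.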

Minimizing the cost function

J_2(θ) := Σ_{i=1}^{t} [y(i) − ϕ̂^T(i)θ]^2

leads to the following auxiliary model based recursive least squares (AM-RLS) algorithm [7,12]:

θ̂(t) = θ̂(t − 1) + P(t)ϕ̂(t)[y(t) − ϕ̂^T(t)θ̂(t − 1)],   (10)
P(t) = P(t − 1) − P(t − 1)ϕ̂(t)ϕ̂^T(t)P(t − 1) / [1 + ϕ̂^T(t)P(t − 1)ϕ̂(t)],   (11)
ϕ̂(t) = [−x̂(t − 1), −x̂(t − 2), . . . , −x̂(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T,   (12)
x̂(t) = ϕ̂^T(t)θ̂(t),   t = 1, 2, . . . , L.   (13)

The auxiliary model (or reference model) based stochastic gradient identification algorithm in (6)–(9) is also called the
pseudo-linear regression stochastic gradient algorithm and the auxiliary model (or reference model) based recursive least
squares identification algorithm in (10)–(13) is also called the pseudo-linear regression least squares algorithm [12,15,16].
To initialize these two algorithms, we take θ̂(0) to be a small real vector, e.g., θ̂(0) = 1_n/p0, with 1_n being an n-dimensional column vector whose elements are all 1 and p0 normally a large positive number (e.g., p0 = 10^6), and P(0) = p0 I with I an identity matrix of appropriate dimension.
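Likewise, the AM-RLS recursion (10)–(13), including the P(0) = p0 I initialization described above, can be sketched as (our own rendering; names are assumptions):

```python
import numpy as np

def am_rls(u, y, na, nb, p0=1e6):
    """Auxiliary model based recursive least squares (AM-RLS), eqs. (10)-(13).
    u[t], y[t] hold u(t), y(t) for t = 0..L; P(0) = p0*I, theta(0) = 1_n/p0."""
    n = na + nb
    L = len(y) - 1
    theta = np.ones(n) / p0
    P = p0 * np.eye(n)
    xhat = np.zeros(L + 1)               # auxiliary-model output estimates
    for t in range(1, L + 1):
        phi = np.array(
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)])
        Pphi = P @ phi
        P = P - np.outer(Pphi, Pphi) / (1.0 + phi @ Pphi)   # eq. (11)
        theta = theta + P @ phi * (y[t] - phi @ theta)      # eq. (10)
        xhat[t] = phi @ theta                               # eq. (13)
    return theta
```

Note that P(t) is updated first so that eq. (10) uses the current P(t), matching the order of the equations.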

3. The iterative algorithms

In general, the recursive algorithms are suitable for on-line identification and the iterative algorithms are used for off-
line identification. The iterative algorithm employs the idea of updating the estimate θ̂ using a fixed data batch with a finite
length L. In this paper, in order to distinguish on-line from off-line calculation, we use iterative with subscript k, e.g., θ̂ k to
be given later, for off-line algorithms, and recursive with no subscript, e.g., θ̂(t ) before, for on-line ones [25].
For a finite set of measured input–output data {u(t), y(t): t = 0, 1, 2, . . . , L}, iterative identification methods are useful for improving parameter estimation accuracy, which is the motivation for studying iterative identification in this paper.
Although the above AM-SG and AM-RLS algorithms can generate the parameter estimates of the output error model in (5), their main drawback is that when computing the parameter estimate vector θ̂(t) up to time t (1 ≤ t ≤ L), the algorithms use only the measured data {u(i), y(i): i = 0, 1, 2, . . . , t} and do not use the data {u(i), y(i): i = t + 1, t + 2, . . . , L}. A natural question is whether there exist new iterative algorithms that can take sufficient advantage of all the measured data {u(i), y(i): i = 0, 1, 2, . . . , L} at each iteration so that the parameter estimation accuracy can be greatly improved. The answer is yes. Next, we investigate iterative identification approaches for output error systems. Note that we use θ̂_k for the estimate of θ in iterative algorithms (k being the iteration variable) and θ̂(t) for the estimate of θ in recursive algorithms, as above.

3.1. The gradient based iterative algorithm

Define the stacked output vector Y ( L ), stacked information matrix Φ( L ) and white noise vector V ( L ) as

Y(L) := [y(L), y(L − 1), . . . , y(1)]^T ∈ R^L,   (14)
Φ(L) := [ϕ(L), ϕ(L − 1), . . . , ϕ(1)]^T ∈ R^{L×(na+nb)},   (15)
V(L) := [v(L), v(L − 1), . . . , v(1)]^T ∈ R^L.   (16)
Note that Y(L) and Φ(L) contain all the measured data {u(t), y(t): t = 0, 1, 2, . . . , L}. From (5) and (14)–(16), we have
Y ( L ) = Φ( L )θ + V ( L ). (17)
Notice that V ( L ) is a white noise vector with zero mean and define a quadratic criterion function [12],
J_3(θ) := ‖Y(L) − Φ(L)θ‖^2.   (18)

Let k = 1, 2, 3, . . . be an iteration variable, and θ̂_k be the iterative estimate of θ. For the optimization problem in (18), minimizing J_3(θ) using the negative gradient search leads to the following iterative algorithm for computing θ̂_k:

θ̂_k = θ̂_{k−1} − (μ_k/2) grad[J_3(θ̂_{k−1})] = θ̂_{k−1} + μ_k Φ^T(L)[Y(L) − Φ(L)θ̂_{k−1}],   (19)
where μ_k is the iterative step-size or convergence factor to be given later. Here, a difficulty arises because Φ(L) in (19) [that is, ϕ(t), t = L, L − 1, . . . , 1] contains the unknown inner variables x(t − i), so the iterative solution θ̂_k of θ cannot be computed directly from (19). To overcome this difficulty, the approach here is based on the hierarchical identification principle [9,10] (also called the interactive estimation theory [26]). Let x̂_k(t − i) be the estimate of x(t − i) at iteration k, and let ϕ̂_k(t) denote the information vector obtained by replacing x(t − i) in (3) with x̂_{k−1}(t − i), i.e.,
ϕ̂_k(t) := [−x̂_{k−1}(t − 1), −x̂_{k−1}(t − 2), . . . , −x̂_{k−1}(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T ∈ R^{na+nb}.   (20)

Replacing ϕ (t ) and θ in (4) with ϕ̂ k (t ) and θ̂ k , respectively, the estimate x̂k (t ) of x(t ) can be computed by
x̂_k(t) = ϕ̂_k^T(t)θ̂_k.   (21)
Define

Φ̂_k(L) := [ϕ̂_k(L), ϕ̂_k(L − 1), . . . , ϕ̂_k(1)]^T ∈ R^{L×(na+nb)}.   (22)
Replacing Φ(L) in (19) with Φ̂_k(L) yields the following gradient based iterative identification algorithm for estimating θ:

θ̂_k = θ̂_{k−1} + μ_k Φ̂_k^T(L)[Y(L) − Φ̂_k(L)θ̂_{k−1}],   k = 1, 2, 3, . . . ,   (23)

or, equivalently,

θ̂_k = [I − μ_k Φ̂_k^T(L)Φ̂_k(L)]θ̂_{k−1} + μ_k Φ̂_k^T(L)Y(L).

In order to guarantee the convergence of θ̂_k, all eigenvalues of [I − μ_k Φ̂_k^T(L)Φ̂_k(L)] have to be inside the unit circle. One conservative choice of μ_k is to satisfy

0 < μ_k ≤ 2/λ_max[Φ̂_k^T(L)Φ̂_k(L)].   (24)
Eqs. (20)–(24) are referred to as the gradient based iterative identification algorithm for output error systems (abbreviated as the OE-GI algorithm) and can be summarized as follows:

θ̂_k = θ̂_{k−1} + μ_k Φ̂_k^T(L)[Y(L) − Φ̂_k(L)θ̂_{k−1}],   k = 1, 2, 3, . . . ,   (25)
Y(L) := [y(L), y(L − 1), . . . , y(1)]^T,   (26)
Φ̂_k(L) = [ϕ̂_k(L), ϕ̂_k(L − 1), . . . , ϕ̂_k(1)]^T,   (27)
ϕ̂_k(t) = [−x̂_{k−1}(t − 1), −x̂_{k−1}(t − 2), . . . , −x̂_{k−1}(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T,   (28)
x̂_k(t) = ϕ̂_k^T(t)θ̂_k,   t = 1, 2, . . . , L,   (29)
0 < μ_k ≤ 2/λ_max[Φ̂_k^T(L)Φ̂_k(L)].   (30)

The OE-GI algorithm in (25)–(30) adopts the idea of updating the estimate θ̂ using a fixed data batch with the data
length L at each iteration, and thus has higher parameter estimation accuracy than the AM-SG algorithm.
To summarize, we list the steps involved in the OE-GI algorithm to compute θ̂ k as k increases:

1. Collect the input–output data {u(t), y(t): t = 0, 1, 2, . . . , L} and form Y(L) by (26).
2. To initialize, let k = 1, θ̂_0 = 1_n/p0, x̂_0(t) = 1/p0 for all t, p0 = 10^6 and n := dim θ.
3. Form ϕ̂_k(t) and Φ̂_k(L) by (28) and (27), respectively.
4. Choose μ_k according to (30), and update the estimate θ̂_k by (25).
5. Compute x̂_k(t) by (29).
6. Compare θ̂_k with θ̂_{k−1}: if they are sufficiently close, or for some pre-set small ε, if

   ‖θ̂_k − θ̂_{k−1}‖ ≤ ε,

   then terminate the procedure and take the current k and estimate θ̂_k; otherwise, increase k by 1 and go to step 3.
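The six steps above can be sketched in NumPy as follows (our own rendering; we take μ_k = 1/λ_max, which satisfies the bound (30)):

```python
import numpy as np

def oe_gi(u, y, na, nb, iters=100, p0=1e6):
    """Gradient based iterative (OE-GI) algorithm, eqs. (25)-(30).
    u[t], y[t] hold u(t), y(t) for t = 0..L; the whole data batch is
    used at every iteration k."""
    L = len(y) - 1
    theta = np.ones(na + nb) / p0             # theta_0 = 1_n / p0 (step 2)
    xhat = np.full(L + 1, 1.0 / p0)           # xhat_0(t) = 1/p0 (step 2)
    Y = np.asarray(y, dtype=float)[1:][::-1]  # [y(L), ..., y(1)], eq. (26)
    for k in range(iters):
        # rows are phihat_k(t) for t = L, L-1, ..., 1, eqs. (27)-(28)
        Phi = np.array([
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]
            for t in range(L, 0, -1)])
        # step size: mu_k = 1/lambda_max strictly satisfies the bound (30)
        mu = 1.0 / np.linalg.eigvalsh(Phi.T @ Phi).max()
        theta = theta + mu * Phi.T @ (Y - Phi @ theta)    # eq. (25)
        xhat[1:] = (Phi @ theta)[::-1]                    # eq. (29)
    return theta
```

A fixed iteration count stands in for the ε-based stopping test of step 6; adding that test is a one-line change.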

3.2. The least-squares based iterative algorithm

Another iterative algorithm is the least-squares based iterative algorithm to be discussed in the following.
Provided that ϕ(t) is persistently exciting, minimizing J_3(θ) in (18) gives the least-squares estimate of θ:

θ̂ = [Φ^T(L)Φ(L)]^{−1} Φ^T(L)Y(L).   (31)

However, a similar difficulty arises in that Φ(L) in (31) [that is, ϕ(t), t = L, L − 1, . . . , 1] contains the unknown inner variables x(t − i), and so the estimate of θ cannot be computed from (31) either. A derivation similar to that of the OE-GI algorithm results in the following least-squares based iterative identification algorithm for estimating θ of the output error systems, abbreviated as the OE-LSI algorithm:

θ̂_k = [Φ̂_k^T(L)Φ̂_k(L)]^{−1} Φ̂_k^T(L)Y(L),   (32)
Y(L) = [y(L), y(L − 1), . . . , y(1)]^T,   (33)
Φ̂_k(L) = [ϕ̂_k(L), ϕ̂_k(L − 1), . . . , ϕ̂_k(1)]^T,   (34)
ϕ̂_k(t) = [−x̂_{k−1}(t − 1), −x̂_{k−1}(t − 2), . . . , −x̂_{k−1}(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T,   (35)
x̂_k(t) = ϕ̂_k^T(t)θ̂_k,   t = 1, 2, . . . , L.   (36)

Like the OE-GI algorithm, the OE-LSI algorithm also makes full use of all the measured input–output data {u (i ), y (i ): i =
0, 1, 2, . . . , t , . . . , L } at each iteration k = 1, 2, 3, . . . and thus has higher parameter estimation accuracy than the AM-RLS
algorithm.
The steps of computing the estimate θ̂_k in the OE-LSI algorithm are as follows.

1. Collect the input–output data {u(t), y(t): t = 0, 1, 2, . . . , L} and form Y(L) by (33).
2. To initialize, let k = 1 and set x̂_0(t) to a random number for all t.
3. Form ϕ̂_k(t) and Φ̂_k(L) by (35) and (34), respectively.
4. Update the estimate θ̂_k by (32).
5. Compute x̂_k(t) by (36).
6. Compare θ̂_k with θ̂_{k−1}: if they are sufficiently close, or for some pre-set small ε, if

   ‖θ̂_k − θ̂_{k−1}‖ ≤ ε,

   then terminate the procedure and take the current k and estimate θ̂_k; otherwise, increase k by 1 and go to step 3.
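The OE-LSI steps above admit a compact sketch (our own rendering; `np.linalg.lstsq` stands in for the normal-equation solution (32)):

```python
import numpy as np

def oe_lsi(u, y, na, nb, iters=20, eps=1e-8, rng=None):
    """Least-squares based iterative (OE-LSI) algorithm, eqs. (32)-(36)."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = len(y) - 1
    xhat = rng.standard_normal(L + 1)         # xhat_0(t) = random (step 2)
    Y = np.asarray(y, dtype=float)[1:][::-1]  # [y(L), ..., y(1)], eq. (33)
    theta_prev = np.zeros(na + nb)
    for k in range(iters):
        Phi = np.array([
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]
            for t in range(L, 0, -1)])
        theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)    # eq. (32)
        xhat[1:] = (Phi @ theta)[::-1]                     # eq. (36)
        if np.linalg.norm(theta - theta_prev) <= eps:      # step 6
            break
        theta_prev = theta
    return theta
```

Using a least-squares solver rather than explicitly inverting Φ̂_k^T(L)Φ̂_k(L) is numerically preferable but mathematically equivalent when ϕ(t) is persistently exciting.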

The flowchart of computing the OE-LSI parameter estimate θ̂ k is shown in Fig. 2.


The iterative algorithms are suitable for off-line identification and can improve estimation accuracy compared with the recursive algorithms, which can be used for on-line identification. Although the iterative algorithms require more computational effort, the increased computation is still tolerable and affordable.

Fig. 2. The flowchart of computing the OE-LSI parameter estimate θ̂ k .

Fig. 3. A system described by the OEMA model.

4. The iterative algorithms for OEMA systems

This section considers identification problems of the output error moving average (OEMA) systems [24], depicted in Fig. 3,

y(t) = [B(z)/A(z)] u(t) + D(z)v(t),   (37)

where w(t) := D(z)v(t) is a moving average noise model and

D(z) = 1 + d1 z^{−1} + d2 z^{−2} + · · · + d_{nd} z^{−nd}.


Referring to Fig. 3, define the inner variable

x(t) := [B(z)/A(z)] u(t).   (38)

From (37), we have

y(t) = x(t) + D(z)v(t).   (39)
Define the parameter vectors:

ϑ := [ϑ_s; ϑ_n] ∈ R^{na+nb+nd},
ϑ_s := [a1, a2, . . . , a_{na}, b1, b2, . . . , b_{nb}]^T ∈ R^{na+nb},
ϑ_n := [d1, d2, . . . , d_{nd}]^T ∈ R^{nd},

and the information vectors:

ϕ(t) := [ϕ_s(t); ϕ_n(t)] ∈ R^{na+nb+nd},
ϕ_s(t) := [−x(t − 1), −x(t − 2), . . . , −x(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T ∈ R^{na+nb},
ϕ_n(t) := [v(t − 1), v(t − 2), . . . , v(t − nd)]^T ∈ R^{nd},
where the subscripts s and n are the first letters of the words system and noise, respectively. Eqs. (38)–(39) can be written as

x(t) = ϕ_s^T(t)ϑ_s,   (40)
y(t) = ϕ_s^T(t)ϑ_s + ϕ_n^T(t)ϑ_n + v(t) = ϕ^T(t)ϑ + v(t).   (41)
Eq. (41) is the identification model for the OEMA systems and the parameter vector ϑ contains all the parameters of the
systems.

4.1. The gradient based iterative algorithm

In the following, we derive a gradient based iterative algorithm for OEMA systems.
Consider the newest p data and define the stacked output vector Y(t), stacked information matrix Φ(t) and white noise vector V(t) as

Y(t) := [y(t), y(t − 1), . . . , y(t − p + 1)]^T ∈ R^p,   (42)
Φ(t) := [ϕ(t), ϕ(t − 1), . . . , ϕ(t − p + 1)]^T ∈ R^{p×(na+nb+nd)},   (43)
V(t) := [v(t), v(t − 1), . . . , v(t − p + 1)]^T ∈ R^p.   (44)

Here, if we set p = L and t = L, then Y(t) and Φ(t) contain all the measured data available. From (41) and (42)–(44), we have

Y(t) = Φ(t)ϑ + V(t).   (45)
Notice that V(t) is a white noise vector with zero mean and define a quadratic criterion function [12],

J_4(ϑ) := ‖Y(t) − Φ(t)ϑ‖^2.   (46)

Let k = 1, 2, 3, . . . be an iteration variable, and ϑ̂_k(t) be the iterative estimate of ϑ. For the optimization problem in (46), minimizing J_4(ϑ) using the negative gradient search leads to the following iterative algorithm for computing ϑ̂_k(t):

ϑ̂_k(t) = ϑ̂_{k−1}(t) − (μ_k(t)/2) grad[J_4(ϑ̂_{k−1}(t))] = ϑ̂_{k−1}(t) + μ_k(t)Φ^T(t)[Y(t) − Φ(t)ϑ̂_{k−1}(t)],   (47)
where μ_k(t) is the iterative step-size or convergence factor to be given later. Here, a difficulty arises because Φ(t) in (47) [that is, ϕ(t)] contains the unknown inner variables x(t − i) and noise terms v(t − i), so the iterative solution ϑ̂_k(t) of ϑ cannot be computed directly. Similarly, the solution here is based on the hierarchical identification principle [9,10]. Let x̂_k(t − i) and v̂_k(t − i) be the estimates of x(t − i) and v(t − i) at iteration k, respectively, and define

 
ϕ̂_k(t) := [ϕ̂_{s,k}(t); ϕ̂_{n,k}(t)] ∈ R^{na+nb+nd},   (48)
ϕ̂_{s,k}(t) := [−x̂_{k−1}(t − 1), −x̂_{k−1}(t − 2), . . . , −x̂_{k−1}(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T ∈ R^{na+nb},   (49)
ϕ̂_{n,k}(t) := [v̂_{k−1}(t − 1), v̂_{k−1}(t − 2), . . . , v̂_{k−1}(t − nd)]^T ∈ R^{nd}.   (50)
Let ϑ̂_k(t) := [ϑ̂_{s,k}(t); ϑ̂_{n,k}(t)] be the estimate of ϑ = [ϑ_s; ϑ_n] at iteration k and time t. Replacing ϕ_s(t) and ϑ_s in (40) with ϕ̂_{s,k}(t) and ϑ̂_{s,k}(t), respectively, we can compute the estimate x̂_k(t) by

x̂_k(t) = ϕ̂_{s,k}^T(t)ϑ̂_{s,k}(t).   (51)

Similarly, from (41), the estimate v̂_k(t) can be computed by

v̂_k(t) = y(t) − ϕ̂_k^T(t)ϑ̂_k(t).   (52)
Define

Φ̂_k(t) := [ϕ̂_k(t), ϕ̂_k(t − 1), . . . , ϕ̂_k(t − p + 1)]^T ∈ R^{p×(na+nb+nd)}.   (53)
Replacing Φ(t) in (47) with Φ̂_k(t) yields the gradient based iterative identification algorithm for estimating ϑ:

ϑ̂_k(t) = ϑ̂_{k−1}(t) + μ_k(t)Φ̂_k^T(t)[Y(t) − Φ̂_k(t)ϑ̂_{k−1}(t)],   (54)

or, equivalently,

ϑ̂_k(t) = [I − μ_k(t)Φ̂_k^T(t)Φ̂_k(t)]ϑ̂_{k−1}(t) + μ_k(t)Φ̂_k^T(t)Y(t).

Similarly, in order to guarantee the convergence of ϑ̂_k(t), the matrix [I − μ_k(t)Φ̂_k^T(t)Φ̂_k(t)] must have all eigenvalues inside the unit circle. One conservative choice of μ_k(t) is to satisfy

0 < μ_k(t) ≤ 2/λ_max[Φ̂_k^T(t)Φ̂_k(t)].   (55)
Eqs. (48)–(55) form the gradient based iterative identification algorithm for OEMA models, which is abbreviated as the
OEMA-GI algorithm.
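A sketch of the OEMA-GI iteration, taking p = L and t = L so that all data are used (our own rendering; function and variable names are assumptions):

```python
import numpy as np

def oema_gi(u, y, na, nb, nd, iters=100, p0=1e6):
    """Gradient based iterative algorithm for OEMA models (OEMA-GI),
    eqs. (48)-(55), with p = L and t = L so all measured data are used."""
    L = len(y) - 1
    theta = np.ones(na + nb + nd) / p0
    xhat = np.zeros(L + 1)               # estimates of the inner variable x(t)
    vhat = np.zeros(L + 1)               # estimates of the noise v(t)

    def phi(t):
        # stacked [phi_s ; phi_n] of eqs. (49)-(50); signals are zero for t <= 0
        return np.array(
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]
            + [vhat[t - i] if t - i >= 0 else 0.0 for i in range(1, nd + 1)])

    Y = np.asarray(y, dtype=float)[1:][::-1]   # [y(L), ..., y(1)]
    for k in range(iters):
        Phi = np.array([phi(t) for t in range(L, 0, -1)])
        mu = 1.0 / np.linalg.eigvalsh(Phi.T @ Phi).max()  # satisfies eq. (55)
        theta = theta + mu * Phi.T @ (Y - Phi @ theta)    # eq. (54)
        new_x, new_v = xhat.copy(), vhat.copy()
        for t in range(1, L + 1):
            p_t = phi(t)
            new_x[t] = p_t[:na + nb] @ theta[:na + nb]    # eq. (51)
            new_v[t] = y[t] - p_t @ theta                 # eq. (52)
        xhat, vhat = new_x, new_v
    return theta
```

The x̂ and v̂ updates are computed from the iteration-(k−1) information vectors and the new ϑ̂_k, mirroring the hierarchical computation described in the text.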

4.2. The least-squares based iterative algorithm

Another iterative algorithm is the least-squares based iterative algorithm for OEMA systems.
Provided that ϕ(t) is persistently exciting, minimizing J_4(ϑ) in (46) gives the least-squares estimate of ϑ:

ϑ̂(t) = [Φ^T(t)Φ(t)]^{−1} Φ^T(t)Y(t).   (56)
However, a similar difficulty arises in that Φ(t) in (56) [that is, ϕ(t)] contains the unknown inner variables x(t − i) and v(t − i), and so the estimate of ϑ cannot be computed from (56) either. A derivation similar to that of the OE-GI algorithm results in the following least-squares based identification algorithm for estimating ϑ:

ϑ̂_k(t) = [Φ̂_k^T(t)Φ̂_k(t)]^{−1} Φ̂_k^T(t)Y(t),   k = 1, 2, . . . ,   (57)
Φ̂_k(t) = [ϕ̂_k(t), ϕ̂_k(t − 1), . . . , ϕ̂_k(t − p + 1)]^T,   (58)
Y(t) = [y(t), y(t − 1), . . . , y(t − p + 1)]^T,   (59)
ϕ̂_k(t) = [ϕ̂_{s,k}(t); ϕ̂_{n,k}(t)],   ϑ̂_k(t) = [ϑ̂_{s,k}(t); ϑ̂_{n,k}(t)],   (60)
ϕ̂_{s,k}(t) = [−x̂_{k−1}(t − 1), −x̂_{k−1}(t − 2), . . . , −x̂_{k−1}(t − na), u(t − 1), u(t − 2), . . . , u(t − nb)]^T,   (61)
ϕ̂_{n,k}(t) = [v̂_{k−1}(t − 1), v̂_{k−1}(t − 2), . . . , v̂_{k−1}(t − nd)]^T,   (62)
x̂_k(t) = ϕ̂_{s,k}^T(t)ϑ̂_{s,k}(t),   (63)
v̂_k(t) = y(t) − ϕ̂_k^T(t)ϑ̂_k(t).   (64)

Fig. 4. The flowchart of computing the OEMA-LSI parameter estimate ϑ̂ k (t ).

Eqs. (57)–(64) are referred to as the least-squares based iterative identification algorithm for OEMA systems, which is ab-
breviated as the OEMA-LSI algorithm. Like the OEMA-GI algorithm, the OEMA-LSI algorithm also makes full use of all the
measured input–output data {u (i ), y (i ): i = 0, 1, 2, . . . , t , . . . , L } at each iteration k = 1, 2, 3, . . . and thus has higher parameter
estimation accuracy.
To summarize, we list the steps involved in the OEMA-LSI algorithm to compute ϑ̂_k(t) as k increases:

1. Determine p and let t = p; collect {u(i), y(i): i = 1, 2, . . . , p − 1}, and set x̂_0(t − i) and v̂_0(t − i) to random numbers.
2. To initialize, let k = 1. Collect the input–output data u(t) and y(t) and form Y(t) by (59).
3. Form ϕ̂_{s,k}(t), ϕ̂_{n,k}(t), ϕ̂_k(t) and Φ̂_k(t) by (61), (62), (60) and (58), respectively.
4. Update the estimate ϑ̂_k(t) by (57).
5. Compute x̂_k(t) by (63) and v̂_k(t) by (64).
6. Compare ϑ̂_k(t) with ϑ̂_{k−1}(t): if they are sufficiently close, or for some pre-set small ε, if

   ‖ϑ̂_k(t) − ϑ̂_{k−1}(t)‖ ≤ ε,

   then take the current k and estimate ϑ̂_k(t), increase t by 1 and go to step 2; otherwise, increase k by 1 and go to step 3.
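The OEMA-LSI steps can be sketched as follows for a single time instant t = L with p = L (our own rendering; names are assumptions):

```python
import numpy as np

def oema_lsi(u, y, na, nb, nd, iters=20, eps=1e-8, rng=None):
    """Least-squares based iterative algorithm for OEMA models (OEMA-LSI),
    eqs. (57)-(64), run once at t = L with p = L."""
    if rng is None:
        rng = np.random.default_rng(0)
    L = len(y) - 1
    xhat = rng.standard_normal(L + 1)    # random initial xhat_0 (step 1)
    vhat = rng.standard_normal(L + 1)    # random initial vhat_0 (step 1)
    theta = np.zeros(na + nb + nd)
    Y = np.asarray(y, dtype=float)[1:][::-1]   # [y(L), ..., y(1)], eq. (59)
    for k in range(iters):
        Phi = np.array([
            [-xhat[t - i] if t - i >= 0 else 0.0 for i in range(1, na + 1)]
            + [u[t - i] if t - i >= 0 else 0.0 for i in range(1, nb + 1)]
            + [vhat[t - i] if t - i >= 0 else 0.0 for i in range(1, nd + 1)]
            for t in range(L, 0, -1)])
        new_theta, *_ = np.linalg.lstsq(Phi, Y, rcond=None)   # eq. (57)
        for j, t in enumerate(range(L, 0, -1)):
            xhat[t] = Phi[j, :na + nb] @ new_theta[:na + nb]  # eq. (63)
            vhat[t] = y[t] - Phi[j] @ new_theta               # eq. (64)
        if np.linalg.norm(new_theta - theta) <= eps:          # step 6
            return new_theta
        theta = new_theta
    return theta
```

In the on-line variant described in the steps, t would then be increased by 1 and the procedure repeated with the new datum.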

The flowchart of computing the OEMA-LSI estimate ϑ̂ k (t ) is shown in Fig. 4.

5. Example

In this section, an example is given to show that the iterative algorithms can improve the parameter estimation accuracy,
compared with their corresponding recursive algorithms.

Table 1
The AM-RLS parameter estimates and errors (σ² = 0.30², δns = 25.47%).

t=L a1 a2 b1 b2 δ (%)
1000 −0.40206 0.30766 0.67491 0.63269 1.10073
2000 −0.41043 0.30734 0.67969 0.63435 0.44297
3000 −0.41292 0.31639 0.68020 0.62965 0.70487

True values −0.41200 0.30900 0.68040 0.63030

Table 2
The AM-RLS parameter estimates and errors (σ² = 1.00², δns = 84.91%).

t=L a1 a2 b1 b2 δ (%)
1000 −0.36791 0.28976 0.66244 0.64660 5.07884
2000 −0.40137 0.29667 0.67831 0.64769 2.25385
3000 −0.41258 0.32997 0.67987 0.62985 1.97809

True values −0.41200 0.30900 0.68040 0.63030

Table 3
The OE-LSI parameter estimates and errors (L = 1000, σ² = 0.30², δns = 25.47%).
k a1 a2 b1 b2 δ (%)
1 0.01364 0.03812 0.67945 0.90896 54.32928
2 −0.34488 0.21487 0.67058 0.67849 11.84229
3 −0.41482 0.31781 0.67361 0.63104 1.08376
4 −0.39317 0.29833 0.67466 0.64329 2.43963
5 −0.39822 0.29968 0.67444 0.63969 1.88650
6 −0.39881 0.30111 0.67448 0.63936 1.77212
7 −0.39832 0.30063 0.67449 0.63965 1.83640
8 −0.39843 0.30068 0.67448 0.63958 1.82411
9 −0.39844 0.30070 0.67448 0.63958 1.82292
10 −0.39843 0.30069 0.67448 0.63959 1.82417

True values −0.41200 0.30900 0.68040 0.63030

Table 4
The OE-LSI parameter estimates and errors (L = 1000, σ² = 1.00², δns = 84.91%).
k a1 a2 b1 b2 δ (%)
1 0.04815 0.04105 0.66611 0.90160 56.34830
2 −0.32325 0.19451 0.65722 0.69180 14.99440
3 −0.38290 0.29870 0.65991 0.65212 4.05299
4 −0.36268 0.27973 0.66086 0.66341 6.50856
5 −0.36708 0.28078 0.66065 0.66035 6.04132
6 −0.36752 0.28194 0.66069 0.66012 5.95351
7 −0.36718 0.28161 0.66069 0.66032 5.99794
8 −0.36724 0.28163 0.66069 0.66028 5.99105
9 −0.36725 0.28165 0.66069 0.66028 5.99003
10 −0.36724 0.28164 0.66069 0.66028 5.99071

True values −0.41200 0.30900 0.68040 0.63030

Consider the following output error system:

y(t) = [B(z)/A(z)] u(t) + v(t),
A(z) = 1 + a1 z^{−1} + a2 z^{−2} = 1 − 0.412z^{−1} + 0.309z^{−2},
B(z) = b1 z^{−1} + b2 z^{−2} = 0.6804z^{−1} + 0.6303z^{−2},
θ = [a1, a2, b1, b2]^T = [−0.412, 0.309, 0.6804, 0.6303]^T.
Here, {u(t)} is taken as a persistently exciting signal sequence with zero mean and unit variance, and {v(t)} as a white noise sequence with zero mean and variance σ². Applying the AM-RLS algorithm in (10)–(13) and the OE-LSI algorithm in (32)–(36) to estimate the parameters (ai, bi) of this system with different σ², the OE-LSI and AM-RLS parameter estimates and their estimation errors are shown in Tables 1–8 with t = L = 1000, 2000 and 3000, where the parameter estimation error is defined by δ := ‖θ̂(t) − θ‖/‖θ‖ for the AM-RLS algorithm or δ := ‖θ̂_k − θ‖/‖θ‖ for the OE-LSI algorithm, and the estimation errors

Table 5
The OE-LSI parameter estimates and errors (L = 2000, σ² = 0.30², δns = 25.47%).
k a1 a2 b1 b2 δ (%)
1 −0.00208 0.01398 0.68334 0.91859 54.81751
2 −0.34577 0.22181 0.67846 0.68134 11.38856
3 −0.41975 0.31535 0.67926 0.62930 0.95527
4 −0.40355 0.30225 0.67955 0.63935 1.33179
5 −0.40611 0.30236 0.67953 0.63760 1.08642
6 −0.40660 0.30338 0.67952 0.63726 0.98824
7 −0.40631 0.30308 0.67953 0.63746 1.03031
8 −0.40637 0.30311 0.67953 0.63742 1.02343
9 −0.40637 0.30312 0.67953 0.63742 1.02284
10 −0.40637 0.30311 0.67953 0.63742 1.02345

True values −0.41200 0.30900 0.68040 0.63030

Table 6
The OE-LSI parameter estimates and errors (L = 2000, σ² = 1.00², δns = 84.91%).
k a1 a2 b1 b2 δ (%)
1 0.00086 0.01669 0.68097 0.92510 55.18984
2 −0.32430 0.20131 0.67669 0.70342 14.79863
3 −0.40211 0.29749 0.67720 0.64891 2.28369
4 −0.39124 0.28868 0.67750 0.65539 3.62896
5 −0.39266 0.28849 0.67749 0.65439 3.50590
6 −0.39305 0.28919 0.67749 0.65412 3.43426
7 −0.39289 0.28903 0.67749 0.65422 3.45674
8 −0.39292 0.28904 0.67749 0.65421 3.45421
9 −0.39292 0.28904 0.67749 0.65421 3.45372
10 −0.39292 0.28904 0.67749 0.65421 3.45398

True values −0.41200 0.30900 0.68040 0.63030

Table 7
The OE-LSI parameter estimates and errors (L = 3000, σ² = 0.30², δns = 25.47%).
k a1 a2 b1 b2 δ (%)
1 0.00021 0.00306 0.67899 0.91250 55.21860
2 −0.35229 0.23736 0.67997 0.67100 9.59132
3 −0.42407 0.32375 0.68017 0.62170 1.97078
4 −0.40797 0.31165 0.67995 0.63283 0.51484
5 −0.40948 0.31123 0.68002 0.63186 0.35166
6 −0.40993 0.31204 0.68002 0.63152 0.36724
7 −0.40972 0.31183 0.68001 0.63167 0.36770
8 −0.40976 0.31185 0.68002 0.63165 0.36616
9 −0.40976 0.31185 0.68002 0.63165 0.36654
10 −0.40976 0.31185 0.68002 0.63165 0.36651

True values −0.41200 0.30900 0.68040 0.63030

Table 8
The OE-LSI parameter estimates and errors (L = 3000, σ² = 1.00², δns = 84.91%).
k a1 a2 b1 b2 δ (%)
1 0.00211 −0.00267 0.67832 0.91166 55.59027
2 −0.34399 0.24763 0.67896 0.67607 9.65385
3 −0.41581 0.32715 0.67928 0.62696 1.77970
4 −0.40363 0.31859 0.67908 0.63536 1.29721
5 −0.40437 0.31794 0.67913 0.63491 1.19599
6 −0.40475 0.31857 0.67912 0.63463 1.20861
7 −0.40461 0.31841 0.67912 0.63474 1.20910
8 −0.40463 0.31843 0.67912 0.63472 1.20824
9 −0.40463 0.31843 0.67912 0.63472 1.20847
10 −0.40463 0.31843 0.67912 0.63472 1.20844

True values −0.41200 0.30900 0.68040 0.63030



Fig. 5. The estimation errors versus k (L = 1000).

Fig. 6. The estimation errors versus k (L = 2000).

δ = ‖θ̂_k − θ‖/‖θ‖ versus k are shown in Figs. 5–7 with t = L = 1000, 2000 and 3000. When σ² = 0.30² and σ² = 1.00², the corresponding noise-to-signal ratios are δns = 25.47% and δns = 84.91%, respectively, where the noise-to-signal ratio of the system is defined as the square root of the ratio of the variance of the system output driven by the noise v(t) to the variance of the noise-free output x(t) (namely, the output y(t) when v(t) ≡ 0). For the OE system in (1), δns is computed by the formula [23]

δns = √(var[v(t)]/var[x(t)]) × 100%,   x(t) = [B(z)/A(z)] u(t).

For the OEMA system in (37), δns is computed by the formula

δns = √(var[w(t)]/var[x(t)]) × 100%,   x(t) = [B(z)/A(z)] u(t),   w(t) = D(z)v(t).
From the simulation results of Tables 1–8 and Figs. 5–7, we can draw the following conclusions.

• As the noise-to-signal ratio increases, the parameter estimation errors given by the AM-RLS algorithm become larger and
the parameter estimates converge to their true values more slowly for the same data length t = L (see Tables 1 and 2).

Fig. 7. The estimation errors versus k (L = 3000).

• As the noise-to-signal ratio increases, the parameter estimation errors given by the OE-LSI algorithm become larger and
the parameter estimates approach their true values more slowly. As the data length L increases, the parameter estimation
errors of the OE-LSI algorithm gradually become smaller (see Tables 3–8 and Figs. 5–7).

6. Conclusions

This paper presents a gradient based iterative algorithm and a least-squares based iterative algorithm for output error systems,
derived from the interactive estimation theory. Since the proposed iterative algorithms make full use of all the measured
input–output data at each iteration, they provide more accurate parameter estimates than recursive approaches. Although the
iterative methods are developed for the output error (moving average) models, the proposed techniques can be extended
to other general systems such as ARMAX and Box–Jenkins models, Hammerstein nonlinear ARMAX models [4,25,27],
non-uniformly sampled multirate systems [28,29], and time-series models [26]. The identification methods in this paper can be
used to estimate system parameters as a basis for designing filters or feedback control laws for uncertain systems or
multirate systems [30–34].
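To make the interactive estimation idea concrete, the following is a minimal sketch of a least-squares based iterative scheme for a second-order OE model, in the spirit of the OE-LSI algorithm but not necessarily the paper's exact formulation; the function name oe_lsi, the initialization of the noise-free output estimate by the measured output, and the fixed iteration count are illustrative assumptions.

```python
import numpy as np

def oe_lsi(u, y, na=2, nb=2, n_iter=10):
    """Least-squares based iterative estimation for an OE model
    y(t) = [B(z)/A(z)] u(t) + v(t), using the unknown noise-free output
    x(t) = [B(z)/A(z)] u(t) in the regressor.  Illustrative sketch only."""
    u = np.asarray(u, dtype=float)
    y = np.asarray(y, dtype=float)
    L = len(y)
    x = y.copy()                      # assumption: initialize x-hat with y
    theta = np.zeros(na + nb)
    for _ in range(n_iter):
        # regressor built from the current noise-free output estimate:
        # phi(t) = [-x(t-1), ..., -x(t-na), u(t-1), ..., u(t-nb)]
        Phi = np.zeros((L, na + nb))
        for i in range(na):
            Phi[i + 1:, i] = -x[:L - i - 1]
        for j in range(nb):
            Phi[j + 1:, na + j] = u[:L - j - 1]
        # least-squares fit over ALL L samples at once
        theta, *_ = np.linalg.lstsq(Phi, y, rcond=None)
        x = Phi @ theta               # refresh the noise-free output estimate
    return theta
```

At every iteration the least-squares fit uses all L input–output samples at once, which is what distinguishes the iterative algorithms from their recursive counterparts that process one sample per step.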

References

[1] M.B. Malik, M. Salman, State-space least mean square, Digital Signal Process. 18 (3) (2008) 334–345.
[2] S.X. Zhao, F.C. Wang, H. Xu, J. Zhu, Multi-frequency identification method in signal processing, Digital Signal Process. 19 (4) (2009) 555–566.
[3] L.L. Han, F. Ding, Multi-innovation stochastic gradient algorithms for multi-input multi-output ARX systems, Digital Signal Process. 19 (4) (2009) 545–554.
[4] D.Q. Wang, F. Ding, Extended stochastic gradient identification algorithms for Hammerstein–Wiener ARMAX systems, Comput. Math. Appl. 56 (12)
(2008) 3157–3164.
[5] F. Ding, T. Chen, Performance analysis of multi-innovation gradient type identification methods, Automatica 43 (1) (2007) 1–14.
[6] F. Ding, P.X. Liu, G. Liu, Auxiliary model based multi-innovation extended stochastic gradient parameter estimation with colored measurement noises,
Signal Process. 89 (10) (2009) 1883–1890.
[7] F. Ding, T. Chen, Combined parameter and output estimation of dual-rate systems using an auxiliary model, Automatica 40 (10) (2004) 1739–1748.
[8] F. Ding, T. Chen, Parameter estimation of dual-rate stochastic systems by using an output error method, IEEE Trans. Automat. Control 50 (9) (2005)
1436–1441.
[9] F. Ding, T. Chen, Hierarchical least squares identification methods for multivariable systems, IEEE Trans. Automat. Control 50 (3) (2005) 397–402.
[10] F. Ding, T. Chen, Hierarchical gradient-based identification of multivariable discrete-time systems, Automatica 41 (2) (2005) 315–325.
[11] G.C. Goodwin, K.S. Sin, Adaptive Filtering Prediction and Control, Prentice Hall, Englewood Cliffs, NJ, 1984.
[12] L. Ljung, System Identification: Theory for the User, 2nd ed., Prentice Hall, Englewood Cliffs, NJ, 1999.
[13] T.L. Lai, C.Z. Wei, Extended least squares and their applications to adaptive control and prediction in linear systems, IEEE Trans. Automat. Control 31 (10)
(1986) 898–906.
[14] V. Solo, The convergence of AML, IEEE Trans. Automat. Control 24 (6) (1979) 958–962.
[15] L. Dugard, I.D. Landau, Recursive output error identification algorithms theory and evaluation, Automatica 16 (5) (1980) 443–462.
[16] P. Stoica, T. Söderström, Analysis of an output error identification algorithm, Automatica 17 (6) (1981) 861–863.
[17] P. Stoica, T. Söderström, Bias correction in least-squares identification, Int. J. Control 35 (3) (1982) 449–457.
[18] P. Stoica, T. Söderström, V. Simonyte, Study of a bias-free least squares parameter estimator, IEE Proc. Control Theory Appl. 142 (1) (1995) 1–6.
[19] W.X. Zheng, On a least-squares-based algorithm for identification of stochastic linear systems, IEEE Trans. Signal Process. 46 (6) (1998) 1631–1638.
[20] W.X. Zheng, Least-squares identification of a class of multivariable systems with correlated disturbances, J. Franklin Inst. 336 (8) (1999) 1309–1324.
[21] W.X. Zheng, A modified method for closed-loop identification of transfer function models with common factors, IEEE Trans. Circuits Syst. I 49 (4)
(2002) 556–562.
[22] W.X. Zheng, Parameter estimation of stochastic linear systems with noisy input, Int. J. Syst. Sci. 35 (3) (2004) 185–190.
[23] F. Ding, T. Chen, L. Qiu, Bias compensation based recursive least squares identification algorithm for MISO systems, IEEE Trans. Circuits Syst. II 53 (5)
(2006) 349–353.

[24] F. Ding, P.X. Liu, On iterative parameter estimation algorithms for OE and OEMA systems, in: IEEE International Instrumentation and Measurement
Technology Conference (I2MTC 2008), Victoria, Canada, May 12–15, 2008, pp. 1603–1608.
[25] F. Ding, T. Chen, Identification of Hammerstein nonlinear ARMAX systems, Automatica 41 (9) (2005) 1479–1489.
[26] F. Ding, Y. Shi, T. Chen, Performance analysis of estimation algorithms of non-stationary ARMA processes, IEEE Trans. Signal Process. 54 (3) (2006)
1041–1053.
[27] F. Ding, Y. Shi, T. Chen, Auxiliary model based least-squares identification methods for Hammerstein output-error systems, Syst. Control Lett. 56 (5)
(2007) 373–380.
[28] F. Ding, L. Qiu, T. Chen, Reconstruction of continuous-time systems from their non-uniformly sampled discrete-time systems, Automatica 45 (2) (2009)
324–332.
[29] Y.J. Liu, L. Xie, F. Ding, An auxiliary model based recursive least squares parameter estimation algorithm for non-uniformly sampled multirate systems,
in: Proceedings of the Institution of Mechanical Engineers, Part I, J. Syst. Control Eng. 223 (4) (2009) 445–454.
[30] Y. Shi, B. Yu, Output feedback stabilization of networked control systems with random delays modeled by Markov chains, IEEE Trans. Automat. Control 54 (7) (2009) 1668–1674.
[31] M. Yan, Y. Shi, Robust discrete-time sliding mode control for uncertain systems with time-varying state delay, IET Control Theory Appl. 2 (8) (2008)
662–674.
[32] B. Yu, Y. Shi, H. Huang, l2 –l∞ filtering for multirate systems using lifted models, Circuits Systems Signal Process. 27 (5) (2008) 699–711.
[33] Y. Shi, F. Ding, T. Chen, Multirate crosstalk identification in xDSL systems, IEEE Trans. Commun. 54 (10) (2006) 1878–1886.
[34] Y. Shi, F. Ding, T. Chen, 2-norm based recursive design of transmultiplexers with designable filter length, Circuits Systems Signal Process. 25 (4) (2006)
447–462.

Feng Ding was born in Guangshui, Hubei Province. He received the B.Sc. degree from the Hubei University of Technology (Wuhan,
China) in 1984, and the M.Sc. and Ph.D. degrees in automatic control both from the Department of Automation, Tsinghua University in
1991 and 1994, respectively.
From 1984 to 1988, he was an Electrical Engineer at the Hubei Pharmaceutical Factory, Xiangfan, China. From 1994 to 2002, he was
with the Department of Automation, Tsinghua University, Beijing, China and he was a Research Associate at the University of Alberta,
Edmonton, Canada from 2002 to 2005.
He was a Visiting Professor in the Department of Systems and Computer Engineering, Carleton University, Ottawa, Canada from May to
December 2008 and a Research Associate in the Department of Aerospace Engineering, Ryerson University, Toronto, Canada, from January
to October 2009.
He has been a Professor in the School of Communication and Control Engineering, Jiangnan University, Wuxi, China since 2004.
He is a Colleges and Universities “Blue Project” Middle-Aged Academic Leader (Jiangsu, China). His current research interests include
model identification and adaptive control. He co-authored the book Adaptive Control Systems (Tsinghua University Press, Beijing, 2002), and
published over 100 papers on modeling and identification as the first author.

Peter X. Liu received his B.Sc. and M.Sc. degrees from Northern Jiaotong University, China in 1992 and 1995, respectively, and Ph.D.
degree from the University of Alberta, Canada in 2002. He has been with the Department of Systems and Computer Engineering, Carleton
University, Canada since July 2002 and he is currently a Canada Research Chair Professor. His research interests include interactive networked
systems and teleoperation, haptics, micro-manipulation, robotics, intelligent systems, context-aware intelligent networks, and their applications
to biomedical engineering.
Dr. Liu has published more than 150 research articles. He serves as an Associate Editor for several journals including IEEE/ASME
Transactions on Mechatronics, IEEE Transactions on Automation Science and Engineering, Intelligent Service Robotics, Int. J. of Robotics
and Automation, Control and Intelligent Systems and Int. J. of Advanced Media and Communication. He received a 2007 Carleton Research
Achievement Award, 2006 Province of Ontario Early Researcher Award, 2006 Carty Research Fellowship, the Best Conference Paper Award
of the 2006 IEEE International Conference on Mechatronics and Automation, and a 2003 Province of Ontario Distinguished Researcher
Award. He has served in the Organization Committees of numerous conferences including being the General Chair of the 2008 IEEE
International Workshop on Haptic Audio Visual Environments and Their Applications, and the General Chair of 2005 IEEE International
Conference on Mechatronics and Automation.
Dr. Liu is a member of the Professional Engineers of Ontario (P. Eng) and a senior member of IEEE.

Guangjun Liu received his B.E. degree from the University of Science and Technology of China in 1984, M.E. from the Chinese Academy
of Sciences in 1987, and Ph.D. from the University of Toronto in 1996. He is currently a Professor and Canada Research Chair in Control
Systems and Robotics, with the Department of Aerospace Engineering, Ryerson University, Toronto, Canada. Before becoming a faculty
member in 1999, he worked as a systems engineer and design lead for Honeywell Aerospace Canada on the Boeing X-32 program. He was
a postdoctoral fellow with Massachusetts Institute of Technology, USA, in 1996 before he joined Honeywell in 1997. Dr. Liu has authored
or co-authored more than 130 research articles and 5 patents. His research interests are in the areas of control systems and robotics.
Dr. Liu is a licensed member of the Professional Engineers of Ontario, Canada, and a senior member of IEEE.
