Iterative Estimation with Applications
https://doi.org/10.1007/s00034-021-01884-6
SHORT PAPER
Abstract
This paper is concerned with the distributed estimation problem for nonlinear systems
with unknown but bounded noises, where the bounds of the noises are also unknown. By
modeling a new distributed estimation error system that includes the bias among local
estimation errors, a bounded recursive optimization scheme is constructed to determine
the matrix gains of the distributed estimator. Notice that the constructed optimization
problem for each local estimator, which can be easily solved by existing software
packages, depends only on that estimator's own information and its neighboring
information; thus, both the design of the distributed estimator and the determination of
the estimator gains are fully distributed. Finally, a range-only target tracking system
is employed to show the effectiveness and advantages of the proposed methods.
This work was supported by the Six Talent Peaks Project of Jiangsu Province under Grant
2017-XYDXX-105.
Bo Chen (corresponding author)
bchen@aliyun.com
Yidi Teng
tengyidi@163.com
Shouzhao Sheng
shengsz@nuaa.edu.cn
1 Introduction
Distributed estimation has attracted considerable research interest during the past
decades and has found applications in a variety of areas, including smart grids [16,
17], multi-agent systems [5,15], and target tracking systems [1,25]. Generally, the
framework of the distributed estimation problem considers a multiple-output plant
monitored by a network of sensors, where each sensor can estimate the plant state
based on its own information and its neighboring information [2]. When designing
distributed estimation over sensor networks, power efficiency is an important issue
because it is closely related to the network lifetime [14]. An attractive approach to
enhancing power efficiency is to introduce cluster sensor networks, i.e., the sensors
are divided into several clusters and each cluster head (CH) communicates with its
neighboring CHs [21]. Though different distributed estimation algorithms have been
developed for linear systems (see [8,20,22,23] and the references therein), the physical
processes in most practical systems are nonlinear. In this sense, this paper focuses
on the design of distributed nonlinear estimators in cluster sensor networks.
When considering Gaussian white noises with known covariances, a distributed
extended Kalman filter (EKF) was developed in [13] for nonlinear systems based on
a consensus strategy. Then, a robust distributed EKF was designed in [11] for uncertain
nonlinear systems, and a distributed multiple-model unscented Kalman filter (UKF)
was derived in [12] for Markov nonlinear systems. Based on the cubature Kalman filter
technique, a nonlinear robust two-stage Kalman filter was designed in [9] for nonlinear
systems with unknown inputs. Moreover, two kinds of robust distributed estimation
methods were developed in [8] to estimate the parameters of nonlinear Hammerstein
systems with missing data. Meanwhile, Bayesian methodology was used in [4] to
formulate the optimal solution to the problem of cooperative estimation of a time-
varying signal over a partially connected network of multiple agents. Notice that the
above-mentioned distributed nonlinear estimation methods need to know the statistical
properties of the noises. However, these statistical properties may not be available or
satisfied in many practical systems. In this case, when considering energy-bounded
noises without statistical information, a distributed H∞ filtering algorithm was
developed in [10] for a class of Markovian-jump nonlinear time-delay systems over
lossy sensor networks, while a distributed finite-horizon H∞ filtering method was
proposed in [19] for nonlinear systems over sensor networks with round-robin protocol
and fading channels. By resorting to the T-S fuzzy method for modeling nonlinear
systems, distributed fuzzy estimation methods were designed in [18,24] under different
resource constraints. It should be pointed out that the noises of a physical process or of
the sensors may never vanish, whereas energy-bounded noises must decay to zero as
time goes to infinity. This means that the methods in [10,18,19,24] are not suitable for
nonlinear systems with persistent noises.
Motivated by the aforementioned analysis, we shall consider the distributed nonlinear
estimation problem with unknown but bounded persistent noises, where the lower and
upper bounds of the noises are also unknown. By introducing a special topological
structure and using the Taylor series expansion, a new distributed estimation error model
is proposed, where the biases between local estimators and the linearized errors are
viewed as unknown but bounded noises. Then, a bounded recursive optimization
approach is developed to determine the optimal estimator gains.
Circuits, Systems, and Signal Processing (2022) 41:2397–2410 2399
2 Problem Formulation

Consider the discrete-time nonlinear system

$$x(t) = f(x(t-1)) + \Gamma(t)w(t-1) \qquad (1)$$

where $x(t) \in \mathbb{R}^{n}$ is the system state, $f(x(t)) \triangleq \mathrm{col}\{f_{1}(x(t)), \ldots, f_{n}(x(t))\}$ is a
nonlinear vector function that is assumed to be continuously differentiable, and
$\Gamma(t) \in \mathbb{R}^{n \times m}$ is a known time-varying matrix. $w(t) \in \mathbb{R}^{m}$ is an unknown but bounded noise satisfying

$$\|w(t)\|^{2} \le \delta_{w} \qquad (2)$$
where the bound $\delta_{w}$ is unknown. The measurement of the $i$th CH is modeled by

$$y_i(t) = g_i(x(t)) + v_i(t) \qquad (3)$$

where $g_i(x(t)) \triangleq \mathrm{col}\{g_{i,1}(x(t)), \ldots, g_{i,q_i}(x(t))\}$ is a nonlinear vector function that
is assumed to be continuously differentiable, while $v_i(t)$ is an unknown but bounded
noise satisfying

$$\|v_i(t)\|^{2} \le \delta_{v_i} \qquad (4)$$

where the bound $\delta_{v_i}$ is unknown.
Let $\hat{x}_i(t)$ denote the nonlinear state estimate of $x(t)$ at the $i$th CH, and let the $L$ CHs be
numbered from 1 to $L$. Then, the set $\mathcal{N}_i$ is introduced to collect the indices of the
preceding neighboring CHs of the $i$th CH, i.e., $\mathcal{N}_i$ is given by

$$\mathcal{N}_i \triangleq \{\kappa_i \mid \kappa_i \in \Omega_i\} \qquad (5)$$

where $\Omega_i \triangleq \{1, \ldots, i-1\}$. Then, the distributed nonlinear state estimator (DNSE) is
proposed to have the following form:

$$\begin{cases} \hat{x}_i(t) = \hat{x}_i^{p}(t) + K_i(t)\big[y_i(t) - g_i(\hat{x}_i^{p}(t))\big] + \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)\big[y_{\kappa_i}(t) - g_{\kappa_i}(\hat{x}_{\kappa_i}^{p}(t))\big] \\ \hat{x}_i^{p}(t) = f(\hat{x}_i(t-1)) \quad (i \in \{1, 2, \ldots, L\}) \end{cases} \qquad (6)$$

where $\hat{x}_i^{p}(t)$ is the one-step prediction at each CH, and the estimator gains $K_i(t)$ and $K_{i,\kappa_i}(t)$
are to be designed. Consequently, the aim of this paper is to design a group of optimal
estimator gains $K_i(t)$ and $K_{i,\kappa_i}(t)\,(\kappa_i \in \mathcal{N}_i)$ such that an upper bound of the squared
error of $\hat{x}_i(t)$ is minimized at each time step.
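In simulation, the two-step recursion in (6) can be sketched as follows. This is an illustrative sketch only: the dynamics `f`, the measurement maps `g`, and the gain matrices are placeholder inputs, not the optimized gains produced in Sect. 3.

```python
import numpy as np

def dnse_step(x_prev, y, y_nbrs, f, g_i, g_nbrs, K_i, K_nbrs, x_nbr_preds):
    """One step of the DNSE recursion (6) at CH i.

    x_prev      : previous estimate x_hat_i(t-1)
    y, y_nbrs   : own measurement y_i(t) and neighbor measurements y_k(t)
    K_i, K_nbrs : own gain K_i(t) and neighbor gains K_{i,k}(t)
    x_nbr_preds : neighbor one-step predictions x_hat_k^p(t)
    """
    x_pred = f(x_prev)                         # one-step prediction x_hat_i^p(t)
    x_new = x_pred + K_i @ (y - g_i(x_pred))   # correction with own innovation
    for K, y_k, g_k, xp_k in zip(K_nbrs, y_nbrs, g_nbrs, x_nbr_preds):
        x_new = x_new + K @ (y_k - g_k(xp_k))  # corrections with neighbor innovations
    return x_pred, x_new
```

With identity dynamics/measurements and gain $0.5I$, a single step moves the estimate halfway toward the measurement, which matches the innovation form of (6).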
3 Main Results
Let $\tau_i$ denote the number of elements in the set $\mathcal{N}_i$. Then, the set $\mathcal{N}_i$ can also be
rewritten as

$$\mathcal{N}_i \triangleq \{\kappa_1^i, \kappa_2^i, \ldots, \kappa_{\tau_i}^i\} \qquad (7)$$

where $\kappa_1^i < \kappa_2^i < \cdots < \kappa_{\tau_i}^i < L$. Before deriving the main results, we first define
the following matrix variables:
$$\begin{cases}
A_F^{i}(t-1) \triangleq \left.\dfrac{\partial f(x(t-1))}{\partial x(t-1)}\right|_{x(t-1)=\hat{x}_i(t-1)} \\[4pt]
C_J^{i}(t) \triangleq \left.\dfrac{\partial g_i(x(t))}{\partial x(t)}\right|_{x(t)=\hat{x}_i^{p}(t)} \\[4pt]
H_{K_i}(t) \triangleq I - K_i(t)C_J^{i}(t) \\
\bar{C}_J^{\kappa_i}(t) \triangleq C_J^{\kappa_i}(t)A_F^{\kappa_i}(t-1) \\
A_F^{K_i}(t-1) \triangleq H_{K_i}(t)A_F^{i}(t-1) - \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)\bar{C}_J^{\kappa_i}(t) \\
E_F^{K_i}(t) \triangleq \Big[\big(H_{K_i}(t) - \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)C_J^{\kappa_i}(t)\big)\Gamma(t) \;\; -K_i(t)\Big] \\
K_{V_i}(t) \triangleq \big[K_{i,\kappa_1^i}(t), \cdots, K_{i,\kappa_{\tau_i}^i}(t)\big] \\
K_{A_i}(t) \triangleq \big[K_{i,\kappa_1^i}(t)\bar{C}_J^{\kappa_1^i}(t), \cdots, K_{i,\kappa_{\tau_i}^i}(t)\bar{C}_J^{\kappa_{\tau_i}^i}(t)\big] \\
K_{O_i}(t) \triangleq \big[K_{i,\kappa_1^i}(t)C_J^{\kappa_1^i}(t)\Gamma(t), \cdots, K_{i,\kappa_{\tau_i}^i}(t)C_J^{\kappa_{\tau_i}^i}(t)\Gamma(t)\big]
\end{cases} \qquad (8)$$
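The Jacobians $A_F^{i}(t-1)$ and $C_J^{i}(t)$ in (8) are evaluated at the latest estimate and prediction. When closed-form derivatives are inconvenient, they can be approximated numerically; a minimal sketch (the helper name `numerical_jacobian` is ours, not the paper's):

```python
import numpy as np

def numerical_jacobian(func, x0, eps=1e-6):
    """Central-difference Jacobian of func at x0, approximating e.g.
    A_F^i(t-1) = df/dx evaluated at x = x_hat_i(t-1) in (8)."""
    x0 = np.asarray(x0, dtype=float)
    f0 = np.asarray(func(x0))
    J = np.zeros((f0.size, x0.size))
    for k in range(x0.size):
        d = np.zeros_like(x0)
        d[k] = eps
        # symmetric difference: O(eps^2) accurate
        J[:, k] = (np.asarray(func(x0 + d)) - np.asarray(func(x0 - d))) / (2 * eps)
    return J
```

For smooth $f$ and $g_i$ this agrees with the analytic Jacobians up to the step-size error, so it can also serve as a sanity check of hand-derived linearizations such as (30).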
Theorem 1 For a given parameter $\zeta_i > 0$, a group of optimal estimator gains $K_i(t)$ and
$K_{i,\kappa_i}(t)\,(\kappa_i \in \mathcal{N}_i)$ in (6) is obtained by solving the following convex optimization
problem:
with

$$\begin{cases} B_F^{K_i}(t) \triangleq \big[E_F^{K_i}(t) \;\; -K_{V_i}(t)\big] \\ E_C^{K_i}(t) \triangleq \big[K_{A_i}(t) \;\; K_{O_i}(t)\big] \end{cases} \qquad (10)$$
Proof Define

$$e_i(t) \triangleq x(t) - \hat{x}_i(t), \qquad e_i^{p}(t) \triangleq x(t) - \hat{x}_i^{p}(t) \qquad (11)$$
By using the first-order Taylor expansion, the nonlinear functions $f(x(t))$ and $g_i(x(t))$
can be written as follows:

$$\begin{cases} f(x(t)) = f(\hat{x}_i(t)) + A_F^{i}(t)e_i(t) + O_f(e_i(t)) \\ g_i(x(t)) = g_i(\hat{x}_i^{p}(t)) + C_J^{i}(t)e_i^{p}(t) + O_{g_i}(e_i^{p}(t)) \end{cases} \qquad (13)$$

where $O_f(e_i(t))$ and $O_{g_i}(e_i^{p}(t))$ represent the linearized errors, while $A_F^{i}(t)$ and $C_J^{i}(t)$
have been defined in (8). Then, it is obvious that

$$e_i^{p}(t) = A_F^{i}(t-1)e_i(t-1) + \Gamma(t)w(t-1) + O_f(e_i(t-1)) \qquad (14)$$
Since the linearized error $O_f(e_i(t-1))$ in (14) is unknown and the noise $w(t-1)$
is also unknown, it is proposed to introduce $\Gamma(t)\tilde{w}_i(t-1)$ to depict the term
"$\Gamma(t)w(t-1) + O_f(e_i(t-1))$" in (14). In this case, it follows from (12), (13) and (14)
that the estimation error $e_i(t)$ satisfies (15).
Define $\tilde{e}_{\kappa_i}^{i}(t) \triangleq e_i(t) - e_{\kappa_i}(t)$ and $\tilde{O}_{\kappa_i}^{i}(t) \triangleq \tilde{w}_i(t) - \tilde{w}_{\kappa_i}(t)$. Then, (15) can be rewritten
as:
$$\begin{aligned} e_i(t) ={}& \Big[\big(I - K_i(t)C_J^{i}(t)\big)A_F^{i}(t-1) - \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)C_J^{\kappa_i}(t)A_F^{\kappa_i}(t-1)\Big]e_i(t-1) \\ &+ \Big[I - K_i(t)C_J^{i}(t) - \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)C_J^{\kappa_i}(t)\Big]\Gamma(t)\tilde{w}_i(t-1) \\ &- K_i(t)\big[O_{g_i}(e_i^{p}(t)) + v_i(t)\big] - \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)\big[O_{g_{\kappa_i}}(e_{\kappa_i}^{p}(t)) + v_{\kappa_i}(t)\big] \\ &+ \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)C_J^{\kappa_i}(t)A_F^{\kappa_i}(t-1)\tilde{e}_{\kappa_i}^{i}(t-1) + \sum_{\kappa_i \in \mathcal{N}_i} K_{i,\kappa_i}(t)C_J^{\kappa_i}(t)\Gamma(t)\tilde{O}_{\kappa_i}^{i}(t-1) \end{aligned} \qquad (16)$$
where $A_F^{K_i}(t-1)$, $E_F^{K_i}(t)$, $K_{V_i}(t)$, $K_{A_i}(t)$ and $K_{O_i}(t)$ are defined by (8). It should be
pointed out that the terms $\chi_e^{i}(t-1)$ and $\chi_o^{i}(t-1)$ in (18), which are generated
from the neighboring information, are difficult to know accurately at the $i$th CH
under the distributed estimation structure. Hence, $\chi_e^{i}(t-1)$ and $\chi_o^{i}(t-1)$ are
proposed to be viewed as unknown disturbances, and then let us define $\chi_{ev}^{i}(t-1) \triangleq
\mathrm{col}\{\hat{v}_{\kappa_i}(t-1), \chi_e^{i}(t-1), \chi_o^{i}(t-1)\}$. In this case, (18) reduces to
For the distributed estimation error system (19), we introduce the following perfor-
mance index [6,7]:

$$\begin{aligned} J_i(t) \triangleq{}& e_i^{T}(t)e_i(t) - e_i^{T}(t-1)P_i(t)e_i(t-1) - [\chi_w^{i}(t-1)]^{T}\Theta_i(t)[\chi_w^{i}(t-1)] \\ &- [\chi_{ev}^{i}(t-1)]^{T}\Phi_i(t)[\chi_{ev}^{i}(t-1)] - 2e_i^{T}(t-1)P_{i1}(t)[\chi_w^{i}(t-1)] \\ &- 2e_i^{T}(t-1)P_{i2}(t)[\chi_{ev}^{i}(t-1)] - 2[\chi_w^{i}(t-1)]^{T}\Theta_{i1}(t)[\chi_{ev}^{i}(t-1)] \end{aligned} \qquad (20)$$

where $P_i(t) > 0$, $\Theta_i(t) > 0$ and $\Phi_i(t) > 0$, while $P_{i1}(t)$, $P_{i2}(t)$ and $\Theta_{i1}(t)$ are the
optimization variable matrices. When $J_i(t) < 0$, it follows from (19) and (20) that
$$\xi_i^{T}(t-1)\underbrace{\begin{bmatrix} \Xi_{11}^{i}(t) & \Xi_{12}^{i}(t) & \Xi_{13}^{i}(t) \\ * & \Xi_{22}^{i}(t) & \Xi_{23}^{i}(t) \\ * & * & \Xi_{33}^{i}(t) \end{bmatrix}}_{\Xi_i(t)}\,\xi_i(t-1) < 0 \qquad (21)$$

where $\xi_i(t-1) \triangleq \mathrm{col}\{e_i(t-1), \chi_w^{i}(t-1), \chi_{ev}^{i}(t-1)\}$, and
$$\begin{cases} \Xi_{11}^{i}(t) = (A_F^{K_i}(t-1))^{T}A_F^{K_i}(t-1) - P_i(t) \\ \Xi_{12}^{i}(t) = (A_F^{K_i}(t-1))^{T}B_F^{K_i}(t) - P_{i1}(t) \\ \Xi_{13}^{i}(t) = (A_F^{K_i}(t-1))^{T}E_C^{K_i}(t) - P_{i2}(t) \\ \Xi_{22}^{i}(t) = (B_F^{K_i}(t))^{T}B_F^{K_i}(t) - \Theta_i(t) \\ \Xi_{23}^{i}(t) = (B_F^{K_i}(t))^{T}E_C^{K_i}(t) - \Theta_{i1}(t) \\ \Xi_{33}^{i}(t) = (E_C^{K_i}(t))^{T}E_C^{K_i}(t) - \Phi_i(t) \end{cases} \qquad (22)$$
By using the Schur complement lemma [3], $\Xi_i(t) < 0$ in (21) directly yields that the
first inequality in (9) holds. In this case, it is derived from (20) that
$$e_i^{T}(t)e_i(t) < \xi_i^{T}(t-1)\begin{bmatrix} P_i(t) & P_{i1}(t) & P_{i2}(t) \\ * & \Theta_i(t) & \Theta_{i1}(t) \\ * & * & \Phi_i(t) \end{bmatrix}\xi_i(t-1) \qquad (23)$$
Notice that

$$\begin{aligned} \xi_i^{T}(t-1)\begin{bmatrix} P_i(t) & P_{i1}(t) & P_{i2}(t) \\ * & \Theta_i(t) & \Theta_{i1}(t) \\ * & * & \Phi_i(t) \end{bmatrix}\xi_i(t-1) &= \mathrm{Tr}\left\{\xi_i(t-1)\xi_i^{T}(t-1)\begin{bmatrix} P_i(t) & P_{i1}(t) & P_{i2}(t) \\ * & \Theta_i(t) & \Theta_{i1}(t) \\ * & * & \Phi_i(t) \end{bmatrix}\right\} \\ &\le \lambda_{\max}\big(\xi_i(t-1)\xi_i^{T}(t-1)\big)\,\mathrm{Tr}\left\{\begin{bmatrix} P_i(t) & P_{i1}(t) & P_{i2}(t) \\ * & \Theta_i(t) & \Theta_{i1}(t) \\ * & * & \Phi_i(t) \end{bmatrix}\right\} \\ &= \lambda_{\max}\big(\xi_i(t-1)\xi_i^{T}(t-1)\big)\big(\mathrm{Tr}\{P_i(t)\} + \mathrm{Tr}\{\Theta_i(t)\} + \mathrm{Tr}\{\Phi_i(t)\}\big) \end{aligned} \qquad (24)$$
Obviously, $\lambda_{\max}(\xi_i(t-1)\xi_i^{T}(t-1))\big(\mathrm{Tr}\{P_i(t)\} + \mathrm{Tr}\{\Theta_i(t)\} + \mathrm{Tr}\{\Phi_i(t)\}\big)$ can be
viewed as an upper bound of $e_i^{T}(t)e_i(t)$. Since $\xi_i(t-1)$ has been determined at time $t$,
minimizing this upper bound amounts to minimizing $\mathrm{Tr}\{P_i(t)\} + \mathrm{Tr}\{\Theta_i(t)\} + \mathrm{Tr}\{\Phi_i(t)\}$, which yields (25).
Then, the second inequality in (9) can be obtained from (25) by using the Schur comple-
ment lemma. This completes the proof.
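The key step in (24) rests on the inequality $\xi^{T}M\xi = \mathrm{Tr}\{\xi\xi^{T}M\} \le \lambda_{\max}(\xi\xi^{T})\,\mathrm{Tr}\{M\}$ for $M > 0$. A quick numerical sanity check of this step, using random data rather than the paper's matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 6
xi = rng.standard_normal(n)          # plays the role of xi_i(t-1)
A = rng.standard_normal((n, n))
M = A @ A.T + n * np.eye(n)          # positive definite, like the block matrix in (23)

quad = xi @ M @ xi                                    # xi^T M xi
lam = np.linalg.eigvalsh(np.outer(xi, xi)).max()      # lambda_max(xi xi^T) = ||xi||^2
bound = lam * np.trace(M)                             # the upper bound used in (24)
assert quad <= bound + 1e-9
```

The bound holds because $\xi^{T}M\xi \le \lambda_{\max}(M)\|\xi\|^{2}$ and, for $M > 0$, $\lambda_{\max}(M) \le \mathrm{Tr}\{M\}$, while $\lambda_{\max}(\xi\xi^{T}) = \|\xi\|^{2}$ for the rank-one matrix $\xi\xi^{T}$.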
Based on Theorem 1, the computation procedures for the DNSE x̂i (t) are summa-
rized by Algorithm 1.
Remark 1 Following the processing idea in [10,18,19,24], when one designs dis-
tributed nonlinear estimators, it is required to augment all local estimates $\hat{x}_i(t)\,(i =
1, 2, \ldots, L)$ to formulate a unified system, and thus the design process may still
need global information. Different from the above-mentioned idea, the parameters
in the $i$th convex optimization problem (9) depend only on the information
from the $i$th cluster itself and its neighbors. This means that the design and solu-
tion of the distributed nonlinear estimator are completed entirely with local information,
and the scheme is thus fully distributed. On the other hand, each optimization problem
(9) is established in terms of linear matrix inequalities, and thus it can be directly
solved by the function "mincx" of the MATLAB LMI Toolbox [3]. Moreover, it can be con-
cluded for optimization problem (9) that when the parameter $\zeta_i$ becomes larger, the
search space of the optimization variables is enlarged, and the solvability of the
optimization problem is thus improved. It should be pointed out that even if each
optimization problem (9) is solvable, the solutions cannot guarantee the stability of
the designed DNSE. In fact, how to determine the parameter $\zeta_i$ to guarantee the
stability of the DNSE is still challenging and will be one of our future works.
Remark 2 Notice that the computational complexity of the proposed approach is dominated
by solving the convex optimization problem (9), while an analytical characterization of this
optimization problem is hard to obtain. In this case, the computational cost of (9)
can be reduced by dwindling the dimension of the matrix inequalities in (9), and
the search space of the optimization problem can be shrunk by reducing the number
of unknown parameters.
4 Simulation Examples
Consider a range-only target tracking system, where a wheeled mobile robot with state
$x(t) = \mathrm{col}\{s_x(t), s_y(t), \theta(t)\}$ moves according to (1) with

$$\begin{cases} f(x(t)) \triangleq \begin{bmatrix} s_x(t) + \dfrac{u_p}{u_r}\cos\!\big(\theta(t) + \tfrac{T_0 u_r}{2}\big) \\[4pt] s_y(t) + \dfrac{u_p}{u_r}\sin\!\big(\theta(t) + \tfrac{T_0 u_r}{2}\big) \\[4pt] \theta(t) + \tfrac{T_0 u_r}{2} \end{bmatrix} \\ \Gamma(t) \triangleq \mathrm{diag}\{1, 1, T_0\}, \quad w(t) \triangleq \mathrm{col}\{w_1(t), w_2(t), w_3(t)\} \end{cases} \qquad (26)$$

Here, $T_0$ is the sampling period, $u_p$ is the motion command controlling the transla-
tional velocity, and $u_r$ is the motion command controlling the rotational velocity, while
$w_1(t)$, $w_2(t)$, $w_3(t)$ are unknown bounded noises. For this example, the above-
mentioned parameters are chosen as $T_0 = 0.5$, $u_p = 0.075$ and $u_r = 0.5$.
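Under these parameters, the noise-free part of the dynamics (26) can be coded directly; the state ordering $\mathrm{col}\{s_x, s_y, \theta\}$ is assumed from (27) and (30):

```python
import numpy as np

T0, u_p, u_r = 0.5, 0.075, 0.5   # sampling period and motion commands from the example

def f(x):
    """Noise-free robot dynamics in (26), with x = (s_x, s_y, theta)."""
    sx, sy, th = x
    phi = th + T0 * u_r / 2.0                 # heading shifted by T0*u_r/2
    return np.array([sx + (u_p / u_r) * np.cos(phi),
                     sy + (u_p / u_r) * np.sin(phi),
                     phi])
```

Iterating `f` from an initial state produces the circular-arc trajectory seen in Fig. 2; the full simulation additionally injects the bounded noise $\Gamma(t)w(t-1)$.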
In the X–Y plane, 12 sensors are employed to collect the target information, and each
sensor's position is denoted as $(s_{xj}, s_{yj})\,(j = 1, 2, \ldots, 12)$. Then, the distance from
the robot's planar Cartesian coordinates $(s_x(t), s_y(t))$ to each sensor $(s_{xj}, s_{yj})$ can be
expressed as follows:

$$d_j(t) = \sqrt{(s_{xj} - s_x(t))^2 + (s_{yj} - s_y(t))^2} + v_j(t) \quad (j = 1, 2, \ldots, 12) \qquad (27)$$

where $v_j(t)$ is the unknown measurement noise, and the positions $(s_{xj}, s_{yj})$ in the X–Y
plane are given by $(s_{x1}, s_{y1}) = (5, 15)$, $(s_{x2}, s_{y2}) = (7, 20)$, $(s_{x3}, s_{y3}) = (10, 17)$,
$(s_{x4}, s_{y4}) = (15, 28)$, $(s_{x5}, s_{y5}) = (20, 20)$, $(s_{x6}, s_{y6}) = (16, 19)$, $(s_{x7}, s_{y7}) = (20, 10)$,
$(s_{x8}, s_{y8}) = (17, 5)$, $(s_{x9}, s_{y9}) = (13, 8)$, $(s_{x10}, s_{y10}) = (10, 5)$, $(s_{x11}, s_{y11}) = (5, 3)$
and $(s_{x12}, s_{y12}) = (7, 12)$. Here, the noises $w_p(t)$, $w_r(t)$ and $v_i(t)\,(i = 1, \ldots, 4)$
Fig. 1 The moving robot is monitored by 12 sensors that are divided into 4 clusters, where Cluster 2 receives
the measurement information from Cluster 1, Cluster 3 receives the measurement information from Clusters
1 and 2, and Cluster 4 receives the measurement information from Clusters 1 and 3
are given as
$$\begin{cases} w_p(t) = 0.2\rho_p(t) - 0.1, \quad w_r(t) = 0.3\rho_r(t) - 0.1 \\ v_1(t) = \mathrm{col}\{0.05\rho_{v_1}(t) - 0.02,\; 0.03\rho_{v_2}(t) - 0.01,\; 0.03\rho_{v_3}(t) - 0.01\} \\ v_2(t) = \mathrm{col}\{0.03\rho_{v_4}(t) - 0.02,\; 0.05\rho_{v_5}(t) - 0.02,\; 0.06\rho_{v_6}(t) - 0.02\} \\ v_3(t) = \mathrm{col}\{0.05\rho_{v_7}(t) - 0.01,\; 0.06\rho_{v_8}(t) - 0.02,\; 0.04\rho_{v_9}(t) - 0.02\} \\ v_4(t) = \mathrm{col}\{0.03\rho_{v_{10}}(t) - 0.01,\; 0.04\rho_{v_{11}}(t) - 0.01,\; 0.03\rho_{v_{12}}(t) - 0.01\} \end{cases} \qquad (28)$$
where $\rho_p(t) \in [0, 1]$, $\rho_r(t) \in [0, 1]$ and $\rho_{v_j}(t) \in [0, 1]\,(j = 1, \ldots, 12)$ are ran-
dom variables that can be generated by the function "rand" of MATLAB. In this
case, the 12 sensors are divided into 4 clusters (see Fig. 1), and the measurement infor-
mation at the CHs is given by:
$$\begin{cases} y_1(t) = \mathrm{col}\{d_1(t), d_2(t), d_3(t)\} \\ y_2(t) = \mathrm{col}\{d_4(t), d_5(t), d_6(t)\} \\ y_3(t) = \mathrm{col}\{d_7(t), d_8(t), d_9(t)\} \\ y_4(t) = \mathrm{col}\{d_{10}(t), d_{11}(t), d_{12}(t)\} \end{cases} \qquad (29)$$
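The range model (27) and the cluster grouping (29) can be simulated as below, using the sensor positions of this example; the noise argument defaults to zero for clarity, and the helper names are ours:

```python
import numpy as np

# Sensor positions (s_xj, s_yj), j = 1..12, from the example
S = np.array([(5, 15), (7, 20), (10, 17), (15, 28), (20, 20), (16, 19),
              (20, 10), (17, 5), (13, 8), (10, 5), (5, 3), (7, 12)], dtype=float)

def ranges(pos, noise=None):
    """d_j(t) in (27): distance from the robot position to each sensor, plus noise."""
    v = np.zeros(len(S)) if noise is None else noise
    return np.linalg.norm(S - pos, axis=1) + v

def cluster_measurements(d):
    """Group the 12 ranges into the 4 CH measurement vectors of (29)."""
    return [d[0:3], d[3:6], d[6:9], d[9:12]]
```

Each CH thus observes a 3-dimensional measurement vector, matching $q_i = 3$ for every cluster in this example.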
Fig. 2 Trajectories of the state $x(t)$ and the DNSE $\hat{x}_i(t)$ at different clusters (panels a–d: the true trajectory versus the DNSE at Clusters 1–4)
Then, the linearization matrices AiF (t) and C iJ (t) in (8) are calculated by:
$$\begin{cases}
A_F^{i}(t) = \begin{bmatrix} 1 & 0 & -\dfrac{u_p}{u_r}\sin\!\big(\theta(t) + \tfrac{T_0 u_r}{2}\big) \\[4pt] 0 & 1 & \dfrac{u_p}{u_r}\cos\!\big(\theta(t) + \tfrac{T_0 u_r}{2}\big) \\[4pt] 0 & 0 & 1 \end{bmatrix}_{x(t)=x^{*}} \\[6pt]
\varphi_j(t) = \left.\left[\dfrac{-\tilde{s}_{xj}(t)}{\sqrt{\tilde{s}_{xj}^{2}(t) + \tilde{s}_{yj}^{2}(t)}},\; \dfrac{-\tilde{s}_{yj}(t)}{\sqrt{\tilde{s}_{xj}^{2}(t) + \tilde{s}_{yj}^{2}(t)}},\; 0\right]\right|_{x(t)=x^{*}} \\[6pt]
C_J^{1}(t) = \mathrm{col}\{\varphi_1(t), \varphi_2(t), \varphi_3(t)\}, \quad C_J^{2}(t) = \mathrm{col}\{\varphi_4(t), \varphi_5(t), \varphi_6(t)\} \\
C_J^{3}(t) = \mathrm{col}\{\varphi_7(t), \varphi_8(t), \varphi_9(t)\}, \quad C_J^{4}(t) = \mathrm{col}\{\varphi_{10}(t), \varphi_{11}(t), \varphi_{12}(t)\}
\end{cases} \qquad (30)$$
where $\tilde{s}_{xj}(t) \triangleq s_{xj} - s_x(t)$ and $\tilde{s}_{yj}(t) \triangleq s_{yj} - s_y(t)$.
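Each row $\varphi_j(t)$ in (30) is the gradient of the range $d_j$ with respect to the state; a finite-difference check of this gradient (the helper name `phi_row` is ours):

```python
import numpy as np

def phi_row(sensor, pos):
    """phi_j(t) in (30): gradient of the range d_j w.r.t. col{s_x, s_y, theta}."""
    diff = sensor - pos                    # (s_tilde_xj, s_tilde_yj)
    r = np.linalg.norm(diff)
    return np.array([-diff[0] / r, -diff[1] / r, 0.0])

# finite-difference check against d_j(pos) = ||sensor - pos||
sensor = np.array([5.0, 15.0])
pos = np.array([1.0, 2.0])
eps = 1e-6
num = [(np.linalg.norm(sensor - (pos + e)) - np.linalg.norm(sensor - (pos - e))) / (2 * eps)
       for e in (np.array([eps, 0.0]), np.array([0.0, eps]))]
assert np.allclose(phi_row(sensor, pos)[:2], num, atol=1e-6)
```

As expected for a distance function, the position part of each row has unit norm, and the heading component is zero since $d_j$ does not depend on $\theta$.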
In the simulation, the unknown but bounded noises $w_i(t)$ and $v_j(t)$ are generated
from different random bounded functions. Notice that each sensor in Fig. 1 is placed
on the edge of the surveillance region, and the robot never reaches a sensor position, i.e.,
$s_x(t) \neq s_{xj}$ or $s_y(t) \neq s_{yj}\,(\forall t)$. This means that $d_j(t)$ in this example is continuously
differentiable, and thus the differentiability condition imposed on the measurement
functions is satisfied. Then, choosing $\zeta_i = 1\,(i = 1, 2, \ldots, L)$ and implementing
Algorithm 1 with the LMI Toolbox in MATLAB, the trajectories of the robot $x(t)$
and the DNSE $\hat{x}_i(t)$ are plotted in Fig. 2. It can be seen from this figure that the
DNSE $\hat{x}_i(t)$ tracks the robot well.
Due to the random noises in process system and measurement information, the esti-
mation performance is assessed by the mean square errors (MSEs) that are calculated
Fig. 3 a Trajectories of the MSEs for the DNSEs $\hat{x}_i(t)$ at different clusters; b–d performance comparisons (SMSE) between the proposed method in this paper and the local estimators designed only on the basis of the measurement information from a single cluster
by the Monte Carlo method [1] with an average of 100 runs. Then, the MSEs of the DNSEs
at different clusters are depicted in Fig. 3a, which shows that the estimation
errors are bounded for this example and thus illustrates the effectiveness of the
proposed methods. Furthermore, to demonstrate its advantages, the proposed method is
compared with the estimator designed only on the basis of the measurement information
from each cluster itself, where the sum of MSEs (SMSE) [25] is used to assess the
estimation precision, again by the Monte Carlo method with an average of 100 runs. The
performance comparisons are shown in Fig. 3b–d, from which it is seen that
the estimation precision of the proposed method is higher than that of
each local estimator based on the information from a single cluster.
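The Monte Carlo MSE used here averages the squared estimation error over independent runs; a generic sketch, in which `run_once` stands for one full simulation of the system and an estimator (placeholder, not the paper's simulation code):

```python
import numpy as np

def monte_carlo_mse(run_once, n_runs=100, seed=0):
    """Average squared estimation error over n_runs independent simulations.

    run_once(rng) must return (x_true, x_est): arrays of shape (T, n).
    Returns the per-step MSE curve, an array of length T.
    """
    rng = np.random.default_rng(seed)
    acc = None
    for _ in range(n_runs):
        x_true, x_est = run_once(rng)
        err = np.sum((x_true - x_est) ** 2, axis=1)   # squared error at each step
        acc = err if acc is None else acc + err
    return acc / n_runs
```

The SMSE comparison in Fig. 3b–d simply accumulates such per-step MSE values over time for each estimator.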
5 Conclusions
with communication uncertainties under the same framework of this paper will be one
of our future works.
Data Availability The data used to support the findings of this study are available from the corresponding
author upon request.
References
1. Y. Bar-Shalom, X.R. Li, T. Kirubarajan, Estimation with Applications to Tracking and Navigation
(Wiley, New York, 2001)
2. S. Battilotti, M. Mekhail, Distributed estimation for nonlinear systems. Automatica 107, 562–573
(2019)
3. S.P. Boyd, L.E. Ghaoui, E. Feron, V. Balakrishnan, Linear Matrix Inequalities in System and Control
Theory (SIAM, Philadelphia, 1994)
4. M.G.S. Bruno, S.S. Dias, A Bayesian interpretation of distributed diffusion filtering algorithms [Lecture
Notes]. IEEE Signal Process. Mag. 35, 118–123 (2018)
5. S. Chen, B. Chen, F. Shi, Distributed fault-tolerant consensus protocol for fuzzy multi-agent systems.
Circuits Syst. Signal Process. 38, 611–624 (2019)
6. B. Chen, G. Hu, Nonlinear state estimation under bounded noises. Automatica 98, 159–168 (2018)
7. B. Chen, G. Hu, D.W.C. Ho, L. Yu, A new approach to linear/nonlinear distributed fusion estimation
problem. IEEE Trans. Autom. Control 64, 1301–1308 (2019)
8. S. Chen, Y. Liu, Robust distributed parameter estimation of nonlinear systems with missing data over
networks. IEEE Trans. Aerosp. Electron. Syst. 56, 2228–2244 (2020)
9. J. Ding, J. Xiao, Y. Zhang, M.N. Iqbal, Distributed state estimation for discrete-time nonlinear system
with unknown inputs. Circuits Systems Signal Process. 33, 3421–3441 (2014)
10. H. Dong, Z. Wang, H. Gao, Distributed H∞ filtering for a class of Markovian jump nonlinear time-delay
systems over lossy sensor networks. IEEE Trans. Ind. Electron. 60, 4665–4672 (2013)
11. P. Duan, Z. Duan, Y. Lv, G. Chen, Distributed finite-horizon extended Kalman filtering for uncertain
nonlinear systems. IEEE Trans. Cybernet. (2019). https://doi.org/10.1109/TCYB.2019.2919919
12. W. Li, Y. Jia, Consensus-based distributed multiple model UKF for jump Markov nonlinear systems.
IEEE Trans. Autom. Control 57, 227–233 (2012)
13. W. Li, Y. Jia, J. Du, Distributed extended Kalman filter with nonlinear consensus estimate. J. Franklin
Inst. 354, 7983–7995 (2017)
14. C. Lin, C. Wu, Linear coherent distributed estimation with cluster-based sensor networks. IET Signal
Proc. 6, 626–632 (2012)
15. A. Rastegarnia, Reduced-communication diffusion RLS for distributed estimation over multi-agent
networks. IEEE Trans. Circuits Syst. II Express Briefs 67, 177–181 (2020)
16. S.M. Shafiul Alam, B. Natarajan, A. Pahwa, S. Curto, Agent based state estimation in smart distribution
grid. IEEE Latin Am. Trans. 13, 496–502 (2015)
17. J. Shi, S. Liu, B. Chen, L. Yu, Distributed data-driven intrusion detection for sparse stealthy FDI
attacks in smart grids. IEEE Trans. Circuits Syst. II Express Briefs (2020). https://doi.org/10.1109/
TCSII.2020.3020139
18. X. Wang, G. Yang, Distributed event-triggered H∞ filtering for discrete-time T–S fuzzy systems over
sensor networks. IEEE Trans. Syst. Man Cybern. Syst. 50, 3269–3280 (2020)
19. Y. Xu, R. Lu, P. Shi, H. Li, S. Xie, Finite-time distributed state estimation over sensor networks with
round-robin protocol and fading channels. IEEE Trans. Cybernet. 48, 336–345 (2018)
20. F. Yang, Q. Han, Y. Liu, Distributed H∞ state estimation over a filtering network with time-varying
and switching topology and partial information exchange. IEEE Trans. Cybernet. 49, 870–882 (2019)
21. O. Younis, M. Krunz, S. Ramasubramanian, Node clustering in wireless sensor networks: recent
developments and deployment challenges. IEEE Netw. 20, 20–25 (2006)
22. H. Zayyani, Robust minimum disturbance diffusion LMS for distributed estimation. IEEE Trans.
Circuits Syst. II Express Briefs 68, 521–525 (2021)
23. H. Zayyani, M. Korki, F. Marvasti, Bayesian hypothesis testing detector for one bit diffusion LMS
with blind missing samples. Signal Process. 146, 61–65 (2018)
24. D. Zhang, W. Cai, L. Xie, Q. Wang, Nonfragile distributed filtering for T–S fuzzy systems in sensor
networks. IEEE Trans. Fuzzy Syst. 23, 1883–1890 (2015)
25. D. Zhu, B. Chen, Z. Hong, L. Yu, Networked nonlinear fusion estimation under DoS attacks. IEEE
Sens. J. (2020). https://doi.org/10.1109/JSEN.2020.3039918
Publisher’s Note Springer Nature remains neutral with regard to jurisdictional claims in published maps
and institutional affiliations.