Journal of Mathematics
Volume 2021, Article ID 6627298, 13 pages
https://doi.org/10.1155/2021/6627298
Research Article
New Models for Solving Time-Varying LU Decomposition by Using
ZNN Method and ZeaD Formulas
Liangjie Ming,1,2 Yunong Zhang,2,3 Jinjin Guo,2,3 Xiao Liu,1,2 and Zhonghua Li4
1School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
2Research Institute of Sun Yat-sen University in Shenzhen, Shenzhen 518057, China
3School of Computer Science and Engineering, Sun Yat-sen University, Guangzhou 510006, China
4School of Intelligent Systems Engineering, Sun Yat-sen University, Guangzhou 510006, China
Received 13 November 2020; Revised 22 March 2021; Accepted 28 March 2021; Published 22 April 2021
Copyright © 2021 Liangjie Ming et al. This is an open access article distributed under the Creative Commons Attribution License,
which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
In this paper, by employing the Zhang neural network (ZNN) method, an effective continuous-time LU decomposition (CTLUD) model is first proposed, analyzed, and investigated for solving the time-varying LU decomposition problem. Then, for the convenience of digital hardware realization, this paper proposes three discrete-time models by using the Euler, 4-instant Zhang et al. discretization (ZeaD), and 8-instant ZeaD formulas, respectively, to discretize the proposed CTLUD model. Furthermore, the proposed models are used to perform the LU decomposition of three time-varying matrices with different dimensions. Results indicate that the proposed models are effective for solving the time-varying LU decomposition problem, and the 8-instant ZeaD LU decomposition model has the highest precision among the three discrete-time models.
problems are becoming more and more important [16–18]. Besides, an important application of time-varying LU decomposition is solving the angle-of-arrival (AoA) localization problem [19], which is consistent with the time-varying linear system and has been widely applied in various fields [20, 21]. The AoA localization problem can be formulated as A(t)x(t) = b(t). Thus, using a model or method that can obtain the LU decomposition of the time-varying matrix A(t) in real time can accelerate the solving process of the AoA localization problem.

Due to its powerful capabilities in time-varying information processing, the Zhang neural network (ZNN) method has been applied to solving various time-varying problems [16–18, 22–28]. For example, a finite-time convergent ZNN was proposed in [16] for finding the real-time matrix square root, and a novel discrete-time ZNN was proposed in [17] for solving the time-varying matrix inversion problem. Therefore, the ZNN method is employed to solve the time-varying LU decomposition problem in this paper.

In the process of applying the ZNN method to solving time-varying problems [29–31], an error function is constructed first. Then, by using the ZNN design formula, the error function is forced to converge to zero, and the continuous-time model for solving the original problem is obtained. In this paper, the corresponding model is termed the continuous-time LU decomposition (CTLUD) model. In addition, by employing three one-step-ahead finite difference formulas (i.e., the Euler, 4-instant Zhang et al. discretization (ZeaD), and 8-instant ZeaD formulas) to discretize the proposed CTLUD model, three corresponding discrete-time models are obtained.

The remainder of this paper is composed of six sections. In Section 2, the formulation of the time-varying LU decomposition problem is presented. In Section 3, the design process of the CTLUD model is presented. In Section 4, three discretization formulas are presented and the corresponding discrete-time models are obtained. In Section 5, the proposed CTLUD model is employed to perform the LU decomposition of three time-varying matrices with different dimensions, and the corresponding results are shown. In Section 6, the experiment results of the three discrete-time models are presented. In Section 7, this paper is concluded. Before ending the introduction, the main contributions of this paper are recapped as follows:

(1) Different from static LU decomposition problem analysis, this paper considers and analyzes the time-varying LU decomposition problem.

(2) An effective CTLUD model is proposed by employing the ZNN method, Kronecker product, and vectorization techniques.

(3) Three discrete-time LU decomposition models are obtained by employing three discretization formulas to discretize the proposed CTLUD model.

(4) Experiment results substantiate that the proposed models are effective for solving the time-varying LU decomposition problem.

2. Problem Formulation

Generally, the time-varying LU decomposition problem can be formulated as follows:

A(t) = L(t)U(t),   (1)

where A(t) ∈ R^{n×n} denotes a smoothly time-varying matrix to be decomposed, L(t) ∈ R^{n×n} denotes a unit lower triangular matrix, and U(t) ∈ R^{n×n} denotes an upper triangular matrix. Note that L(t) and U(t) are unknown time-varying matrices to be obtained.

It is worth noting that not all time-varying matrices have corresponding LU decompositions. In this paper, in order to simplify the problem, we only consider the situation that A(t) is a diagonally dominant matrix, which guarantees that (1) is solvable.
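For reference, the following minimal Python/NumPy sketch (added here and not part of the original paper) computes the factors of (1) with the matrix frozen at a single time instant, using the classical unpivoted Doolittle procedure, which is well defined under the diagonal-dominance assumption above. The helper names doolittle_lu and A_of_t are illustrative, and the test matrix is the 2-dimensional example later used in Section 5.1.

import numpy as np

def doolittle_lu(A):
    # Unpivoted Doolittle LU: A = L @ U with unit lower triangular L.
    # Well defined here because A(t) is assumed to be diagonally dominant.
    n = A.shape[0]
    L, U = np.eye(n), np.zeros((n, n))
    for i in range(n):
        U[i, i:] = A[i, i:] - L[i, :i] @ U[:i, i:]
        L[i + 1:, i] = (A[i + 1:, i] - L[i + 1:, :i] @ U[:i, i]) / U[i, i]
    return L, U

def A_of_t(t):
    # 2-dimensional time-varying matrix of Example 1 in Section 5.1, cf. (31)
    return np.array([[10 + np.sin(t), np.cos(2 * t)],
                     [3.0, 2 * np.sin(t) + 10]])

L_star, U_star = doolittle_lu(A_of_t(1.0))          # "theoretical" factors at t = 1
print(np.allclose(L_star @ U_star, A_of_t(1.0)))    # True

Such a pointwise decomposition only serves as a static reference; the ZNN-based models developed below instead track L(t) and U(t) as t evolves.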
3. Continuous-Time Model

In this section, by employing the ZNN method, a CTLUD model is proposed.

According to the previous work on the ZNN method, if we want to find a theoretical solution matrix T(t) of a time-varying matrix problem, an appropriate error function E(t) is indispensable, where E(t) = T(t) − R(t) indicates the difference between T(t) and the actual result matrix R(t). In this paper, E(t) is constructed as follows:

E(t) = A(t) − L(t)U(t).   (2)

Then, E(t) is adjusted dynamically by the following ZNN design formula:

Ė(t) = −λE(t),   (3)

which indicates that E(t) is forced to converge to the zero matrix globally and exponentially, where Ė(t) denotes the first-order time derivative of E(t). The design parameter λ > 0 ∈ R is used to adjust the convergence speed; the convergence speed becomes higher as the value of λ increases.

By substituting (2) into (3), the following dynamic matrix equation is obtained:

Ȧ(t) − L̇(t)U(t) − L(t)U̇(t) = −λE(t),   (4)

where Ȧ(t), L̇(t), and U̇(t) denote the first-order time derivatives of A(t), L(t), and U(t), respectively.

Note that some elements in L(t) and U(t) are known, and those known elements do not need to be solved for. Thus, we construct the following two vectors:

ζ(t) = [l_{21}(t) l_{31}(t) ··· l_{n1}(t) l_{32}(t) ··· l_{n(n−1)}(t)]^T,
u(t) = [u_{11}(t) u_{12}(t) u_{22}(t) u_{13}(t) ··· u_{nn}(t)]^T,   (5)

where ζ(t) ∈ R^{(1/2)n(n−1)×1} and u(t) ∈ R^{(1/2)n(n+1)×1} are composed of all unknown elements of L(t) and U(t), respectively. Meanwhile, l_{ij}(t) and u_{ij}(t) denote the elements of L(t) and U(t) in row i and column j, respectively. The superscript T denotes the matrix transpose operator.

Furthermore, we have the following two theorems about the relations between L(t), U(t) and the vectors ζ(t), u(t).

Theorem 1. If L(t) ∈ R^{n×n} is a unit lower triangular matrix, then the following equation holds true:

vec(L(t)) = Hζ(t) + vec(I_n),   (6)

where vec(·) denotes the column vector obtained by stacking all the column vectors of the operated matrix together, and matrix H is defined as follows:

H = [h_{n−1} h_{n−2} ··· h_2 h_1] ∈ R^{n²×(1/2)n(n−1)},   (7)

with

h_i = [0_{n²−i(n+1),i}; I_i; 0_{in,i}] ∈ R^{n²×i},   (8)

where I_i ∈ R^{i×i} denotes an identity matrix and 0_{p,q} ∈ R^{p×q} denotes a zero matrix. Note that h_1 is a vector.

Proof. Let H multiply ζ(t), and we have

Hζ(t) = [0 l_{21}(t) l_{31}(t) ··· l_{n1}(t) 0 0 l_{32}(t) ··· l_{n(n−1)}(t) ··· 0]^T = vec(D(t)) ∈ R^{n²×1},   (9)

where D(t) denotes the strictly lower triangular part of L(t), which is rewritten as

Hζ(t) = vec(L(t) − I_n) = vec(L(t)) − vec(I_n).   (10)

Thus, (6) holds true, and the proof is completed. □

Theorem 2. If U(t) ∈ R^{n×n} is an upper triangular matrix, then the following equation holds true:

vec(U(t)) = Ku(t),   (11)

where matrix K is defined as follows:

K = [k_1 k_2 ··· k_{n−1} k_n] ∈ R^{n²×(1/2)n(n+1)},   (12)

and k_i is defined as

k_i = [0_{n(i−1),i}; I_i; 0_{n²−n(i−1)−i,i}] ∈ R^{n²×i}.   (13)

Proof. The proof is similar to that of Theorem 1 and is thus omitted. □
According to the previous statements, we have

vec(L(t)) = Hζ(t) + vec(I_n),
vec(U(t)) = Ku(t),
vec(L̇(t)) = Hζ̇(t),
vec(U̇(t)) = Ku̇(t),   (14)

where ζ̇(t) and u̇(t) denote the first-order time derivatives of ζ(t) and u(t), respectively.

Furthermore, the Kronecker product and vectorization techniques are employed, which are formulated by the following lemma [35, 36]: for matrices of compatible dimensions, vec(ABC) = (C^T ⊗ A)vec(B), where ⊗ denotes the Kronecker product. Applying this lemma and (14) to (4), the dynamic matrix equation is vectorized as

vec(Ȧ(t) + λE(t)) = [U^T(t) ⊗ I_n   I_n ⊗ L(t)][vec(L̇(t)); vec(U̇(t))]
                  = [(U^T(t) ⊗ I_n)H   (I_n ⊗ L(t))K][ζ̇(t); u̇(t)].   (17)

For simplification, we denote

v(t) = vec(Ȧ(t) + λE(t)) ∈ R^{n²×1},
M(t) = [(U^T(t) ⊗ I_n)H   (I_n ⊗ L(t))K] ∈ R^{n²×n²},
ṡ(t) = [ζ̇(t); u̇(t)] ∈ R^{n²×1}.   (18)

Therefore, we get

M(t)ṡ(t) = v(t).   (19)

Furthermore, we have the following CTLUD model:

ṡ(t) = M†(t)v(t),   (20)

where M†(t) denotes the pseudo-inverse matrix of M(t) [34]. Meanwhile, by giving the initial value of s(t), ζ(t) and u(t) can be obtained via (20). According to Theorems 1 and 2, L(t) and U(t) are also obtained. Note that (20) is also a neurodynamics model (which transforms a matrix decomposition problem into a matrix differential equation problem), can obtain the solution in real time, and has the advantage of parallelizability [22, 36]; these are also the advantages of the proposed CTLUD model. Besides, Figure 1 shows the block diagram of the solving process of the proposed CTLUD model for the time-varying LU decomposition problem.

Furthermore, we have the following theorem, which discusses the convergence performance of the proposed CTLUD model.

Theorem 3. With A(t) being a smoothly time-varying matrix and M(t) being always nonsingular, the elements of E(t) converge to zero globally and exponentially.

Proof. Let e_{ij}(t) denote the element of E(t) in row i and column j, where i = 1, 2, ..., n and j = 1, 2, ..., n. According to (3), we have ė_{ij}(t) = −λe_{ij}(t), whose solution is mathematically expressed as e_{ij}(t) = e_{ij}(0)exp(−λt). As time t → ∞, e_{ij}(t) exponentially converges to zero. Then, E(t) globally and exponentially converges to the zero matrix; that is, the solution of the CTLUD model globally and exponentially converges to the theoretical solution. The proof is thus completed. □

The following remark discusses the computational complexity of the proposed CTLUD model.

Remark 1. Using big O notation, the time complexity of each step in evaluating the proposed CTLUD model is listed as follows. Note that only the highest order in each step is concerned.

The time complexity of obtaining M(t): O(n⁴).
The time complexity of obtaining v(t): O(n²).
The time complexity of the matrix multiplication operation: O(n⁴).
The time complexity of the matrix pseudo-inverse operation: O(n⁶).

Thus, in the case that we use a traditional serial numerical algorithm, the total time complexity of the proposed CTLUD model is O(n⁶). It is worth pointing out that the proposed CTLUD model is a neurodynamics model and can be implemented by parallel distributed processing. In that case, the time complexities of obtaining M(t) and v(t) can be reduced to O(n²) and O(n) [36], respectively. Besides, the capability of the ZNN method to find the matrix pseudo-inverse in parallel has been revealed in many works [37, 38]. The time complexities of the matrix multiplication operation and the matrix pseudo-inverse operation can be reduced to O(n²) by sacrificing space complexity [36].
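To illustrate how (18)–(20) can be evaluated numerically, the following Python sketch (a simple forward-Euler integration added here, not the authors' implementation) reuses build_H and build_K from the previous sketch under the Example 1 setting of Section 5.1. The names ctlud_step and unpack are illustrative, and the analytic derivative Ȧ(t) is supplied directly.

import numpy as np

def ctlud_step(A, A_dot, L, U, H, K, lam):
    # Right-hand side of the CTLUD model (20): s_dot = pinv(M(t)) @ v(t).
    n = A.shape[0]
    E = A - L @ U                                            # error matrix (2)
    v = (A_dot + lam * E).flatten('F')                       # v(t) in (18)
    M = np.hstack((np.kron(U.T, np.eye(n)) @ H,              # M(t) in (18)
                   np.kron(np.eye(n), L) @ K))
    return np.linalg.pinv(M) @ v                             # s_dot(t) in (20)

def unpack(s, n):
    # Rebuild L(t) and U(t) from the packed vector s = [zeta; u], cf. (5).
    zeta, u = s[:n * (n - 1) // 2], s[n * (n - 1) // 2:]
    L, U, p, q = np.eye(n), np.zeros((n, n)), 0, 0
    for j in range(n - 1):
        L[j + 1:, j] = zeta[p:p + n - 1 - j]; p += n - 1 - j
    for j in range(n):
        U[:j + 1, j] = u[q:q + j + 1]; q += j + 1
    return L, U

# Forward-Euler integration on the 2-dimensional matrix of Example 1 with lambda = 3.
n, lam, dt, tf = 2, 3.0, 1e-3, 10.0
A_of_t = lambda t: np.array([[10 + np.sin(t), np.cos(2 * t)],
                             [3.0, 2 * np.sin(t) + 10]])
A_dot_of_t = lambda t: np.array([[np.cos(t), -2 * np.sin(2 * t)],
                                 [0.0, 2 * np.cos(t)]])
H, K = build_H(n), build_K(n)
s = np.array([2.0, 1.0, 2.0, 3.0])          # s(0) as in (32)
for k in range(int(tf / dt)):
    L, U = unpack(s, n)
    s = s + dt * ctlud_step(A_of_t(k * dt), A_dot_of_t(k * dt), L, U, H, K, lam)
L, U = unpack(s, n)
print(np.linalg.norm(A_of_t(tf) - L @ U, 'fro'))   # ||E(t)||_F, small after convergence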
4. Discrete-Time Models

In this section, for the convenience of digital hardware realization [34, 39–41], three discrete-time LU decomposition models are proposed, discussed, and investigated. Note that the three discrete-time models are obtained by applying three corresponding discretization formulas.

4.1. Discretization Formulas. The Euler formula [42], which is also a one-step-ahead discretization formula, is presented as follows:

ṡ(t_k) = (s(t_{k+1}) − s(t_k))/η + O(η),   (21)

which contains the values of two time instants, where s(t_{k+i}) = s((k + i)η) and η denotes the sampling gap. Besides, O(η) denotes the first-order truncation error.

A 4-instant ZeaD formula [43], which contains the values of four time instants, is presented as follows:

ṡ(t_k) = (2s(t_{k+1}) − 3s(t_k) + 2s(t_{k−1}) − s(t_{k−2}))/(2η) + O(η²),   (22)

where O(η²) denotes the second-order truncation error.

The last discretization formula employed in this paper is an 8-instant ZeaD formula [44], which is presented as follows:

ṡ(t_k) = (1/η)[(200/483)s(t_{k+1}) + (1097/9660)s(t_k) − (160/483)s(t_{k−1}) − (20/69)s(t_{k−2}) + (20/483)s(t_{k−3}) + (55/1932)s(t_{k−4}) + (44/805)s(t_{k−5}) − (5/161)s(t_{k−6})] + O(η⁴),   (23)

where O(η⁴) denotes the fourth-order truncation error.

4.2. Corresponding Models. According to the three discretization formulas presented in the last subsection, the following three equations are obtained:

s(t_{k+1}) = ηṡ(t_k) + s(t_k) + O(η²),   (24)

s(t_{k+1}) = ηṡ(t_k) + (3/2)s(t_k) − s(t_{k−1}) + (1/2)s(t_{k−2}) + O(η³),

s(t_{k+1}) = (483/200)ηṡ(t_k) − (393/1433)s(t_k) + (4/5)s(t_{k−1}) + (7/10)s(t_{k−2}) − (1/10)s(t_{k−3}) − (11/160)s(t_{k−4}) − (33/250)s(t_{k−5}) + (3/40)s(t_{k−6}) + O(η⁵).   (25)
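As a quick sanity check on the coefficients of (23), the following Python sketch (added for illustration, not from the paper) applies the 8-instant ZeaD formula to a smooth test signal and observes how the derivative error shrinks with η; zead8_derivative is an illustrative name.

import numpy as np

def zead8_derivative(samples, eta):
    # samples = [s(t_{k-6}), ..., s(t_k), s(t_{k+1})]; returns the 8-instant
    # ZeaD approximation (23) of s_dot(t_k).
    c = [-5 / 161, 44 / 805, 55 / 1932, 20 / 483, -20 / 69, -160 / 483,
         1097 / 9660, 200 / 483]          # coefficients of s(t_{k-6}), ..., s(t_{k+1})
    return np.dot(c, samples) / eta

s_fun = lambda t: np.exp(np.sin(t))       # smooth test signal
s_dot = lambda t: np.cos(t) * np.exp(np.sin(t))
t_k = 1.0
for eta in (1e-1, 1e-2, 1e-3):
    grid = t_k + eta * np.arange(-6, 2)   # t_{k-6}, ..., t_{k+1}
    err = abs(zead8_derivative(s_fun(grid), eta) - s_dot(t_k))
    print(eta, err)                       # error decreases roughly as O(eta^4)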
Figure 1: Block diagram of the proposed CTLUD model, showing the input A(t) and dA(t)/dt, the vectorization technique, the Kronecker products with matrices H and K, the design parameter λ, the matrix multiplication operation, the differential equation solver, the reshape operation, and the output L(t) and U(t).
By dropping the truncation error terms in (24) and (25), the following three models are obtained:

s(t_{k+1}) ≐ ηṡ(t_k) + s(t_k),   (26)

s(t_{k+1}) ≐ ηṡ(t_k) + (3/2)s(t_k) − s(t_{k−1}) + (1/2)s(t_{k−2}),   (27)

s(t_{k+1}) ≐ (483/200)ηṡ(t_k) − (393/1433)s(t_k) + (4/5)s(t_{k−1}) + (7/10)s(t_{k−2}) − (1/10)s(t_{k−3}) − (11/160)s(t_{k−4}) − (33/250)s(t_{k−5}) + (3/40)s(t_{k−6}),   (28)

where "≐" denotes the computation assignment operator from its right-hand side to its left-hand side. In this paper, (26)–(28) are termed the Euler discrete-time LU decomposition (EDTLUD) model, the 4-instant ZeaD LU decomposition (4IZLUD) model, and the 8-instant ZeaD LU decomposition (8IZLUD) model, respectively. Besides, we have the following theorem.

Theorem 4. With η ∈ (0, 1) denoting the sampling gap, the EDTLUD, 4IZLUD, and 8IZLUD models are 0-stable, consistent, and convergent, and their truncation error orders are O(η²), O(η³), and O(η⁵), respectively.

Proof. The EDTLUD model is chosen as an example. The characteristic polynomial of the EDTLUD model is expressed as follows:

P(c) = c − 1.   (29)

Evidently, (29) has only one root on the unit circle, that is, c = 1. According to Lemma 2 in the appendix, the EDTLUD model is 0-stable. Moreover, based on (24), the EDTLUD model has a truncation error of O(η²). Therefore, we come to the conclusion that the EDTLUD model is consistent and convergent according to Lemmas 3–5 in the appendix. As for the 4IZLUD model and the 8IZLUD model, the proofs are similar to the above and are thus omitted here. The proof is completed. □

Algorithm 1 shows the pseudocode describing the various steps of the proposed 4IZLUD model. As for the EDTLUD model and the 8IZLUD model, their operation processes are very similar to that of the 4IZLUD model and are thus omitted. The following remark presents the analyses of the computational complexities of the proposed discrete-time models.

Remark 2. Using big O notation, the time complexity of each step in obtaining the solution of the time-varying LU decomposition is listed as follows. Note that only the highest order in each step is concerned.

The time complexity of obtaining E(t_k), Ȧ(t_k), and v(t_k): O(n²).
The time complexity of obtaining M(t_k): O(n⁴).
The time complexity of obtaining M†(t_k): O(n⁶).
The time complexity of obtaining ṡ(t_k): O(n⁴).
The time complexity of obtaining s(t_{k+1}): O(n²).

Thus, in the case that we use a traditional serial numerical algorithm, the total time complexity of the proposed discrete-time models is O(n⁶). Similar to the statements in Remark 1, the total time complexity of the proposed discrete-time models can also be reduced to O(n²) by sacrificing space complexity [36].
Algorithm 1: The proposed 4IZLUD model.

Data: matrix A(t), initial value s(0), sampling gap η, design parameter λ, and final computation time t_f.
Result: matrices L(t_{k+1}) and U(t_{k+1}).
(1) s(t_{−1}) and s(t_{−2}) ← 0;
(2) L(0) and U(0) ← reshape s(0);
(3) k ← 0;
(4) while k ≤ t_f/η do
(5)   E(t_k) ← A(t_k) − L(t_k)U(t_k);
(6)   Ȧ(t_k) ← diff(A(t_k));
(7)   v(t_k) ← vec(Ȧ(t_k) + λE(t_k));
(8)   M(t_k) ← [(U^T(t_k) ⊗ I_n)H  (I_n ⊗ L(t_k))K];
(9)   ṡ(t_k) ← M†(t_k)v(t_k);
(10)  s(t_{k+1}) ← ηṡ(t_k) + 1.5s(t_k) − s(t_{k−1}) + 0.5s(t_{k−2});
(11)  L(t_{k+1}) and U(t_{k+1}) ← reshape s(t_{k+1});
(12)  k ← k + 1;
(13) end
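A compact Python rendering of Algorithm 1 (added for illustration, not the authors' code) is given below. It reuses ctlud_step, unpack, build_H, and build_K from the earlier sketches and, for simplicity, uses the analytic derivative Ȧ(t) instead of the numerical difference of step (6); the name four_izlud is an assumption of this sketch.

import numpy as np

def four_izlud(A_of_t, A_dot_of_t, s0, n, eta, lam, tf, H, K):
    # 4IZLUD iteration (27): s_{k+1} = eta*s_dot_k + 1.5 s_k - s_{k-1} + 0.5 s_{k-2}.
    s_km2, s_km1, s_k = np.zeros_like(s0), np.zeros_like(s0), s0.astype(float)
    for k in range(int(tf / eta)):
        t_k = k * eta
        L, U = unpack(s_k, n)
        s_dot = ctlud_step(A_of_t(t_k), A_dot_of_t(t_k), L, U, H, K, lam)
        s_kp1 = eta * s_dot + 1.5 * s_k - s_km1 + 0.5 * s_km2
        s_km2, s_km1, s_k = s_km1, s_k, s_kp1
    return unpack(s_k, n)

Replacing the update of s(t_{k+1}) with the Euler update of (26), or with the 8-instant update of (28) using six stored past states, gives the corresponding EDTLUD and 8IZLUD variants.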
A(t) is the 7-dimensional diagonally dominant time-varying matrix given in (30) and used in Example 3; its diagonal entries are dominated by the constants 50, 100, 200, 300, and 500 (e.g., terms such as 50 + sin(2t), 100 + sin(t), 200 − cos(t), and 500 + sin(3t)), and its off-diagonal entries are bounded combinations of sin, cos, log(t + 1), 1/(t + 1), and low-order polynomial terms in t.   (30)
5. Computer Simulations and Results of CTLUD Model

In this section, to verify the effectiveness of the proposed CTLUD model, three time-varying matrices are presented. Note that the three matrices have different dimensions, and the initial state s(0) of the CTLUD model can be set randomly. In this paper, we use some randomly chosen integers as the initial state of the CTLUD model in all examples. Besides, the corresponding experiment results are shown.

5.1. Example 1. Firstly, in order to test the performance of the proposed CTLUD model on a simple LU decomposition problem, the following 2-dimensional time-varying matrix is considered:

A(t) = [10 + sin(t)   cos(2t); 3   2 sin(t) + 10].   (31)

In this example, λ is set as 3 and the computation time is limited to [0, t_f] s, where t_f = 10 denotes the final computation time. In addition, the initial state of s(t) is set as

s(0) = [2 1 2 3]^T ∈ R^{4×1}.   (32)

In order to perform the LU decomposition of A(t), the proposed CTLUD model is employed. The corresponding experiment results are shown in Figure 2. Figure 2(a) shows the trajectory of ‖E(t)‖F, where ‖E(t)‖F denotes the Frobenius norm of E(t). Evidently, ‖E(t)‖F converges to 0 in a short time; at about 2 s, ‖E(t)‖F is already near 0. Therefore, a conclusion is obtained that the 2-dimensional time-varying LU decomposition problem (i.e., Example 1) is solved effectively by the proposed CTLUD model.

What is more, Figure 2(b) shows the trajectories of e_{11}(t), e_{21}(t), e_{12}(t), and e_{22}(t), where e_{ij}(t) denotes the element of E(t) in row i and column j.
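For reference (this short derivation is added here and is not part of the original paper), the theoretical factors tracked in Figures 2(c) and 2(d) can be written in closed form by equating A(t) = L(t)U(t) entrywise for the 2-dimensional matrix (31):

L^{*}(t) = \begin{bmatrix} 1 & 0 \\ 3/(10+\sin t) & 1 \end{bmatrix}, \qquad
U^{*}(t) = \begin{bmatrix} 10+\sin t & \cos 2t \\ 0 & 2\sin t + 10 - 3\cos 2t/(10+\sin t) \end{bmatrix}.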
Figure 2: Experiment results synthesized by CTLUD model with λ = 3 and s(0) = [2 1 2 3]^T for solving the 2-dimensional time-varying LU decomposition problem in Example 1. (a) Frobenius-norm trajectory of error matrix E(t). (b) Trajectories of e_{11}(t), e_{21}(t), e_{12}(t), and e_{22}(t). (c) Trajectories of l*_{11}(t), l*_{21}(t), l*_{12}(t), l*_{22}(t), l_{11}(t), l_{21}(t), l_{12}(t), and l_{22}(t). (d) Trajectories of u*_{11}(t), u*_{21}(t), u*_{12}(t), u*_{22}(t), u_{11}(t), u_{21}(t), u_{12}(t), and u_{22}(t).
It can be evidently seen that all elements of E(t) converge to 0 in a short time, no matter whether e_{ij}(0) > 0 or e_{ij}(0) < 0.

In Figures 2(c) and 2(d), we show the element trajectories of L(t) and U(t), where l*_{ij}(t) and u*_{ij}(t) denote the corresponding theoretical values of L(t) and U(t), respectively. As seen from Figures 2(c) and 2(d), all elements of L(t) and U(t) track their theoretical values. In the other examples, the element trajectories of L(t) and U(t) are not shown because the results are similar to those of Example 1 except for the convergence speed.

In this example, because the design parameter λ is set as 3, the convergence speed is seemingly not high enough, but the convergence process is shown clearly. The experiment results with bigger λ are shown in Examples 2 and 3.

5.2. Example 2. Furthermore, in order to verify the validity of the proposed CTLUD model for the common LU decomposition problem, we consider a 3-dimensional diagonally dominant time-varying matrix.
Figure 3: Experiment results synthesized by CTLUD model with λ = 30 and s(0) = [0 1 0 1 2 3 4 5 6]^T for solving the 3-dimensional time-varying LU decomposition problem in Example 2. (a) Frobenius-norm trajectory of error matrix E(t). (b) Trajectories of e_{11}(t), e_{21}(t), e_{31}(t), e_{12}(t), e_{22}(t), e_{32}(t), e_{13}(t), e_{23}(t), and e_{33}(t).
Figure 4: Experiment results synthesized by CTLUD model with λ = 10 and s(0) = [1 2 ··· 21 1 2 ··· 28]^T for solving the 7-dimensional time-varying LU decomposition problem in Example 3. (a) Frobenius-norm trajectory of error matrix E(t). (b) Trajectories of e_{11}(t), e_{21}(t), ..., e_{67}(t), and e_{77}(t).
Figure 5: Residual errors of three discrete-time models with g = 0.03 and s(0) = [2 1 2 3]^T for solving the 2-dimensional time-varying LU decomposition problem in Example 1. (a) When η = 0.01 s. (b) When η = 0.001 s.
6. Numerical Experiments and Results of Discrete-Time Models

In this section, in order to verify the effectiveness of the three proposed discrete-time models, we use the EDTLUD, 4IZLUD, and 8IZLUD models to solve the three time-varying LU decomposition problems of the last section, respectively. Besides, the numerical results verify the statements of Theorem 4.

For Example 1, E(t_k) = A(t_k) − L(t_k)U(t_k) is the error matrix, and the residual error is defined as ‖E(t_k)‖F. In this paper, g = ηλ is termed the step length. In all experiments involved in this section, g is set as 0.03. The results synthesized by the three proposed discrete-time models with s(0) = [2 1 2 3]^T, s(t_k) = 0 (k < 0), and t_f = 10 are shown in Figure 5. Specifically, Figures 5(a) and 5(b) show the results with η = 0.01 and η = 0.001, respectively.
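The setting of Figure 5(a) can be reproduced with the sketches introduced earlier (again an illustration under assumed helper names, not the authors' code); note how λ follows from the fixed step length g = ηλ = 0.03.

import numpy as np

# Example 1 driver for the 4IZLUD model with eta = 0.01 and g = 0.03, reusing
# four_izlud, build_H, build_K, A_of_t, and A_dot_of_t from the sketches above.
g, eta, tf, n = 0.03, 0.01, 10.0, 2
lam = g / eta                                       # lambda = g / eta = 3
H, K = build_H(n), build_K(n)
s0 = np.array([2.0, 1.0, 2.0, 3.0])                 # s(0) as in (32)
L, U = four_izlud(A_of_t, A_dot_of_t, s0, n, eta, lam, tf, H, K)
print(np.linalg.norm(A_of_t(tf) - L @ U, 'fro'))    # residual error at t = tf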
Figure 6: Residual errors of three discrete-time models with g = 0.03 and s(0) = [0 1 0 1 2 3 4 5 6]^T for solving the 3-dimensional time-varying LU decomposition problem in Example 2. (a) When η = 0.01 s. (b) When η = 0.001 s.
Figure 7: Residual errors of three discrete-time models with g = 0.03 and s(0) = [1 2 ··· 21 1 2 ··· 28]^T for solving the 7-dimensional time-varying LU decomposition problem in Example 3. (a) When η = 0.01 s. (b) When η = 0.001 s.
Evidently, with different values of η, the residual errors of the three models converge to near 0 as time goes by, but with different precision. Compared with the EDTLUD and 4IZLUD models, the 8IZLUD model has the highest precision for solving the time-varying LU decomposition problem.

It is worth mentioning that, as the value of η decreases by 10 times, the maximal steady-state residual errors of the three models reduce by 10², 10³, and 10⁵ times, respectively, which coincides with Theorem 4.

For Example 2 and Example 3, the numerical experiment results synthesized by the three models with s(0) = [0 1 0 1 2 3 4 5 6]^T, s(0) = [1 2 ··· 21 1 2 ··· 28]^T, and t_f = 10 are shown in Figures 6 and 7, respectively. From Figures 6 and 7, we can obtain a conclusion similar to the one above; that is, the 8IZLUD model has the highest precision among the three discrete-time models.
[10] O. Jane, H. G. Ilk, and E. Elbase, "A secure and robust watermarking algorithm based on the combination of DWT, SVD, and LU decomposition with Arnold's cat map approach," in Proceedings of the International Conference on Electrical and Electronics Engineering, pp. 306–310, Bursa, Turkey, November 2013.
[11] K. Wang, X.-F. Gong, and Q.-H. Lin, "Complex non-orthogonal joint diagonalization based on LU and LQ decompositions," Latent Variable Analysis and Signal Separation, vol. 7191, pp. 50–57, 2012.
[12] K. He, S. X.-D. Tan, H. Wang, and G. Shi, "GPU-accelerated parallel sparse LU factorization method for fast circuit analysis," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, vol. 24, no. 3, pp. 1140–1150, 2016.
[13] R. Kastner, "An LU-spatial decomposition method," in Proceedings of the Antennas and Propagation Society International Symposium, pp. 864–867, Ann Arbor, MI, USA, June 1993.
[14] T. Yigal and K. Raphael, "A recursive LU decomposition algorithm," Microwave and Optical Technology Letters, vol. 7, no. 2, pp. 48–52, 1994.
[15] M. I. Bueno and C. R. Johnson, "Minimum deviation, quasi-LU factorization of nonsingular matrices," Linear Algebra and its Applications, vol. 427, no. 1, pp. 99–118, 2007.
[16] L. Xiao, "A finite-time convergent Zhang neural network and its application to real-time matrix square root finding," Neural Computing and Applications, vol. 31, no. S2, pp. 793–800, 2019.
[17] D. Guo, Z. Nie, and L. Yan, "Novel discrete-time Zhang neural network for time-varying matrix inversion," IEEE Transactions on Systems, Man, and Cybernetics: Systems, vol. 47, no. 8, pp. 2301–2310, 2017.
[18] Y. Zhang and C. Yi, Zhang Neural Networks and Neural-Dynamic Method, Nova Science Press, New York, NY, USA, 2011.
[19] S. Kawakami and T. Ohtsuki, "Localization using iterative angle of arrival method sharing snapshots of coherent subarrays," EURASIP Journal on Advances in Signal Processing, vol. 46, pp. 1–7, 2011.
[20] A. Noroozi, A. H. Oveis, S. M. Hosseini, and M. A. Sebt, "Improved algebraic solution for source localization from TDOA and FDOA measurements," IEEE Wireless Communications Letters, vol. 7, no. 3, pp. 352–355, 2018.
[21] A. G. Dempster and E. Cetin, "Interference localization for satellite navigation systems," Proceedings of the IEEE, vol. 104, no. 6, pp. 1318–1326, 2016.
[22] S. Qiao, X.-Z. Wang, and Y. Wei, "Two finite-time convergent Zhang neural network models for time-varying complex matrix Drazin inverse," Linear Algebra and its Applications, vol. 542, pp. 101–117, 2018.
[23] L. Xiao, B. Liao, S. Li, and K. Chen, "Nonlinear recurrent neural networks for finite-time solution of general time-varying linear matrix equations," Neural Networks, vol. 98, pp. 102–113, 2017.
[24] D. Guo, Z. Nie, and L. Yan, "Theoretical analysis, numerical verification and geometrical representation of new three-step DTZD algorithm for time-varying nonlinear equations solving," Neurocomputing, vol. 214, pp. 516–526, 2016.
[25] N. Liu and S. Qin, "A novel neurodynamic approach to constrained complex-variable pseudoconvex optimization," IEEE Transactions on Cybernetics, vol. 49, no. 11, pp. 3946–3956, 2019.
[26] S. Qin and X. Xue, "A two-layer recurrent neural network for nonsmooth convex optimization problems," IEEE Transactions on Neural Networks and Learning Systems, vol. 26, no. 6, pp. 1149–1160, 2015.
[27] S. Qin, X. Yang, X. Xue, and J. Song, "A one-layer recurrent neural network for pseudoconvex optimization problems with equality and inequality constraints," IEEE Transactions on Cybernetics, vol. 47, no. 10, pp. 3063–3074, 2017.
[28] N. Liu and S. Qin, "A neurodynamic approach to nonlinear optimization problems with affine equality and convex inequality constraints," Neural Networks, vol. 109, pp. 147–158, 2019.
[29] L. Jin, S. Li, and B. Hu, "RNN models for dynamic matrix inversion: a control-theoretical perspective," IEEE Transactions on Industrial Informatics, vol. 14, no. 1, pp. 189–199, 2018.
[30] J. Li, Y. Zhang, and M. Mao, "General square-pattern discretization formulas via second-order derivative elimination for zeroing neural network illustrated by future optimization," IEEE Transactions on Neural Networks and Learning Systems, vol. 30, no. 3, pp. 891–901, 2019.
[31] L. Jin, S. Li, H. Wang, and Z. Zhang, "Nonconvex projection activated zeroing neurodynamic models for time-varying matrix pseudoinversion with accelerated finite-time convergence," Applied Soft Computing, vol. 62, pp. 840–850, 2018.
[32] R. A. Horn and C. R. Johnson, Topics in Matrix Analysis, Cambridge University Press, Cambridge, UK, 1991.
[33] Y. Zhang, L. Ming, H. Huang, J. Chen, and Z. Li, "Time-varying Schur decomposition via Zhang neural dynamics," Neurocomputing, vol. 419, pp. 251–258, 2020.
[34] J. Chen and Y. Zhang, "Online singular value decomposition of time-varying matrix via zeroing neural dynamics," Neurocomputing, vol. 383, pp. 314–323, 2020.
[35] Y. Zhang, D. Jiang, and J. Wang, "A recurrent neural network for solving Sylvester equation with time-varying coefficients," IEEE Transactions on Neural Networks, vol. 13, no. 5, pp. 1053–1063, 2002.
[36] Y. Zhang, X. Liu, Y. Ling, M. Yang, and H. Huang, "Continuous and discrete zeroing dynamics models using JMP function array and design formula for solving time-varying Sylvester-transpose matrix inequality," Numerical Algorithms, vol. 86, no. 4, pp. 1591–1614, 2021.
[37] B. Liao and Y. Zhang, "From different ZFs to different ZNN models accelerated via Li activation functions to finite-time convergence for time-varying matrix pseudoinversion," Neurocomputing, vol. 133, pp. 512–522, 2014.
[38] Y. Zhang, Y. Yang, N. Tan, and B. Cai, "Zhang neural network solving for time-varying full-rank matrix Moore-Penrose inverse," Computing, vol. 92, no. 2, pp. 97–121, 2011.
[39] Y. Zhang, M. Yang, H. Huang, M. Xiao, and H. Hu, "New discrete-solution model for solving future different-level linear inequality and equality with robot manipulator control," IEEE Transactions on Industrial Informatics, vol. 15, no. 4, pp. 1975–1984, 2019.
[40] Y. Zhang, L. He, C. Hu, J. Guo, J. Li, and Y. Shi, "General four-step discrete-time zeroing and derivative dynamics applied to time-varying nonlinear optimization," Journal of Computational and Applied Mathematics, vol. 347, pp. 314–329, 2019.
[41] C. Hu, X. Kang, and Y. Zhang, "Three-step general discrete-time Zhang neural network design and application to time-variant matrix inversion," Neurocomputing, vol. 306, pp. 108–118, 2018.
[42] Y. Zhang, M. Zhu, C. Hu, J. Li, and M. Yang, "Euler-precision general form of Zhang et al. discretization (ZeaD) formulas, derivation, and numerical experiments," in Proceedings of the Chinese Control and Decision Conference, pp. 6262–6267, Shenyang, China, June 2018.