Article

Finite-Time H∞ Control for Time-Delay Markovian Jump Systems with Partially Unknown Transition Rate via General Controllers

1 College of Mathematics and Systems Science, Shandong University of Science and Technology, Qingdao 266590, China
2 Department of Electrical Engineering and Information Technology, Shandong University of Science and Technology, Jinan 250031, China
3 Department of Fundamental Courses, Shandong University of Science and Technology, Jinan 250031, China
* Author to whom correspondence should be addressed.
These authors contributed equally to this work.
Entropy 2023, 25(3), 402; https://doi.org/10.3390/e25030402
Submission received: 30 January 2023 / Revised: 18 February 2023 / Accepted: 20 February 2023 / Published: 22 February 2023
(This article belongs to the Topic Advances in Nonlinear Dynamics: Methods and Applications)

Abstract

This paper deals with the problems of finite-time boundedness (FTB) and H∞ FTB for time-delay Markovian jump systems with a partially unknown transition rate. First, sufficient conditions ensuring the FTB and H∞ FTB of the systems are provided in terms of linear matrix inequalities (LMIs). A new type of partially delay-dependent controller (PDDC) is designed so that the resulting closed-loop systems are finite-time bounded and satisfy a given H∞ disturbance attenuation level. The PDDC contains both non-delayed and delayed states, which do not occur simultaneously; which of the two occurs is governed by the probability distribution of a Bernoulli variable. Furthermore, the PDDC is extended to two other cases: one does not contain the Bernoulli variable, and the other experiences a disordering phenomenon. Finally, three numerical examples are used to show the effectiveness of the proposed approaches.

1. Introduction

In actual industrial processes, the transient performance of systems is sometimes particularly important. For example, aircraft control systems require that the states not exceed given limits [1]; the temperature of a chemical reaction needs to be strictly controlled within a certain range [2]; and the angular position of a robot arm should be limited to a particular scope [3]. In recent years, an increasing number of researchers have focused on the finite-time stability (FTS) problem. Different from traditional Lyapunov stability [4,5,6,7], FTS concerns the transient performance of systems over a finite time interval. In fact, systems that are stable in the Lyapunov sense may still exhibit very poor transient performance, such as severe oscillation. The definition of FTS (or short-time stability [8]) was first proposed by Kamenkov in [9]. According to FTS, the system state remains below a prescribed bound over a given time interval whenever the initial state is norm bounded. The authors of [10] extended FTS to the concept of FTB by taking external disturbances into account. The studies of FTS and FTB have been further developed with the evolution of LMI theory [11,12,13,14,15,16,17,18,19]. For example, in [11], sufficient conditions for the FTB of closed-loop systems were given in the form of LMIs by designing a dynamic output feedback controller. Meanwhile, finite-time H∞ control/filtering problems [20,21,22,23,24] have received much attention as a means of reducing the influence of external disturbances on a system.
On the other hand, abrupt changes are often encountered in industrial processes due to component faults, failures, changes in the interconnections between subsystems, sudden environmental disturbances [25], and so on. The occurrence of these situations causes the structure and parameters of a system to switch among various subsystems, as in networked control systems or power electronics. The Markov jump system [26] is used to deal with this kind of practical system via the transition probabilities of the jump process. In recent decades, many researchers have studied such systems; see, e.g., [27,28,29,30,31,32]. Moreover, some of these results have been applied in many engineering fields, such as power systems [33], manufacturing systems [34], and communication systems [35]. Although there have been many research achievements on Markov jump systems, most of them assume that the transition probabilities are fully known; however, precise transition probabilities are difficult to obtain in practice due to instrument and measurement limitations. Therefore, further research on Markov jump systems with a partially unknown transition rate is vital and necessary. Readers may refer to [36,37,38].
Time delay, as an inevitable phenomenon, widely exists in communication [39], the chemical industry [40], transportation [41], and other systems. The existence of a time delay may degrade the performance of a system, destroy its balance and stability, and even produce chaotic behavior. As a result, the evolution of such systems depends not only on the present state but also on past states [42,43,44,45,46]. Studies of the FTS of various time-delay systems [47,48,49,50] mostly consider controllers that either contain or do not contain a time delay. In practice, however, data transmissions with or without a delay occur randomly, which motivates the design of a controller that contains both delayed and non-delayed states occurring non-simultaneously according to a probability distribution.
In this paper, we handle the FTB and H∞ FTB problems of time-delay Markov jump systems with a partially unknown transition rate via some general PDDCs. The main contributions are as follows: (1) Sufficient conditions for the FTB of the considered system are given in terms of LMIs. (2) A new kind of PDDC is designed to make the resulting closed-loop system H∞ FTB. The PDDC contains both non-delayed and delayed states, which do not occur simultaneously. In comparison with conventional state feedback controllers [47,48,49,51], the probability distribution plays an important role in the PDDC. (3) Different from the existing results of [46], the PDDC is extended to two new cases: one does not contain the Bernoulli variable, and the other experiences a disordering phenomenon.
The rest of this paper is arranged as follows: Section 2 presents the problem statement and preliminaries. Section 3 gives the main results on the FTB and H∞ FTB of the considered system, formulated in terms of LMIs, via the design of PDDCs. Three examples are given in Section 4 to show the effectiveness of the obtained results. Conclusions are drawn in Section 5.
Notation: λ_max(Q) (λ_min(Q)) denotes the maximum (minimum) eigenvalue of a real symmetric matrix Q; E[·] is the mathematical expectation operator; the superscript T denotes matrix transposition. In matrices, diag{·} stands for a block-diagonal matrix, the symbol ∗ denotes the term induced by symmetry, and (P)🟉 = P + P^T. F denotes the σ-algebra of subsets of the sample space, and Pr denotes probability.

2. Problem Statement and Preliminaries

Consider a linear time-delay Itô stochastic Markovian switching system
$$
\left\{
\begin{aligned}
dx(t) &= \big[S(\sigma_t)x(t) + S_\tau(\sigma_t)x(t-\tau) + L(\sigma_t)u(t) + G(\sigma_t)v(t)\big]\,dt \\
&\quad + \big[U(\sigma_t)x(t) + U_\tau(\sigma_t)x(t-\tau) + J(\sigma_t)u(t) + F(\sigma_t)v(t)\big]\,d\omega(t), \\
z(t) &= H(\sigma_t)x(t) + H_\tau(\sigma_t)x(t-\tau) + D(\sigma_t)v(t), \quad t \in [0, \tilde{T}], \\
x(t) &= \psi(t), \quad \sigma_t = \sigma_0, \quad t \in [-\tau, 0],
\end{aligned}
\right.
$$
where x(t) ∈ R^n is the system state, u(t) ∈ R^m is the control input, and z(t) ∈ R^q is the control output. S(σ_t), S_τ(σ_t), L(σ_t), G(σ_t), U(σ_t), U_τ(σ_t), J(σ_t), F(σ_t), H(σ_t), H_τ(σ_t), and D(σ_t) are constant matrices; for simplicity, when σ_t = i they are denoted as S_i, S_τi, L_i, G_i, U_i, U_τi, J_i, F_i, H_i, H_τi, and D_i. The time delay is τ ≥ 0. The continuous vector-valued function ψ(t) is defined on [−τ, 0]; ω(t) is a standard one-dimensional Wiener process defined on the probability space (Ω, F, P) satisfying E[dω(t)] = 0 and E[dω²(t)] = dt; and v(t) is the external disturbance satisfying
$$
\int_0^{\tilde{T}} v^T(s) v(s)\,ds < d^2, \quad d > 0.
$$
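As a quick numerical illustration of this energy bound (a sketch, not part of the original analysis), the snippet below checks that the disturbance v(t) = 1/(1 + t²) with d = 1, used later in Example 1 and treated here as a scalar signal, satisfies the constraint over the horizon T̃ = 10.

```python
# Sketch: verify the disturbance energy bound (2) for v(t) = 1/(1 + t^2), d = 1
# (the disturbance and bound used in Example 1), treating v as a scalar signal.
from scipy.integrate import quad

T_tilde, d = 10.0, 1.0                      # horizon and bound from Example 1

v = lambda t: 1.0 / (1.0 + t**2)            # disturbance from Example 1 (scalar form)
energy, _ = quad(lambda t: v(t)**2, 0.0, T_tilde)

print(f"disturbance energy on [0, {T_tilde}] = {energy:.4f}  (< d^2 = {d**2})")
```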
The transition rate of the Markovian process {σ_t, t ≥ 0} is given by
$$
\Pr(\sigma_{t+\Delta t} = j \mid \sigma_t = i) =
\begin{cases}
\pi_{ij}\,\Delta t + o(\Delta t), & i \neq j, \\
1 + \pi_{ii}\,\Delta t + o(\Delta t), & i = j,
\end{cases}
$$
where {σ_t, t ≥ 0} takes values in S = {1, 2, …, N}, Δt > 0, and o(Δt) satisfies lim_{Δt→0} o(Δt)/Δt = 0. Here π_ij ≥ 0 (i ≠ j, i, j ∈ S) is the transition rate from mode i at time t to mode j at time t + Δt, and π_ii = −∑_{j≠i} π_ij. All of the transition rates π_ij, i, j ∈ S, can be collected into the following transition rate matrix
$$
\Pi = \begin{bmatrix}
\pi_{11} & \pi_{12} & \cdots & \pi_{1N} \\
\pi_{21} & \pi_{22} & \cdots & \pi_{2N} \\
\vdots & \vdots & \ddots & \vdots \\
\pi_{N1} & \pi_{N2} & \cdots & \pi_{NN}
\end{bmatrix}.
$$
Assume that the transition rate is partially unknown; for example, consider the 2 × 2 transition rate matrix
$$
\Pi_1 = \begin{bmatrix} \pi_{11} & \pi_{12} \\ ? & ? \end{bmatrix},
$$
where "?" denotes an unknown element and π_ij denotes a known one. For each i ∈ S, define S = L_k^i ∪ L_uk^i, where
$$
L_k^i = \{\, j : \pi_{ij} \ \text{is known} \,\}, \qquad L_{uk}^i = \{\, j : \pi_{ij} \ \text{is unknown} \,\}.
$$
If L_k^i is non-empty, it is described as
$$
L_k^i = \{ k_1^i, k_2^i, \ldots, k_m^i \}, \qquad 0 < m \leq N,
$$
where k_m^i ∈ S denotes the mth known element in the ith row of the matrix Π.
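The row-wise partition of Π into known and unknown index sets is straightforward to mechanize. The following Python helper is an illustrative sketch (not from the paper); unknown rates are encoded as NaN, and the first row uses the values of the matrix Π from Example 1.

```python
# Sketch: build the known / unknown index sets L_k^i and L_uk^i for each row
# of a transition rate matrix whose unknown ("?") entries are stored as NaN.
import numpy as np

nan = float("nan")
Pi1 = np.array([[-0.5, 0.5],    # row 1: both rates known (values from Example 1)
                [ nan, nan]])   # row 2: both rates unknown ("?")

def index_sets(Pi):
    known, unknown = {}, {}
    for i, row in enumerate(Pi):
        known[i]   = [j for j, p in enumerate(row) if not np.isnan(p)]
        unknown[i] = [j for j, p in enumerate(row) if np.isnan(p)]
    return known, unknown

L_k, L_uk = index_sets(Pi1)
print(L_k)    # {0: [0, 1], 1: []}
print(L_uk)   # {0: [], 1: [0, 1]}
```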
Definition 1 (FTB).
For given scalars c_2 > c_1 > 0, T̃ > 0 and matrices R_i > 0 (i ∈ S), system (1) with u(t) = 0 is FTB with respect to (c_1, c_2, T̃, R_i, d) if
$$
E[x^T(t_1) R_i x(t_1)] \leq c_1 \;\Longrightarrow\; E[x^T(t_2) R_i x(t_2)] < c_2,
$$
and (2) holds, where t_1 ∈ [−τ, 0], t_2 ∈ [0, T̃].
Remark 1.
FTB reduces to FTS with respect to (c_1, c_2, T̃, R_i) when v(t) = 0. FTB/FTS can be used in practical problems such as chemical reaction processes, electronic circuit systems, and medicine. For example, normal systolic blood pressure is 90–140 mmHg; if a person's systolic blood pressure exceeds 140 mmHg, they suffer from high blood pressure and must take blood pressure medicine.
Definition 2 (H∞ FTB).
For a given scalar γ > 0, system (1) with u(t) = 0 is H∞ FTB with respect to (c_1, c_2, T̃, R_i, d, γ) if system (1) is FTB and, under the zero initial condition, for any non-zero disturbance v(t), the control output z(t) satisfies
$$
E\Big[\int_0^{\tilde{T}} z^T(t) z(t)\,dt\Big] < \gamma^2 E\Big[\int_0^{\tilde{T}} v^T(t) v(t)\,dt\Big].
$$
When the control problem is considered, the following definition is needed.
Definition 3 (H∞ FTB stabilization).
System (1) is finite-time H∞ stabilizable if there exists a controller u(t) such that the resulting closed-loop system is H∞ FTB.
Lemma 1
(Gronwall–Bellman inequality [52,53]). Let g ( t ) be a nonnegative continuous function. If there are positive constants r , q such that
$$
g(t) \leq r + q \int_0^t g(s)\,ds, \quad 0 \leq t \leq \tilde{T},
$$
then
$$
g(t) \leq r \exp(qt), \quad 0 \leq t \leq \tilde{T}.
$$
Remark 2.
Lemma 1 can be reformulated with sharp inequalities. The proof is given in Appendix A.
Lemma 2
(Schur’s complement lemma [54]). For a real matrix H, a real symmetric matrix S, and a positive-definite matrix U, the following inequalities are equivalent:
$$
S + H U^{-1} H^T < 0
$$
and
$$
\begin{bmatrix} S & H \\ H^T & -U \end{bmatrix} < 0.
$$
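Lemma 2 can be sanity-checked numerically. The sketch below (illustrative only; the test matrices are randomly generated and chosen so that U is positive definite) confirms that the Schur-complement form and the block-matrix form are negative definite together.

```python
# Sketch: numerically check the Schur-complement equivalence of Lemma 2
# on a randomly generated example (S symmetric, U positive definite).
import numpy as np

rng = np.random.default_rng(0)
n = 3
H = rng.standard_normal((n, n))
U = H @ H.T + n * np.eye(n)              # positive definite
S = -(H @ H.T) - 5.0 * np.eye(n)         # symmetric and "negative enough"

schur_form = S + H @ np.linalg.inv(U) @ H.T
block_form = np.block([[S, H], [H.T, -U]])

neg_def = lambda M: np.all(np.linalg.eigvalsh(M) < 0)
print(neg_def(schur_form), neg_def(block_form))   # both True or both False
```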

3. Main Results

Firstly, we discuss the FTB problem for system (1) (when u ( t ) = 0 ) in this section.
Theorem 1.
System (1) (with u(t) = 0) is FTB with respect to (c_1, c_2, T̃, R_i, d) if, for a real scalar η ≥ 0, there exist scalars λ_i1 > 0, λ_i2 > 0 and symmetric matrices P_i > 0, Q_i > 0, O_i > 0 satisfying
$$
\begin{bmatrix}
\Psi_{i1} & P_i S_{\tau i} & P_i G_i & U_i^T \\
* & -O_i & 0 & U_{\tau i}^T \\
* & * & -Q_i & F_i^T \\
* & * & * & -P_i^{-1}
\end{bmatrix} < 0,
$$
$$
O_i < R_i,
$$
$$
\lambda_{i1} I < \bar{P}_i < \lambda_{i2} I,
$$
$$
c_1(\lambda_{i2} + \tau) + \lambda_{\max}(Q_i)\, d^2 < c_2 \exp(-\eta \tilde{T})\, \lambda_{i1},
$$
where
$$
\begin{aligned}
\Psi_{i1} &= \sum_{j \in L_{uk}^i} \pi_{ij}\big[(P_i S_i)^{🟉} + O_i + P_j\big] + \zeta_k^i + (1 + \pi_k^i)\big[(P_i S_i)^{🟉} + O_i\big] - \eta P_i, \\
\zeta_k^i &= \sum_{j \in L_k^i} \pi_{ij} P_j, \qquad \pi_k^i = \sum_{j \in L_k^i} \pi_{ij}, \qquad \bar{P}_i = R_i^{-\frac{1}{2}} P_i R_i^{-\frac{1}{2}}.
\end{aligned}
$$
Proof. 
For system (1), we choose a stochastic Lyapunov functional as
$$
V(x_t, \sigma_t) = x^T(t) P(\sigma_t) x(t) + \int_{t-\tau}^{t} x^T(s) O(\sigma_t) x(s)\,ds.
$$
For each σ_t = i ∈ S, let 𝓛 be the differential generating operator of system (1). According to the Itô formula, it follows that
$$
\begin{aligned}
\mathcal{L}V(x_t, \sigma_t = i) ={}& x^T(t)\Big[(P_i S_i)^{🟉} + \sum_{j=1}^{N} \pi_{ij} P_j + O_i\Big] x(t) + \Xi^T P_i\, \Xi - x^T(t-\tau) O_i x(t-\tau) \\
& + 2 x^T(t) P_i S_{\tau i} x(t-\tau) + 2 x^T(t) P_i G_i v(t) \\
={}& x^T(t)\Big[(P_i S_i)^{🟉} + O_i + \sum_{j \in L_{uk}^i} \pi_{ij} P_j + \zeta_k^i + \sum_{j \in L_k^i} \pi_{ij}\big((P_i S_i)^{🟉} + O_i\big)\Big] x(t) \\
& + 2 x^T(t) P_i \big[S_{\tau i} x(t-\tau) + G_i v(t)\big] + \Xi^T P_i\, \Xi - x^T(t-\tau) O_i x(t-\tau) \\
={}& x^T(t)\Big[(1 + \pi_k^i)\big((P_i S_i)^{🟉} + O_i\big) + \zeta_k^i + \sum_{j \in L_{uk}^i} \pi_{ij}\big((P_i S_i)^{🟉} + O_i + P_j\big)\Big] x(t) \\
& + 2 x^T(t) P_i \big[S_{\tau i} x(t-\tau) + G_i v(t)\big] + \Xi^T P_i\, \Xi - x^T(t-\tau) O_i x(t-\tau),
\end{aligned}
$$
where Ξ = F_i v(t) + U_i x(t) + U_τi x(t − τ).
From (8) and (13), it is easy to obtain
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V_1(x_t, \sigma_t = i) + v^T(t) Q_i v(t), \quad t \in [0, \tilde{T}],
$$
where V_1(x_t, σ_t = i) = x^T(t) P_i x(t), so
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V(x_t, \sigma_t = i) + \lambda_{\max}(Q_i)\, v^T(t) v(t).
$$
Integrating both sides of (14) from 0 to t (t ∈ [0, T̃]) yields
$$
V(x_t, \sigma_t = i) - V(x_0, \sigma_0) < \eta \int_0^t V(x_s, \sigma_s)\,ds + \lambda_{\max}(Q_i) \int_0^t v^T(s) v(s)\,ds.
$$
Taking the mathematical expectation on both sides of (15), the following is concluded
$$
E[V(x_t, \sigma_t = i)] - E[V(x_0, \sigma_0)] < \eta\, E\Big[\int_0^t V(x_s, \sigma_s)\,ds\Big] + \lambda_{\max}(Q_i)\, E\Big[\int_0^t v^T(s) v(s)\,ds\Big],
$$
i.e.,
$$
E[V(x_t, \sigma_t = i)] < E[V(x_0, \sigma_0)] + \eta \int_0^t E[V(x_s, \sigma_s)]\,ds + \lambda_{\max}(Q_i)\, E\Big[\int_0^t v^T(s) v(s)\,ds\Big].
$$
Applying Lemma 1 or the Gronwall–Bellman-type inequality for the three functions [55] to (16) yields
$$
E[V(x_t, \sigma_t = i)] < E[V(x_0, \sigma_0)] \exp(\eta t) + \lambda_{\max}(Q_i)\, E\Big[\int_0^t v^T(s) v(s)\,ds\Big] \exp(\eta t).
$$
Set λ̆_i = min_{i∈S} λ_min(P̄_i) and λ̂_i = max_{i∈S} λ_max(P̄_i). Together with (10), we have
$$
E[V(x_t, \sigma_t = i)] = E\Big[\int_{t-\tau}^{t} x^T(s) O_i x(s)\,ds\Big] + E[V_1(x_t, \sigma_t = i)] \geq E[V_1(x_t, \sigma_t = i)] \geq \breve{\lambda}_i\, E[x^T(t) R_i x(t)] \geq \lambda_{i1}\, E[x^T(t) R_i x(t)],
$$
$$
E[V(x_0, \sigma_0 = i)] \exp(\eta t) = E\big[x^T(0) R_i^{\frac{1}{2}} \bar{P}_i R_i^{\frac{1}{2}} x(0)\big] \exp(\eta t) + E\Big[\int_{-\tau}^{0} x^T(s) O_i x(s)\,ds\Big] \exp(\eta t) \leq c_1(\hat{\lambda}_i + \tau)\exp(\eta \tilde{T}) \leq c_1(\lambda_{i2} + \tau)\exp(\eta \tilde{T}),
$$
$$
\lambda_{\max}(Q_i)\, E\Big[\int_0^t v^T(s) v(s)\,ds\Big] \exp(\eta t) < \lambda_{\max}(Q_i)\, d^2 \exp(\eta \tilde{T}).
$$
From (17)–(20), it is derived that
$$
E[x^T(t) R_i x(t)] \leq \frac{\exp(\eta \tilde{T})\big[(\tau + \lambda_{i2}) c_1 + \lambda_{\max}(Q_i)\, d^2\big]}{\lambda_{i1}}.
$$
Thus, E[x^T(t) R_i x(t)] < c_2 holds for all t ∈ [0, T̃] provided that
$$
\big[c_1(\lambda_{i2} + \tau) + \lambda_{\max}(Q_i)\, d^2\big] \exp(\eta \tilde{T})\, \lambda_{i1}^{-1} < c_2,
$$
which is (11). The proof is complete. □
Remark 3.
If F i = G i = 0 , then Theorem 1 is reduced to Theorem 1 in [29].
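To illustrate how conditions of the type appearing in Theorem 1 are checked in practice, the sketch below poses a drastically simplified analogue, a single mode with no delay and no diffusion term, as an LMI feasibility problem in CVXPY. The matrices, the scalar η, and the feasibility margins are placeholders; the full conditions (8)–(11), with all modes, the delay terms, and the partially unknown rates, would be assembled in the same way with cp.bmat.

```python
# Sketch (assumptions: single mode, no delay, no diffusion) of an LMI
# feasibility check in the spirit of Theorem 1, using CVXPY. Placeholder data.
import cvxpy as cp
import numpy as np

S = np.array([[-3.0, 0.3], [1.0, -0.5]])   # assumed drift matrix (stable)
G = np.array([[0.1], [0.2]])               # assumed disturbance input matrix
eta = 0.05                                  # fixed scalar, as in Theorem 1
n, m = S.shape[0], G.shape[1]

P = cp.Variable((n, n), symmetric=True)
Q = cp.Variable((m, m), symmetric=True)

# Simplified analogue of condition (8): [[S^T P + P S - eta P, P G], [G^T P, -Q]] < 0.
lmi = cp.bmat([[S.T @ P + P @ S - eta * P, P @ G],
               [G.T @ P,                   -Q  ]])

eps = 1e-6
prob = cp.Problem(cp.Minimize(0),
                  [P >> eps * np.eye(n), Q >> eps * np.eye(m),
                   lmi << -eps * np.eye(n + m)])
prob.solve(solver=cp.SCS)
print(prob.status)   # 'optimal' means the simplified LMI system is feasible
```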
In the following, we propose three novel types of partially delay-dependent controllers. One of the controllers is
$$
u(t) = (1 - \delta(t))\, K_\tau(\sigma_t)\, x(t-\tau) + \delta(t)\, K(\sigma_t)\, x(t),
$$
where K_τ(σ_t) and K(σ_t) represent the control gains, and δ(t) is a Bernoulli variable defined as
$$
\delta(t) = \begin{cases} 1, & \text{if } x(t) \text{ is available}, \\ 0, & \text{if } x(t-\tau) \text{ is available}, \end{cases}
$$
and satisfies
$$
\Pr\{\delta(t) = 1\} = \delta, \qquad \Pr\{\delta(t) = 0\} = 1 - \delta.
$$
Furthermore,
$$
E\big[(\delta(t) - \delta)^2\big] = \delta(1 - \delta) = \beta^2, \qquad E[\delta(t) - \delta] = 0.
$$
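The random switching in controller (22) is easy to visualize by sampling δ(t). The following sketch (illustrative Python; the gains, the stored trajectory, the delay, and the step size are placeholders rather than the values computed in Section 4) draws a Bernoulli sequence with success probability δ and evaluates the control input from either the current or the delayed state, never both.

```python
# Sketch: sample the Bernoulli variable delta(t) and evaluate the partially
# delay-dependent controller (22) along a stored trajectory. Placeholder data.
import numpy as np

rng = np.random.default_rng(1)
dt, tau, delta = 0.01, 1.0, 0.6            # step size, delay, Pr{delta(t) = 1}
steps, d_steps = 1000, int(tau / dt)

K     = np.array([[-1.8, -1.1]])           # placeholder gain for x(t)
K_tau = np.array([[-0.14, -0.24]])         # placeholder gain for x(t - tau)
x_hist = rng.standard_normal((steps + d_steps, 2))   # placeholder state history

u = np.zeros(steps)
for k in range(steps):
    bern = rng.random() < delta            # delta(t) = 1 with probability delta
    x_now = x_hist[k + d_steps]            # x(t)
    x_del = x_hist[k]                      # x(t - tau)
    u[k] = (K @ x_now if bern else K_tau @ x_del).item()
```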
Substituting (22) in (1), we have
$$
\left\{
\begin{aligned}
dx(t) &= \big[\hat{S}(\sigma_t)x(t) + \hat{S}_\tau(\sigma_t)x(t-\tau) + G(\sigma_t)v(t) + (\delta(t)-\delta)W(\sigma_t)\big]\,dt \\
&\quad + \big[\hat{U}(\sigma_t)x(t) + \hat{U}_\tau(\sigma_t)x(t-\tau) + F(\sigma_t)v(t) + (\delta(t)-\delta)Z(\sigma_t)\big]\,d\omega(t), \\
z(t) &= H(\sigma_t)x(t) + H_\tau(\sigma_t)x(t-\tau) + D(\sigma_t)v(t), \quad t \in [0, \tilde{T}], \\
x(t) &= \psi(t), \quad \sigma_t = \sigma_0, \quad t \in [-\tau, 0],
\end{aligned}
\right.
$$
where
$$
\begin{aligned}
\hat{S}(\sigma_t) &= S(\sigma_t) + \delta L(\sigma_t)K(\sigma_t), & \hat{S}_\tau(\sigma_t) &= S_\tau(\sigma_t) + (1-\delta)L(\sigma_t)K_\tau(\sigma_t), \\
W(\sigma_t) &= L(\sigma_t)K(\sigma_t)x(t) - L(\sigma_t)K_\tau(\sigma_t)x(t-\tau), \\
\hat{U}(\sigma_t) &= U(\sigma_t) + \delta J(\sigma_t)K(\sigma_t), & \hat{U}_\tau(\sigma_t) &= U_\tau(\sigma_t) + (1-\delta)J(\sigma_t)K_\tau(\sigma_t), \\
Z(\sigma_t) &= J(\sigma_t)K(\sigma_t)x(t) - J(\sigma_t)K_\tau(\sigma_t)x(t-\tau).
\end{aligned}
$$
The following theorem gives a sufficient condition for the H∞ FTB of the closed-loop system (23) under controller (22).
Theorem 2.
System (23) is H∞ FTB with respect to (c_1, c_2, T̃, R_i, d, γ) if, for a real scalar η ≥ 0, there exist scalars γ > 0, λ_i1 > 0, λ_i2 > 0, matrices X_i > 0, Ō_i > 0, and matrices Y_i, Y_τi satisfying
$$
\begin{bmatrix}
\tilde{\Psi}_{i1} & \tilde{\Psi}_{i2} & G_i & X_i H_i^T & \tilde{\Psi}_{i3} & \tilde{\Psi}_{i4} & X_i \\
* & \tilde{\Psi}_{i5} & 0 & H_{\tau i}^T & \tilde{\Psi}_{i6} & \tilde{\Psi}_{i7} & 0 \\
* & * & -\gamma^2 I & D_i^T & F_i^T & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -X_i & 0 & 0 \\
* & * & * & * & * & -X_i & 0 \\
* & * & * & * & * & * & -\bar{O}_i
\end{bmatrix} < 0,
$$
$$
R_i^{-1} < \bar{O}_i,
$$
$$
\begin{bmatrix} -\lambda_{i1} I & R_i^{\frac{1}{2}} \\ * & -X_i \end{bmatrix} < 0,
$$
$$
-2 R_i^{\frac{1}{2}} + X_i + \lambda_{i1} I < 0,
$$
$$
c_1(\lambda_{i2} + \tau) + \gamma^2 d^2 < c_2\, e^{-\eta \tilde{T}}\, \lambda_{i1},
$$
where
$$
\begin{aligned}
\tilde{\Psi}_{i1} ={}& (1 + \pi_k^i)\,(S_i X_i + \delta L_i Y_i)^{🟉} + X_i \zeta_k^i X_i^T + \sum_{j \in L_{uk}^i} \pi_{ij}\big[(S_i X_i + \delta L_i Y_i)^{🟉} + X_i X_j^{-1} X_i^T\big] - \eta X_i, \\
\tilde{\Psi}_{i2} ={}& S_{\tau i} X_i + (1-\delta) L_i Y_{\tau i}, \qquad
\tilde{\Psi}_{i3} = X_i U_i^T + \delta Y_i^T J_i^T, \qquad
\tilde{\Psi}_{i4} = \beta Y_i^T J_i^T, \\
\tilde{\Psi}_{i5} ={}& -2 X_i + \bar{O}_i, \qquad
\tilde{\Psi}_{i6} = X_i U_{\tau i}^T + (1-\delta) Y_{\tau i}^T J_i^T, \qquad
\tilde{\Psi}_{i7} = -\beta Y_{\tau i}^T J_i^T.
\end{aligned}
$$
Moreover, the gains of controller (22) are
$$
K_i = Y_i X_i^{-1}, \qquad K_{\tau i} = Y_{\tau i} X_i^{-1}.
$$
Proof. 
Choosing the Lyapunov functional (12) for system (23), we obtain
$$
\begin{aligned}
\mathcal{L}V(x_t, \sigma_t = i) ={}& x^T(t)\Big[(P_i \hat{S}_i)^{🟉} + \sum_{j=1}^{N} \pi_{ij} P_j\Big] x(t) + 2 x^T(t) P_i G_i v(t) + 2 x^T(t) P_i \hat{S}_{\tau i} x(t-\tau) \\
& + x^T(t) O_i x(t) - x^T(t-\tau) O_i x(t-\tau) + \beta^2 Z_i^T P_i Z_i + \tilde{\Xi}^T P_i \tilde{\Xi} \\
={}& x^T(t)\Big[(1 + \pi_k^i)(P_i \hat{S}_i)^{🟉} + \zeta_k^i + \sum_{j \in L_{uk}^i} \pi_{ij}\big((P_i \hat{S}_i)^{🟉} + P_j\big) + O_i\Big] x(t) \\
& + 2 x^T(t) P_i G_i v(t) + \beta^2 Z_i^T P_i Z_i + \tilde{\Xi}^T P_i \tilde{\Xi} - x^T(t-\tau) O_i x(t-\tau) + 2 x^T(t) P_i \hat{S}_{\tau i} x(t-\tau),
\end{aligned}
$$
where Ξ̃ = F_i v(t) + Û_i x(t) + Û_τi x(t − τ).
Let Ō_i = O_i^{−1} in (24) and use the following inequality:
$$
-X_i O_i X_i \leq -2 X_i + \bar{O}_i.
$$
The following result is obtained
$$
\begin{bmatrix}
\tilde{\Psi}_{i1} & \tilde{\Psi}_{i2} & G_i & X_i H_i^T & \tilde{\Psi}_{i3} & \tilde{\Psi}_{i4} & X_i \\
* & \acute{\Psi}_{i5} & 0 & H_{\tau i}^T & \tilde{\Psi}_{i6} & \tilde{\Psi}_{i7} & 0 \\
* & * & -\gamma^2 I & D_i^T & F_i^T & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -X_i & 0 & 0 \\
* & * & * & * & * & -X_i & 0 \\
* & * & * & * & * & * & -\bar{O}_i
\end{bmatrix} < 0,
$$
where Ψ́_{i5} = −X_i O_i X_i.
By pre- and post-multiplying both sides of (31) by diag{X_i^{−1}, X_i^{−1}, I, I, I, I, I} and its transpose, respectively, denoting X_i = P_i^{−1}, Y_i = K_i X_i, Y_τi = K_τi X_i, and applying Lemma 2, one obtains
$$
\begin{bmatrix}
\Pi_{i1} & \Pi_{i2} & \Pi_{i3} \\
* & -\gamma^2 I + F_i^T P_i F_i & D_i^T \\
* & * & -I
\end{bmatrix} < 0,
$$
where
$$
\begin{aligned}
\Pi_{i1} ={}& \begin{bmatrix} \hat{\Omega}_{i1} & \hat{\Omega}_{i2} \\ * & \hat{\Omega}_{i3} \end{bmatrix}, \\
\hat{\Omega}_{i1} ={}& (1+\pi_k^i)(P_i S_i + \delta P_i L_i K_i)^{🟉} + \zeta_k^i + \sum_{j \in L_{uk}^i}\pi_{ij}\big((P_i S_i + \delta P_i L_i K_i)^{🟉} + P_j\big) + O_i - \eta P_i \\
& + (U_i + \delta J_i K_i)^T P_i (U_i + \delta J_i K_i) + \beta^2 (J_i K_i)^T P_i (J_i K_i), \\
\hat{\Omega}_{i2} ={}& P_i S_{\tau i} + (1-\delta) P_i L_i K_{\tau i} - \beta^2 (J_i K_i)^T P_i (J_i K_{\tau i}) + (U_i + \delta J_i K_i)^T P_i \big(U_{\tau i} + (1-\delta) J_i K_{\tau i}\big), \\
\hat{\Omega}_{i3} ={}& -O_i + \beta^2 (J_i K_{\tau i})^T P_i (J_i K_{\tau i}) + \big(U_{\tau i} + (1-\delta) J_i K_{\tau i}\big)^T P_i \big(U_{\tau i} + (1-\delta) J_i K_{\tau i}\big), \\
\Pi_{i2} ={}& \begin{bmatrix} P_i G_i + (U_i + \delta J_i K_i)^T P_i F_i \\ \big(U_{\tau i} + (1-\delta) J_i K_{\tau i}\big)^T P_i F_i \end{bmatrix}, \qquad
\Pi_{i3} = \begin{bmatrix} H_i & H_{\tau i} \end{bmatrix}^T.
\end{aligned}
$$
By pre- and post-multiplying (32) by [x^T(t)  x^T(t−τ)  v^T(t)  z^T(t)] and its transpose, respectively, and comparing it with (29), it is seen that
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V_1(x_t, \sigma_t = i) + \gamma^2 v^T(t) v(t) - z^T(t) z(t).
$$
Then, one has
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V(x_t, \sigma_t = i) + \gamma^2 v^T(t) v(t) - z^T(t) z(t).
$$
Under the zero initial condition, taking the mathematical expectation and integrating both sides of (33) from 0 to t (t ∈ [0, T̃]), and applying Lemma 1, it is deduced that
$$
E[V(x_t, \sigma_t = i)] < e^{\eta \tilde{T}} \Big\{ \gamma^2 E\Big[\int_0^{\tilde{T}} v^T(t) v(t)\,dt\Big] - E\Big[\int_0^{\tilde{T}} z^T(t) z(t)\,dt\Big] \Big\}.
$$
It is also clear that (34) implies
$$
E\Big[\int_0^{\tilde{T}} z^T(t) z(t)\,dt\Big] < \gamma^2 E\Big[\int_0^{\tilde{T}} v^T(t) v(t)\,dt\Big].
$$
By (33), we obtain
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V(x_t, \sigma_t = i) + \gamma^2 v^T(t) v(t).
$$
Because R_i > 0, it is easy to see that (9) is exactly condition (26). Condition (10) is equivalent to P̄_i < λ_i2 I and P̄_i > λ_i1 I, that is,
$$
-\lambda_{i2} I + R_i^{-\frac{1}{2}} P_i R_i^{-\frac{1}{2}} < 0,
$$
and
$$
\lambda_{i1} I - R_i^{-\frac{1}{2}} P_i R_i^{-\frac{1}{2}} < 0.
$$
According to Lemma 2, (26) is equivalent to (36), and (37) is acquired from (27) and (30). From Theorem 1, if Q_i = γ²I, it is concluded that (14) and (35) are equivalent. The rest is similar to the proof of (16)–(21) and follows from conditions (9), (10), and (28). This completes the proof. □
Remark 4.
Compared with the literature [42,43,44], controller (22) combines the two traditional controllers u(t) = K x(t) and u(t) = K_τ x(t − τ) and is therefore more general, with broader applications such as networked control systems [45].
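The change of variables Y_i = K_i X_i, Y_τi = K_τi X_i behind Theorem 2 is the standard device for turning a bilinear synthesis condition into an LMI, with the gain recovered as K_i = Y_i X_i^{-1} afterwards. The sketch below shows only this pattern on a drastically simplified single-mode, delay-free, deterministic stabilization LMI; it is not the condition set of Theorem 2, and all numerical data are placeholders.

```python
# Sketch of the Y = K X change of variables behind Theorem 2, on a simplified
# single-mode, delay-free stabilization LMI: (S X + L Y) + (S X + L Y)^T < 0, X > 0.
import cvxpy as cp
import numpy as np

S = np.array([[0.5, 1.0], [0.0, -0.2]])   # assumed open-loop matrix (unstable)
L = np.array([[1.0], [0.5]])              # assumed input matrix
n, m = S.shape[0], L.shape[1]

X = cp.Variable((n, n), symmetric=True)   # plays the role of X_i = P_i^{-1}
Y = cp.Variable((m, n))                   # plays the role of Y_i = K_i X_i

closed = S @ X + L @ Y
lmi = closed + closed.T

eps = 1e-4
prob = cp.Problem(cp.Minimize(0),
                  [X >> eps * np.eye(n), lmi << -eps * np.eye(n)])
prob.solve(solver=cp.SCS)

K = Y.value @ np.linalg.inv(X.value)      # recover the gain: K = Y X^{-1}
print(prob.status, np.linalg.eigvals(S + L @ K))  # eigenvalues should have negative real parts
```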
With the idea behind controller (22), another stabilizing controller without a Bernoulli variable is devised
$$
u(t) = K_\tau(\sigma_t)\, x(t-\tau) + K(\sigma_t)\, x(t).
$$
Applying controller (38) to system (1), in which the Bernoulli variable δ(t) now enters the input channels, one obtains
$$
\left\{
\begin{aligned}
dx(t) &= \big[S(\sigma_t)x(t) + S_\tau(\sigma_t)x(t-\tau) + G(\sigma_t)v(t) + \delta(t)L(\sigma_t)u(t)\big]\,dt \\
&\quad + \big[U(\sigma_t)x(t) + U_\tau(\sigma_t)x(t-\tau) + (1-\delta(t))J(\sigma_t)u(t) + F(\sigma_t)v(t)\big]\,d\omega(t), \\
z(t) &= H(\sigma_t)x(t) + H_\tau(\sigma_t)x(t-\tau) + D(\sigma_t)v(t), \quad t \in [0, \tilde{T}], \\
x(t) &= \psi(t), \quad \sigma_t = \sigma_0, \quad t \in [-\tau, 0],
\end{aligned}
\right.
$$
which is rewritten as follows
$$
\left\{
\begin{aligned}
dx(t) &= \big[\hat{S}(\sigma_t)x(t) + \bar{S}_\tau(\sigma_t)x(t-\tau) + G(\sigma_t)v(t) + (\delta(t)-\delta)\bar{W}(\sigma_t)\big]\,dt \\
&\quad + \big[\bar{U}(\sigma_t)x(t) + \hat{U}_\tau(\sigma_t)x(t-\tau) + F(\sigma_t)v(t) - (\delta(t)-\delta)\bar{Z}(\sigma_t)\big]\,d\omega(t), \\
z(t) &= H(\sigma_t)x(t) + H_\tau(\sigma_t)x(t-\tau) + D(\sigma_t)v(t), \quad t \in [0, \tilde{T}], \\
x(t) &= \psi(t), \quad \sigma_t = \sigma_0, \quad t \in [-\tau, 0],
\end{aligned}
\right.
$$
where
$$
\begin{aligned}
\bar{S}_\tau(\sigma_t) &= S_\tau(\sigma_t) + \delta L(\sigma_t)K_\tau(\sigma_t), & \bar{W}(\sigma_t) &= L(\sigma_t)K(\sigma_t)x(t) + L(\sigma_t)K_\tau(\sigma_t)x(t-\tau), \\
\bar{U}(\sigma_t) &= U(\sigma_t) + (1-\delta)J(\sigma_t)K(\sigma_t), & \bar{Z}(\sigma_t) &= J(\sigma_t)K(\sigma_t)x(t) + J(\sigma_t)K_\tau(\sigma_t)x(t-\tau).
\end{aligned}
$$
The following theorem gives a sufficient condition for the H∞ FTB of the closed-loop system (39).
Theorem 3.
System (39) is H∞ FTB with respect to (c_1, c_2, T̃, R_i, d, γ) if, for a real scalar η ≥ 0, there exist constants γ > 0, λ_i1 > 0, λ_i2 > 0, a symmetric matrix X_i > 0, matrices Ō_i > 0 and Y_i, Y_τi satisfying (25)–(28), and
$$
\begin{bmatrix}
\tilde{\Psi}_{i1} & \hat{\Psi}_{i2} & G_i & X_i H_i^T & \hat{\Psi}_{i3} & \tilde{\Psi}_{i4} & X_i \\
* & \tilde{\Psi}_{i5} & 0 & H_{\tau i}^T & \tilde{\Psi}_{i6} & \hat{\Psi}_{i7} & 0 \\
* & * & -\gamma^2 I & D_i^T & F_i^T & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -X_i & 0 & 0 \\
* & * & * & * & * & -X_i & 0 \\
* & * & * & * & * & * & -\bar{O}_i
\end{bmatrix} < 0,
$$
where
$$
\hat{\Psi}_{i2} = S_{\tau i} X_i + \delta L_i Y_{\tau i}, \qquad
\hat{\Psi}_{i7} = \beta Y_{\tau i}^T J_i^T, \qquad
\hat{\Psi}_{i3} = X_i U_i^T + (1-\delta) Y_i^T J_i^T.
$$
The gains of controller (38) are presented by
$$
K_i = Y_i X_i^{-1}, \qquad K_{\tau i} = Y_{\tau i} X_i^{-1}.
$$
Proof. 
Choosing the Lyapunov functional (12) for system (39), then L V ( x t , σ t = i ) satisfies
$$
\begin{aligned}
\mathcal{L}V(x_t, \sigma_t = i) ={}& x^T(t)\Big[(P_i \hat{S}_i)^{🟉} + \sum_{j=1}^{N} \pi_{ij} P_j\Big] x(t) + 2 x^T(t) P_i \bar{S}_{\tau i} x(t-\tau) + 2 x^T(t) P_i G_i v(t) \\
& + \beta^2 \bar{Z}_i^T P_i \bar{Z}_i + x^T(t) O_i x(t) + \hat{\Xi}^T P_i \hat{\Xi} - x^T(t-\tau) O_i x(t-\tau) \\
={}& x^T(t)\Big[(1 + \pi_k^i)(P_i \hat{S}_i)^{🟉} + \zeta_k^i + \sum_{j \in L_{uk}^i} \pi_{ij}\big((P_i \hat{S}_i)^{🟉} + P_j\big) + O_i\Big] x(t) \\
& + 2 x^T(t) P_i \bar{S}_{\tau i} x(t-\tau) + \beta^2 \bar{Z}_i^T P_i \bar{Z}_i + \hat{\Xi}^T P_i \hat{\Xi} + 2 x^T(t) P_i G_i v(t) - x^T(t-\tau) O_i x(t-\tau),
\end{aligned}
$$
where Ξ̂ = F_i v(t) + Ū_i x(t) + Û_τi x(t − τ).
The next steps are the same as those in the proof of Theorem 2. Pre- and post-multiply (40) by diag{X_i^{−1}, X_i^{−1}, I, …, I} and its transpose, respectively. Then, by Schur's complement and by pre- and post-multiplying both sides by [x^T(t)  x^T(t−τ)  v^T(t)  z^T(t)] and its transpose, respectively, and comparing with (41), one obtains
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V(x_t, \sigma_t = i) + \gamma^2 v^T(t) v(t) - z^T(t) z(t).
$$
The following step is similar to Theorem 2 and is omitted here. The proof ends. □
Remark 5.
If K τ ( σ t ) = 0 , then Theorem 3 is reduced to Theorem 3.3 in [51].
For system (1), another controller experiencing a disordering phenomenon is described as
$$
u(t) = \big[(1-\delta(t))K_\tau(\sigma_t) + \delta(t)K(\sigma_t)\big] x(t) + \big[(1-\delta(t))K(\sigma_t) + \delta(t)K_\tau(\sigma_t)\big] x(t-\tau),
$$
which implies
$$
u(t) = \begin{cases}
K_\tau(\sigma_t) x(t-\tau) + K(\sigma_t) x(t), & \text{if } \delta(t) = 1 \text{ (without disordering)}, \\
K(\sigma_t) x(t-\tau) + K_\tau(\sigma_t) x(t), & \text{if } \delta(t) = 0 \text{ (with disordering)}.
\end{cases}
$$
It is easy to see that (42) is the same as
$$
\begin{aligned}
u(t) ={}& \big[(1-\delta)K_\tau(\sigma_t) + \delta K(\sigma_t) + (\delta(t)-\delta)\big(K(\sigma_t) - K_\tau(\sigma_t)\big)\big] x(t) \\
& + \big[(1-\delta)K(\sigma_t) + \delta K_\tau(\sigma_t) + (\delta(t)-\delta)\big(K_\tau(\sigma_t) - K(\sigma_t)\big)\big] x(t-\tau).
\end{aligned}
$$
Applying controller (43) to system (1) and letting δ_t = δ(t) − δ, we have
$$
\left\{
\begin{aligned}
dx(t) &= \big[\tilde{S}(\sigma_t)x(t) + \tilde{S}_\tau(\sigma_t)x(t-\tau) + G(\sigma_t)v(t) + \delta_t L(\sigma_t)\big(K(\sigma_t)-K_\tau(\sigma_t)\big)x(t) \\
&\qquad + \delta_t L(\sigma_t)\big(K_\tau(\sigma_t)-K(\sigma_t)\big)x(t-\tau)\big]\,dt \\
&\quad + \big[\tilde{U}(\sigma_t)x(t) + \tilde{U}_\tau(\sigma_t)x(t-\tau) + F(\sigma_t)v(t) + \delta_t J(\sigma_t)\big(K(\sigma_t)-K_\tau(\sigma_t)\big)x(t) \\
&\qquad + \delta_t J(\sigma_t)\big(K_\tau(\sigma_t)-K(\sigma_t)\big)x(t-\tau)\big]\,d\omega(t), \\
z(t) &= H(\sigma_t)x(t) + H_\tau(\sigma_t)x(t-\tau) + D(\sigma_t)v(t), \quad t \in [0, \tilde{T}], \\
x(t) &= \psi(t), \quad \sigma_t = \sigma_0, \quad t \in [-\tau, 0],
\end{aligned}
\right.
$$
where
$$
\begin{aligned}
\tilde{S}(\sigma_t) &= S(\sigma_t) + L(\sigma_t)\big[\delta K(\sigma_t) + (1-\delta)K_\tau(\sigma_t)\big], & \tilde{S}_\tau(\sigma_t) &= S_\tau(\sigma_t) + L(\sigma_t)\big[\delta K_\tau(\sigma_t) + (1-\delta)K(\sigma_t)\big], \\
\tilde{U}(\sigma_t) &= U(\sigma_t) + J(\sigma_t)\big[\delta K(\sigma_t) + (1-\delta)K_\tau(\sigma_t)\big], & \tilde{U}_\tau(\sigma_t) &= U_\tau(\sigma_t) + J(\sigma_t)\big[\delta K_\tau(\sigma_t) + (1-\delta)K(\sigma_t)\big].
\end{aligned}
$$
Then, the following theorem is developed.
Theorem 4.
For the given real scalar η ≥ 0, system (44) is H∞ FTB with respect to (c_1, c_2, T̃, R_i, d, γ) if there exist γ > 0, λ_i1 > 0, λ_i2 > 0, matrices X_i > 0, Ō_i > 0 and Y_i, Y_τi satisfying (25)–(28), and
$$
\begin{bmatrix}
\breve{\Psi}_{i1} & \breve{\Psi}_{i2} & G_i & X_i H_i^T & \breve{\Psi}_{i3} & \breve{\Psi}_{i4} & X_i \\
* & \tilde{\Psi}_{i5} & 0 & H_{\tau i}^T & \breve{\Psi}_{i6} & \breve{\Psi}_{i7} & 0 \\
* & * & -\gamma^2 I & D_i^T & F_i^T & 0 & 0 \\
* & * & * & -I & 0 & 0 & 0 \\
* & * & * & * & -X_i & 0 & 0 \\
* & * & * & * & * & -X_i & 0 \\
* & * & * & * & * & * & -\bar{O}_i
\end{bmatrix} < 0,
$$
where
$$
\begin{aligned}
\breve{\Psi}_{i1} ={}& (1+\pi_k^i)\big(S_i X_i + \delta L_i Y_i + (1-\delta)L_i Y_{\tau i}\big)^{🟉} + X_i \zeta_k^i X_i^T - \eta X_i \\
& + \sum_{j \in L_{uk}^i}\pi_{ij}\big[\big(S_i X_i + \delta L_i Y_i + (1-\delta)L_i Y_{\tau i}\big)^{🟉} + X_i X_j^{-1} X_i^T\big], \\
\breve{\Psi}_{i2} ={}& S_{\tau i} X_i + \delta L_i Y_{\tau i} + (1-\delta)L_i Y_i, \qquad
\breve{\Psi}_{i3} = X_i U_i^T + \delta Y_i^T J_i^T + (1-\delta) Y_{\tau i}^T J_i^T, \\
\breve{\Psi}_{i4} ={}& \beta\big(Y_i^T J_i^T - Y_{\tau i}^T J_i^T\big), \qquad
\breve{\Psi}_{i6} = X_i U_{\tau i}^T + \delta Y_{\tau i}^T J_i^T + (1-\delta) Y_i^T J_i^T, \qquad
\breve{\Psi}_{i7} = \beta\big(Y_{\tau i}^T J_i^T - Y_i^T J_i^T\big).
\end{aligned}
$$
Then, the gains of controller (42) are obtained by
$$
K_i = Y_i X_i^{-1}, \qquad K_{\tau i} = Y_{\tau i} X_i^{-1}.
$$
Proof. 
Choosing the Lyapunov functional (12) for system (44), it is obtained that
$$
\begin{aligned}
\mathcal{L}V(x_t, \sigma_t = i) ={}& x^T(t)\Big[(P_i \tilde{S}_i)^{🟉} + \sum_{j=1}^{N} \pi_{ij} P_j\Big] x(t) + \beta^2 \breve{Z}_i^T P_i \breve{Z}_i + 2 x^T(t) P_i G_i v(t) \\
& + 2 x^T(t) P_i \tilde{S}_{\tau i} x(t-\tau) + x^T(t) O_i x(t) + \breve{\Xi}^T P_i \breve{\Xi} - x^T(t-\tau) O_i x(t-\tau) \\
={}& x^T(t)\Big[(1 + \pi_k^i)(P_i \tilde{S}_i)^{🟉} + \zeta_k^i + \sum_{j \in L_{uk}^i} \pi_{ij}\big((P_i \tilde{S}_i)^{🟉} + P_j\big) + O_i\Big] x(t) \\
& + 2 x^T(t) P_i \tilde{S}_{\tau i} x(t-\tau) + 2 x^T(t) P_i G_i v(t) + \beta^2 \breve{Z}_i^T P_i \breve{Z}_i + \breve{\Xi}^T P_i \breve{\Xi} - x^T(t-\tau) O_i x(t-\tau),
\end{aligned}
$$
where
$$
\breve{\Xi} = \tilde{U}_i x(t) + \tilde{U}_{\tau i} x(t-\tau) + F_i v(t), \qquad
\breve{Z}_i = J_i (K_i - K_{\tau i}) x(t) + J_i (K_{\tau i} - K_i) x(t-\tau).
$$
Pre- and post-multiply (45) by diag{X_i^{−1}, X_i^{−1}, I, …, I} and its transpose, respectively. Then, from Lemma 2, by pre- and post-multiplying both sides by [x^T(t)  x^T(t−τ)  v^T(t)  z^T(t)] and its transpose, respectively, and comparing with (46), one obtains
$$
\mathcal{L}V(x_t, \sigma_t = i) < \eta V(x_t, \sigma_t = i) + \gamma^2 v^T(t) v(t) - z^T(t) z(t).
$$
The remaining steps are the same as those of Theorem 2 and are omitted here. The proof is complete. □
Remark 6.
If δ ( t ) = 1 or controller (42) does not experience a disordering phenomenon, then Theorem 4 is reduced to Theorem 3.

4. Numerical Examples

In this part, three examples are given to illustrate the effectiveness of the proposed results.
Example 1.
Consider system (1) with the following parameters:
Mode 1:
$$
S_1 = \begin{bmatrix} 3.1 & 0.3 \\ 1 & 0.1 \end{bmatrix}, \quad
S_{\tau 1} = \begin{bmatrix} 1.7 & 1.1 \\ 0 & 0.2 \end{bmatrix}, \quad
U_1 = \begin{bmatrix} 0.61 & 0.13 \\ 0.17 & 0.15 \end{bmatrix}, \quad
U_{\tau 1} = \begin{bmatrix} 0.2 & 0.1 \\ 0 & 0.1 \end{bmatrix},
$$
$$
G_1 = \begin{bmatrix} 0.1 & 1.3 \\ 0.2 & 0.9 \end{bmatrix}, \quad
F_1 = \begin{bmatrix} 0.2 & 0.7 \\ 1.9 & 0 \end{bmatrix}, \quad
L_1 = \begin{bmatrix} 2.1 \\ 0.9 \end{bmatrix}, \quad
J_1 = \begin{bmatrix} 1.1 \\ 0.5 \end{bmatrix}, \quad
R_1 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
$$
$$
H_1 = \begin{bmatrix} 0.4 & 0.1 \\ 0.7 & 0.1 \end{bmatrix}, \quad
H_{\tau 1} = \begin{bmatrix} 0.1 & 0.2 \\ 0.2 & 0.3 \end{bmatrix}, \quad
D_1 = \begin{bmatrix} 0.2 & 0.5 \\ 0.3 & 0.1 \end{bmatrix}.
$$
Mode 2:
$$
S_2 = \begin{bmatrix} 3.9 & 0.9 \\ 1.1 & 0 \end{bmatrix}, \quad
S_{\tau 2} = \begin{bmatrix} 0.7 & 1.2 \\ 0 & 0.3 \end{bmatrix}, \quad
U_2 = \begin{bmatrix} 0.5 & 0.3 \\ 0.1 & 0.3 \end{bmatrix}, \quad
U_{\tau 2} = \begin{bmatrix} 0.2 & 0.3 \\ 0.5 & 0.1 \end{bmatrix},
$$
$$
G_2 = \begin{bmatrix} 0.3 & 0.9 \\ 0.7 & 1.1 \end{bmatrix}, \quad
F_2 = \begin{bmatrix} 0.1 & 1 \\ 0.2 & 0.1 \end{bmatrix}, \quad
L_2 = \begin{bmatrix} 1.6 \\ 1.5 \end{bmatrix}, \quad
J_2 = \begin{bmatrix} 1.3 \\ 0.4 \end{bmatrix}, \quad
R_2 = \begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix},
$$
$$
H_2 = \begin{bmatrix} 0.3 & 0.1 \\ 0.9 & 0.3 \end{bmatrix}, \quad
H_{\tau 2} = \begin{bmatrix} 0.1 & 0.2 \\ 0.1 & 0.3 \end{bmatrix}, \quad
D_2 = \begin{bmatrix} 0.2 & 0.6 \\ 0.2 & 0.1 \end{bmatrix}.
$$
The partially unknown transition rate matrix is
$$
\Pi = \begin{bmatrix} -0.5 & 0.5 \\ ? & ? \end{bmatrix}.
$$
Moreover, T̃ = 10, c_1 = 0.5, τ = 1, δ = 0.6, d = 1, x_0 = [0.1 0.05]^T, and v(t) = 1/(1 + t²). From Theorem 2, a feasible solution can be found when η ∈ [0, 1.90]. The relationship curves of c_2 and γ with respect to η are shown in Figure 1 and Figure 2, respectively. From Figure 1, it is seen that the minimum value of c_2 is 32.3726, with the corresponding γ = 2.8132, when η = 0.05.
When η = 0.05, the gains of controller (22) are
$$
K_1 = \begin{bmatrix} 1.8478 & 1.1423 \end{bmatrix}, \quad K_{\tau 1} = \begin{bmatrix} 0.1438 & 0.2441 \end{bmatrix},
$$
$$
K_2 = \begin{bmatrix} 1.1312 & 1.4924 \end{bmatrix}, \quad K_{\tau 2} = \begin{bmatrix} 0.0100 & 0.2335 \end{bmatrix}.
$$
This indicates that, under controller (22), if E[x^T(t_1) R_i x(t_1)] ≤ 0.5 for t_1 ∈ [−1, 0], then E[x^T(t_2) R_i x(t_2)] < 32.3726 for t_2 ∈ [0, T̃], and E[∫_0^{T̃} z^T(t) z(t) dt] < 2.8132² E[∫_0^{T̃} v^T(t) v(t) dt].
Under the conditions mentioned above, Figure 3 shows the state response of system (23), where the inset plots depict a possible Markovian mode evolution and the evolution of the Bernoulli variable δ(t) with δ = 0.6. The evolution of E[x^T(t) R x(t)] is shown in Figure 4, which implies that the closed-loop system (23) is H∞ FTB.
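For readers who wish to reproduce plots in the spirit of Figure 3 and Figure 4, the following sketch outlines an Euler–Maruyama simulation of a two-mode closed-loop jump system under a controller of the form (22). It is a simplified illustration: the mode matrices, gains, diffusion term, and the unknown second row of Π are placeholders or assumptions, not the exact data and solutions of Example 1.

```python
# Sketch: Euler-Maruyama simulation of a 2-mode closed-loop jump system under a
# PDDC of the form (22). All numerical data are placeholders / assumptions,
# including the second row of the transition rate matrix (unknown in the paper).
import numpy as np

rng = np.random.default_rng(0)
dt, tau, T = 0.001, 1.0, 10.0
n_steps, d_steps = int(T / dt), int(tau / dt)
delta = 0.6

S  = [np.array([[-3.1, 0.3], [1.0, -0.1]]), np.array([[-3.9, 0.9], [1.1, -0.5]])]
St = [np.array([[-1.7, 1.1], [0.0, -0.2]]), np.array([[-0.7, 1.2], [0.0, -0.3]])]
U  = [np.array([[0.61, 0.13], [0.17, 0.15]]), np.array([[0.5, 0.3], [0.1, 0.3]])]
L  = [np.array([2.1, 0.9]),  np.array([1.6, 1.5])]
G  = [np.array([0.1, 0.2]),  np.array([0.3, 0.7])]
K  = [np.array([[-1.85, -1.14]]), np.array([[-1.13, -1.49]])]
Kt = [np.array([[-0.14, -0.24]]), np.array([[-0.01, -0.23]])]
Pi = np.array([[-0.5, 0.5], [0.4, -0.4]])   # second row assumed

x = np.zeros((n_steps + d_steps + 1, 2))
x[:d_steps + 1] = np.array([0.1, 0.05])     # constant initial history psi(t)
mode = 0

for k in range(n_steps):
    i = mode
    x_now, x_del = x[k + d_steps], x[k]
    v = 1.0 / (1.0 + (k * dt) ** 2)
    u = (K[i] @ x_now if rng.random() < delta else Kt[i] @ x_del).item()
    drift = S[i] @ x_now + St[i] @ x_del + L[i] * u + G[i] * v
    dW = np.sqrt(dt) * rng.standard_normal()
    x[k + d_steps + 1] = x_now + drift * dt + (U[i] @ x_now) * dW   # simplified diffusion
    if rng.random() < -Pi[i, i] * dt:        # leave mode i with rate -pi_ii
        mode = 1 - i

# x[d_steps:] holds the state on [0, T]; E[x^T R x] can be estimated by averaging runs.
```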
To show the advantages of Theorem 2 and the influence of the probability δ, Figure 5 depicts the relationship between c_2 and δ. It is seen that c_2 attains its minimum value when δ = 0.78, which means that controller (22) is less conservative.
Example 2.
Consider system (39) with the parameters of Example 1. By Theorem 3, a feasible solution is obtained when η ∈ [0, 1.90]. The minimum value of c_2 is 67.8162, with the corresponding γ = 5.0148, when η = 0. The gains of controller (38) are
$$
K_1 = \begin{bmatrix} 2.1007 & 1.3952 \end{bmatrix}, \quad K_{\tau 1} = \begin{bmatrix} 0.3184 & 0.1772 \end{bmatrix},
$$
$$
K_2 = \begin{bmatrix} 0.2674 & 1.8616 \end{bmatrix}, \quad K_{\tau 2} = \begin{bmatrix} 0.1328 & 0.4017 \end{bmatrix}.
$$
Figure 6 and Figure 7 show the state response of system (39) and the evolution of E[x^T(t) R x(t)], respectively. From these figures, it is seen that the closed-loop system (39) is H∞ FTB under the designed controller (38). This implies that Theorem 3 is valid.
Example 3.
Consider system (44) with the system parameters of Example 1. By Theorem 4, the feasible region is η ∈ [0, 1.89]. When η = 0.03, the minimum value of c_2 is 310.9813, and the corresponding γ = 9.7316. The gains of controller (42) are
$$
K_1 = \begin{bmatrix} 0.5970 & 0.3385 \end{bmatrix}, \quad K_{\tau 1} = \begin{bmatrix} 0.8791 & 0.2409 \end{bmatrix},
$$
$$
K_2 = \begin{bmatrix} 0.6966 & 0.3475 \end{bmatrix}, \quad K_{\tau 2} = \begin{bmatrix} 0.6689 & 0.2812 \end{bmatrix}.
$$
Similar to Example 2, the state response of system (44) is shown in Figure 8, and the evolution of E[x^T(t) R x(t)] is drawn in Figure 9. It is concluded from these plots that the closed-loop system (44) is H∞ FTB under the designed controller (42). Therefore, Theorem 4 is valid.

5. Conclusions

In this paper, the FTB and H∞ FTB problems of time-delay Markovian jump systems with a partially unknown transition rate have been studied. A sufficient condition for the FTB of the given system is obtained by the LMI technique and the Lyapunov functional method. A new partially delay-dependent controller is designed; this controller has the advantages of strong generality and lower conservatism. Based on the PDDC, two new kinds of controllers are derived: one does not contain the Bernoulli variable, and the other describes controllers experiencing a disordering phenomenon. Combined with LMIs, some sufficient conditions for the H∞ FTB of the closed-loop systems are given via the designed controllers. Three numerical examples illustrate that the proposed methods are effective. The results in this paper can be extended to the H∞ filtering problem for Markovian jump systems with time-varying delays. In the future, the FTB and H∞ FTB problems of fractional systems will be considered by means of the theories of fractional calculus and negative probabilities [56].

Author Contributions

Conceptualization, Y.L.; Data curation, X.G.; Funding acquisition, X.L.; Methodology, X.G.; Writing-original draft, W.L.; Writing-review and editing, X.L. and Y.L. All authors read and agreed to the published version of the manuscript.

Funding

This work was supported by the National Natural Science Foundation of China (No. 61972236) and the Shandong Provincial Natural Science Foundation (No. ZR2022MF233).

Institutional Review Board Statement

Not applicable.

Data Availability Statement

Not applicable.

Acknowledgments

The authors are grateful to the referees for their constructive suggestions.

Conflicts of Interest

The authors declare no conflict of interest.

Appendix A

In this section, Lemma 1 is reformulated with sharp inequalities and also proved.
Let g ( t ) be a nonnegative continuous function. If there are positive constants r , q such that
$$
g(t) < r + q \int_0^t g(s)\,ds, \quad 0 \leq t \leq \tilde{T},
$$
then
$$
g(t) < r \exp(qt), \quad 0 \leq t \leq \tilde{T}.
$$
Proof. 
Let
$$
U(t) = r + q \int_0^t g(s)\,ds, \quad 0 \leq t \leq \tilde{T}.
$$
Then, the derivative of U(t) is U̇(t) = q g(t). From (A1), we have U̇(t) < q U(t), i.e., U̇(t) − q U(t) < 0. Then, it is deduced that
$$
\dot{U}(t) \exp(-qt) - q U(t) \exp(-qt) < 0,
$$
which implies that the derivative of U(t)exp(−qt) satisfies (U(t)exp(−qt))′ < 0. By monotonicity, U(t)exp(−qt) < U(0) = r, which guarantees that U(t) < r exp(qt). Together with (A1) and (A3), (A2) is obtained. □
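A quick numerical sanity check of the strict version of the lemma (illustrative only; the test function g(t) = 0.9 r e^{qt} and the grid are arbitrary choices) is given below: the hypothesis (A1) and the conclusion (A2) are both verified on a grid.

```python
# Sketch: numerical sanity check of the strict Gronwall-Bellman bound (A1)-(A2)
# for the test function g(t) = 0.9 * r * exp(q t) on a grid over [0, T].
import numpy as np

r, q, T = 1.0, 0.5, 10.0
t = np.linspace(0.0, T, 2001)
g = 0.9 * r * np.exp(q * t)

# Right-hand side of (A1): r + q * integral_0^t g(s) ds (cumulative trapezoidal rule).
integral = np.concatenate(([0.0], np.cumsum(0.5 * (g[1:] + g[:-1]) * np.diff(t))))
rhs_A1 = r + q * integral

print(np.all(g < rhs_A1))            # hypothesis (A1) holds: True
print(np.all(g < r * np.exp(q * t))) # conclusion (A2) holds: True
```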

References

  1. Langton, R. Stability and Control of Aircraft Systems: Introduction to Classical Feedback Control; John Wiley and Sons, Ltd.: Hoboken, NJ, USA, 2006. [Google Scholar] [CrossRef]
  2. Liu, G.; Ma, L.; Liu, J. Handbook of Chemistry and Chemical Material Property Data; Chemical Industry Press: Beijing, China, 2002. [Google Scholar]
  3. Zivanovic, M.; Vukobratovic, M. Multi-Arm Cooperating Robots: Dynamics and Control, 1st ed.; Springer: Berlin/Heidelberg, Germany, 2006. [Google Scholar]
  4. Fridman, E.; Shaked, U. Parameter dependent stability and stabilization of uncertain time-delay systems. IEEE Trans. Autom. Control. 2003, 48, 861–866. [Google Scholar] [CrossRef]
  5. Yakoubi, K.; Chitour, Y. Linear systems subject to input saturation and time delay: Global asymptotic stabilization. IEEE Trans. Autom. Control. 2007, 52, 874–879. [Google Scholar] [CrossRef]
  6. Hai, L.; Antsaklis, P. Stability and stabilizability of switched linear systems: A survey of recent results. IEEE Trans. Autom. Control 2009, 54, 308–322. [Google Scholar] [CrossRef]
  7. Mancilla-Aguilar, J.; Haimovich, H.; Garcia, R. Global stability results for switched systems based on weak Lyapunov functions. IEEE Trans. Autom. Control 2017, 62, 2764–2777. [Google Scholar] [CrossRef]
  8. Dorato, P. Short time stability in linear time-varying systems. Proc. IRE Int. Conv. Rec. 1961, 4, 83–87. [Google Scholar]
  9. Kamenkov, G. On stability of motion over a finite interval of time. J. Appl. Math. Mech. 1953, 17, 529–540. [Google Scholar]
  10. Amato, F.; Ariola, M.; Dorato, P. Finite-time control of linear systems subject to parametric uncertainties and disturbances. Automatica 2001, 37, 1459–1463. [Google Scholar] [CrossRef]
  11. Amato, F.; Ariola, M.; Cosentino, C. Finite-time stabilization via dynamic output feedback. Automatica 2005, 42, 337–342. [Google Scholar] [CrossRef]
  12. Amato, F.; Ariola, M.; Cosentino, C.; De Tommasi, G. Input-output finite time stabilization of linear systems. Automatica 2010, 46, 1558–1562. [Google Scholar] [CrossRef]
  13. Amato, F.; Ambrosino, R.; Ariola, M.; De Tommasi, G.; Pironti, A. On the finite-time boundedness of linear systems. Automatica 2019, 107, 454–466. [Google Scholar] [CrossRef]
  14. Liu, X.; Liu, Q.; Li, Y. Finite-time guaranteed cost control for uncertain mean-field stochastic systems. J. Frankl. Inst. 2020, 357, 2813–2829. [Google Scholar] [CrossRef]
  15. Tartaglione, G.; Ariola, M.; Cosentino, C.; De Tommasi, G.; Pironti, A.; Amato, F. Annular finite-time stability analysis and synthesis of stochastic linear time-varying systems. Int. J. Control. 2019, 94, 2252–2263. [Google Scholar] [CrossRef]
  16. Tartaglione, G.; Ariola, M.; Amato, F. Conditions for annular finite-time stability of Itô stochastic linear time-varying systems with Markov switching. IET Control. Theory Appl. 2019, 14, 626–633. [Google Scholar] [CrossRef]
  17. Bai, Y.; Sun, H.; Wu, A. Finite-time stability and stabilization of Markovian jump linear systems subject to incomplete transition descriptions. Int. J. Control. Autom. Syst. 2021, 19, 2999–3012. [Google Scholar] [CrossRef]
  18. Gu, Y.; Shen, M.; Ren, Y.; Liu, H. H∞ finite-time control of unknown uncertain systems with actuator failure. Appl. Math. Comput. 2020, 383, 125375. [Google Scholar] [CrossRef]
  19. Liu, X.; Wang, J.; Li, Y. Finite-time stability analysis for discrete-time stochastic nonlinear systems with time-varying delay. J. Shandong Univ. Sci. Technol. (Nat. Sci.) 2021, 40, 110–120. [Google Scholar] [CrossRef]
  20. Shen, H.; Li, F.; Yan, H.; Zhang, C.; Yan, H. Finite-time event-triggered H∞ control for T-S fuzzy Markov systems. IEEE Trans. Fuzzy Syst. 2018, 26, 3122–3135. [Google Scholar] [CrossRef] [Green Version]
  21. Liu, H.; Shen, Y. Finite-time bounded stabilisation for linear systems with finite-time H2-gain constraint. IET Control. Theory Appl. 2020, 14, 1266–1275. [Google Scholar] [CrossRef]
  22. Li, M.; Sun, L.; Yang, R. Finite-time H∞ control for a class of discrete-time nonlinear singular systems. J. Frankl. Inst. 2018, 355, 5384–5393. [Google Scholar] [CrossRef]
  23. Xiang, Z.; Qiao, C.; Mahmoud, M. Finite-time analysis and H∞ control for switched stochastic systems. J. Frankl. Inst. 2011, 349, 915–927. [Google Scholar] [CrossRef]
  24. Zhuang, J.; Liu, X.; Li, Y. Event-triggered annular finite-time H∞ filtering for stochastic network systems. J. Frankl. Inst. 2022, 359, 11208–11228. [Google Scholar] [CrossRef]
  25. De la Sen, M.; Ibeas, A.; Nistal, R. On the entropy of events under eventually global inflated or deflated probability constraints. Application to the supervision of epidemic models under vaccination controls. Entropy 2020, 22, 284. [Google Scholar] [CrossRef] [PubMed] [Green Version]
  26. Li, Y.; Zhang, W.; Liu, X. H− index for discrete-time stochastic systems with Markovian jump and multiplicative noise. Automatica 2018, 90, 286–293. [Google Scholar] [CrossRef]
  27. Liu, X.; Zhang, W.; Li, Y. H− index for continuous-time stochastic systems with Markov jump and multiplicative noise. Automatica 2019, 105, 167–178. [Google Scholar] [CrossRef]
  28. Zhuang, J.; Liu, X.; Li, Y. H− index for Itô stochastic systems with Poisson jump. J. Frankl. Inst. 2021, 358, 9929–9950. [Google Scholar] [CrossRef]
  29. Wang, G.; Liu, L.; Zhang, Q.; Yang, C. Finite-time stability and stabilization of stochastic delayed jump systems via general controllers. J. Frankl. Inst. 2016, 354, 938–966. [Google Scholar] [CrossRef]
  30. Yan, Z.; Ju, H.; Zhang, W. Finite-time guaranteed cost control for Itô stochastic Markovian jump systems with incomplete transition rates. Int. J. Robust Nonlinear Control. 2016, 27, 66–83. [Google Scholar] [CrossRef]
  31. Yan, G.; Zhang, W.; Zhang, G. Finite-time stability and stabilization of Itô stochastic systems with Markovian switching: Mode-dependent parameter approach. IEEE Trans. Autom. Control 2015, 60, 2428–2433. [Google Scholar] [CrossRef]
  32. Liu, X.; Li, W.; Yao, C.; Li, Y. Finite-time guaranteed cost control for Markovian jump systems with time-varying delays. Mathematics 2022, 10, 2028. [Google Scholar] [CrossRef]
  33. Liu, Z.; Miao, S.; Fan, Z.; Han, J. Markovian switching model and non-linear DC modulation control of AC/DC power system. IET Gener. Transm. Distrib. 2017, 11, 2654–2663. [Google Scholar] [CrossRef]
  34. Liu, X.; Zhuang, J.; Li, Y. H∞ filtering for Markovian jump linear systems with uncertain transition probabilities. Int. J. Control. Autom. Syst. 2021, 19, 2500–2510. [Google Scholar] [CrossRef]
  35. Zhang, L.; Guo, G. Control of a group of systems whose communication channels are assigned by a semi-Markov process. Int. J. Syst. Sci. 2019, 50, 2306–2315. [Google Scholar] [CrossRef]
  36. Qi, W.; Gao, X. Finite-time H∞ control for stochastic time-delayed Markovian switching systems with partly known transition rates and nonlinearity. Int. J. Syst. Sci. 2015, 47, 500–508. [Google Scholar] [CrossRef]
  37. Cheng, J.; Zhu, H.; Zhong, S. Finite-time H∞ filtering for discrete-time Markovian jump systems with partly unknown transition probabilities. J. Frankl. Inst. 2013, 91, 1020–1028. [Google Scholar] [CrossRef]
  38. Wu, Z.; Yang, L.; Jiang, B. Finite-time H∞ control of stochastic singular systems with partly known transition rates via an optimization algorithm. Int. J. Control. Autom. Syst. 2019, 17, 1462–1472. [Google Scholar] [CrossRef]
  39. Jiang, W.; Liu, K.; Charalambous, T. Multi-agent consensus with heterogeneous time-varying input and communication delays in digraphs. Automatica 2022, 135, 109950. [Google Scholar] [CrossRef]
  40. Pourdehi, S.; Kavimaghaee, P. Stability analysis and design of model predictive reset control for nonlinear time-delay systems with application to a two-stage chemical reactor system. J. Process Control 2018, 71, 103–115. [Google Scholar] [CrossRef]
  41. Truong, D. Using causal machine learning for predicting the risk of flight delays in air transportation. J. Air Transp. Manag. 2021, 91, 101993. [Google Scholar] [CrossRef]
  42. Li, X.; Li, P. Stability of time-delay systems with impulsive control involving stabilizing delays. Automatica 2020, 124, 109336. [Google Scholar] [CrossRef]
  43. Li, L.; Zhang, H.; Wang, Y. Stabilization and optimal control of discrete-time systems with multiplicative noise and multiple input delays. Syst. Control. Lett. 2021, 147, 104833. [Google Scholar] [CrossRef]
  44. Zong, G.; Wang, R.; Zheng, W. Finite-time H∞ control for discrete-time switched nonlinear systems with time delay. Int. J. Robust Nonlinear Control. 2015, 25, 914–936. [Google Scholar] [CrossRef]
  45. Qiu, L.; He, L.; Dai, L.; Fang, C.; Chen, Z.; Pan, J.; Zhang, B.; Xu, Y.; Chen, C.R. Networked control strategy of dual linear switched reluctance motors based time delay tracking system. ISA Trans. 2021, 129, 605–615. [Google Scholar] [CrossRef] [PubMed]
  46. Liu, X.; Liu, W.; Li, Y. Finite-time H∞ control of stochastic time-delay Markovian jump systems. J. Shandong Univ. Sci. Technol. (Nat. Sci.) 2022, 41, 75–84. [Google Scholar] [CrossRef]
  47. Yan, Z.; Song, Y.; Park, J. Finite-time stability and stabilization for stochastic Markov jump systems with mode-dependent time delays. ISA Trans. 2017, 68, 141–149. [Google Scholar] [CrossRef]
  48. Liu, L.; Zhang, X.; Zhao, X.; Yang, B. Stochastic finite-time stabilization for discrete-time positive Markov jump time-delay systems. J. Frankl. Inst. 2022, 359, 84–103. [Google Scholar] [CrossRef]
  49. Wang, G.; Li, Z.; Zhang, Q.; Yang, C. Robust finite-time stability and stabilization of uncertain Markovian jump systems with time-varying delay. Appl. Math. Comput. 2017, 293, 377–393. [Google Scholar] [CrossRef]
  50. Liu, X.; Li, W.; Wang, J.; Li, Y. Robust finite-time stability for uncertain discrete-time stochastic nonlinear systems with time-varying delay. Entropy 2022, 24, 828. [Google Scholar] [CrossRef]
  51. Tian, G. Finite-time H∞ control for stochastic Markovian jump systems with time-varying delay and generally uncertain transition rates. Int. J. Syst. Sci. 2021, 52, 1–14. [Google Scholar] [CrossRef]
  52. Oksendal, B. Stochastic Differential Equations: An Introduction with Applications, 6th ed.; Springer: New York, NY, USA, 2006. [Google Scholar] [CrossRef]
  53. Bellman, R.; Cooke, K. Differential-Difference Equations; Academic Press: New York, NY, USA, 1963. [Google Scholar]
  54. Ouellette, D. Schur Complements and Statistics. Linear Algebra Appl. 1981, 36, 187–295. [Google Scholar] [CrossRef] [Green Version]
  55. Vrabel, R. On local asymptotic stabilization of the nonlinear systems with time-varying perturbations by state-feedback control. Int. J. Gen. Syst. 2018, 48, 80–89. [Google Scholar] [CrossRef] [Green Version]
  56. Tenreiro Machado, J. Fractional derivatives and negative probabilities. Commun. Nonlinear Sci. Numer. Simul. 2019, 79, 104913. [Google Scholar] [CrossRef]
Figure 1. The curve of c_2 for η ∈ [0, 1.90].
Figure 2. The curve of γ for η ∈ [0, 1.90].
Figure 3. The state response of system (23).
Figure 4. The evolution of E[x^T(t)Rx(t)] for system (23).
Figure 5. The relationship between δ and c_2.
Figure 6. The state response of system (39).
Figure 7. The evolution of E[x^T(t)Rx(t)] for system (39).
Figure 8. The state response of system (44).
Figure 9. The evolution of E[x^T(t)Rx(t)] for system (44).