Abstract
This paper deals with a class of dynamical systems obtained by interconnecting linear systems with static set-valued relations. We first show that such an interconnection can be described by a differential inclusion with a maximal monotone set-valued mapping when the underlying linear system is passive and the static relation is maximal monotone. Based on the classical results on such differential inclusions, we conclude that these interconnections are well-posed in the sense of existence and uniqueness of solutions. Finally, we investigate conditions which guarantee well-posedness but are weaker than passivity.
1 Introduction
It is a true pleasure for us to contribute an article to this special issue in honor of Jong-Shi Pang on the occasion of his 60th birthday. In the last decade, we had the privilege to develop a fruitful research collaboration with Jong-Shi on the so-called linear complementarity systems, combining notions/tools from systems theory and mathematical programming. This paper builds upon and expands further some of the ideas that came about from our collaboration with Jong-Shi.
Variational inequalities were introduced by Stampacchia in 1964 [1] as a tool in the study of elliptic partial differential equations, and have since been recognized as instrumental in a large class of optimization and equilibrium problems. Applications range from elastoplasticity to traffic and from electrical networks to mathematical finance; see for instance [2, 3]. The role of maximal monotonicity in the context of variational inequalities, as a sufficient condition for well-behavedness, can be compared to the role of convexity in optimization problems. Maximal monotone mappings were introduced in 1961 by Minty [4], who had already earlier applied the notion of monotone relations in an abstract formulation for electrical networks of nonlinear resistors [5]. Extensions to dynamic problems were undertaken in the same decade; intimate connections between semigroups of nonlinear contractions and maximal monotone mappings were established by Crandall and Pazy [6] and further developed by Brézis [7].
The development of the theory of semigroups of nonlinear contractions took place in the classical context of dynamics given by a closed system of (partial) differential equations. Engineers have long appreciated the power of open (input-output) dynamical systems as a device for modeling as well as for analysis. It comes naturally in many applications in the engineering sciences, as well as in biology and economics, to look at a dynamical system as a composite of smaller systems which are connected by the specification of relations between certain variables associated to the subsystems. These variables may be referred to as “inputs” and “outputs”, or more generally as “connecting variables” since the suggestion of unidirectionality that comes with the input/output terminology is not always appropriate. Systems equipped with connecting variables in this sense may be simply referred to as “open dynamical systems”. Early contributions were made in the 1930’s in the field of electrical engineering by among others Nyquist and Bode, and the field has received intensive study ever since the pioneering work of Kalman around 1960 and the associated successes in the Apollo space program and in many other applications.
Within the class of open dynamical systems, linear time-invariant systems play a special role as a prime example and as a first breeding ground of ideas that are later developed in wider contexts. More or less similarly, linear complementarity problems [8] take a special position among variational inequalities. Dynamical systems that arise as interconnections of linear time-invariant systems and linear complementarity problems came under investigation in the 1990’s under the name “linear complementarity systems” [9, 10]. Part of the motivation came from the fact that these systems can be looked at as a particular class of systems with mixed continuous and discrete state variables, also called “multimodal systems” or “hybrid systems”. More generally, differential variational inequalities were studied by Pang and Stewart [11]. Linear time-invariant systems together with static relations described by set-valued mappings have been used extensively. An incomplete inventory includes electrical networks with switching elements as in power converters [12–16], linear relay systems [17, 18], piecewise linear systems [19], and projected dynamical systems [20, 21]; see also [22–24] for further examples and [25, 26] for numerical analysis of maximal monotone differential inclusions.
The history of linear time-invariant systems connected to static (nonlinear) relations in fact goes back a long way. This way of describing a dynamical system has been used intensively as a tool in stability analysis within the context of so-called Lur’e systems; see [27] for a survey. The notion of passivity (also known as dissipativity) plays an important role in this theory. The term is used here as a description of a characteristic of an open dynamical system, and is motivated by the notion of stored energy in electrical networks and in many other applications in physics. The term “dissipativity” is used as well in the context of maximal monotone mappings; in fact, in their paper cited above [6], Crandall and Pazy use the term “dissipative set” in place of “maximal monotone mapping”. This already indicates that there are strong conceptual relations between the notions of passivity and maximal monotonicity. Indeed, passive complementarity systems present themselves as a natural class of dynamical systems [28].
In this paper, our goal is to establish the well-posedness (in the sense of existence and uniqueness of solutions) for systems that arise as interconnections of passive linear time-invariant systems and maximal monotone mappings. Our proof strategy relies on a reduction to the classical case of a closed dynamical system. To achieve this, we present a new result in the spirit of preservation of maximal monotonicity under certain operations. Such results are known to be often nontrivial; even the question whether the sum of two maximal monotone mappings is again maximal monotone does not have a straightforward answer (cf. [29, Section 12.F]). Moreover we provide a “pole-shifting” technique, which is analogous to a well-known method in the classical theory, to extend the results to a larger class of systems. The well-posedness of interconnections of linear passive systems with maximal monotone mappings has been studied before by Brogliato [30]. In the cited paper, well-posedness is proved under some additional conditions, which were later partially removed in [31, 32]. Here we obtain the result without imposing additional conditions.
The paper is organized as follows. In Sect. 2, we quickly review tools from convex analysis and systems theory that will be extensively employed in the paper. The class of systems the paper deals with will be introduced in Sect. 3. This will be followed by the main results in Sect. 4. Finally, the paper closes with the conclusions in Sect. 5.
2 Preliminaries
The following notational conventions will be in force throughout the paper. We denote the set of real numbers by \(\mathbb R\), nonnegative real numbers by \(\mathbb R_+\), n-vectors of real numbers by \(\mathbb R^n\), and \(n\times m\) real-valued matrices by \(\mathbb R^{n\times m}\). The sets of locally absolutely continuous, locally integrable, and locally square integrable functions from \(\mathbb R_+\) to \(\mathbb R^n\) are denoted, respectively, by \(AC_{\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\), \(L_{1,\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\), and \(L_{2,\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\).
To denote the scalar product of two vectors x, \(y\in \mathbb R^n\), we sometimes use the notation \(\langle x , y \rangle :=x^Ty\) where \(x^T\) denotes the transpose of x. The Euclidean norm of a vector x is denoted by \(\Vert x \Vert :=(x^Tx)^\frac{1}{2}\). For a subspace \(\mathcal {W}\) of \(\mathbb R^n\), \(\mathcal {W}^\perp \) denotes the orthogonal subspace, that is, \(\{y\in \mathbb R^n\mid \langle x , y \rangle =0\text { for all }x\in \mathcal {W}\}\).
We say that a (not necessarily symmetric) matrix \(M\in \mathbb R^{n\times n}\) is positive semi-definite if \(x^TMx\geqslant 0\) for all \(x\in \mathbb R^n\). We sometimes write \(M\geqslant 0\) meaning that M is positive semi-definite. Also, we say that M is positive definite if it is positive semi-definite and \(x^TMx=0\) implies that \(x=0\).
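Since the quadratic form of a matrix depends only on its symmetric part, positive semi-definiteness of a not necessarily symmetric M can be checked through the eigenvalues of \(\frac{1}{2}(M+M^T)\). A minimal numpy sketch (our own illustration, not part of the paper):

```python
import numpy as np

def is_positive_semidefinite(M, tol=1e-10):
    # x^T M x = x^T ((M + M^T)/2) x, so it suffices to check the symmetric part
    return bool(np.all(np.linalg.eigvalsh(0.5 * (M + M.T)) >= -tol))

# a skew-symmetric matrix is positive semi-definite in this (nonsymmetric) sense
print(is_positive_semidefinite(np.array([[0.0, 1.0], [-1.0, 0.0]])))  # True
```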
2.1 Convex sets
To a large extent, we follow the notation of the book [29] in the context of convex analysis. We quickly recall concepts/notation which are often employed throughout the paper.
Let \(S\subseteq \mathbb R^n\) be a set. We denote its closure, interior, and relative interior by \({{\mathrm{cl}}}(S)\), \({{\mathrm{int}}}(S)\), \({{\mathrm{rint}}}(S)\), respectively. Its horizon cone \(S_{\infty }\) is defined by \(S_{\infty }:=\{x\mid \exists \,x^\nu \in S,\,\,\lambda ^\nu \downarrow 0\quad \text {such that}\quad \lambda ^\nu x^\nu \rightarrow x\}\). When S is convex, \(N_S(x)\) denotes the normal cone to S at x. For a linear map \(L:\mathbb R^m\rightarrow \mathbb R^n\), we denote its kernel and image by \(\ker L\) and \({{\mathrm{im}}}L\), respectively. By \(L^{-1}(S)\), we denote the inverse image of the set S under L.
For the sake of completeness, we collect some well-known facts on convex sets in the following proposition.
Proposition 1
Let \(X\subseteq \mathbb R^n\) be a convex set. The following statements hold:
-
1.
If X is nonempty then
-
(a)
\({{\mathrm{rint}}}(X)\) is nonempty and convex,
-
(b)
\({{\mathrm{cl}}}({{\mathrm{rint}}}(X))={{\mathrm{cl}}}(X)\) and \({{\mathrm{rint}}}({{\mathrm{cl}}}(X))={{\mathrm{rint}}}(X)\),
-
(c)
\(X_{\infty }\) is a closed convex cone,
-
(d)
\(({{\mathrm{cl}}}(X))_{\infty }=X_{\infty }\).
-
2.
Let \(L:\mathbb R^m\rightarrow \mathbb R^n\) be a linear map. Then,
-
(a)
If \({{\mathrm{rint}}}(X)\,\cap \,{{\mathrm{im}}}(L)\not =\varnothing \) then \(L^{-1}({{\mathrm{rint}}}(X))={{\mathrm{rint}}}(L^{-1}(X))\) and \(L^{-1}({{\mathrm{cl}}}(X))={{\mathrm{cl}}}(L^{-1}(X))\).
-
(b)
\(L(X_\infty )\subseteq (LX)_\infty \) and \(L(X_\infty )=(LX)_\infty \) whenever X is closed and \(\ker L\cap X_\infty =\{0\}\).
-
(c)
If X is closed with \(L^{-1}(X)\not =\varnothing \) then \(N_{L^{-1}(X)}(x)=L^TN_{X}(Lx)\) for all \(x\in L^{-1}(X)\).
2.2 Maximal monotone set-valued mappings
Let \(F:\mathbb R^n\rightrightarrows \mathbb R^n\) be a set-valued mapping, that is \(F(x)\subseteq \mathbb R^n\) for each \(x\in \mathbb R^n\). We define its domain, image, and graph, respectively, as follows:
$$\begin{aligned} {{\mathrm{dom}}}(F)&=\{x\mid F(x)\ne \varnothing \},\\ {{\mathrm{im}}}(F)&=\{y\mid y\in F(x)\text { for some }x\in \mathbb R^n\},\\ {{\mathrm{graph}}}(F)&=\{(x,y)\mid y\in F(x)\}. \end{aligned}$$
The inverse mapping \(F^{-1}:\mathbb R^n\rightrightarrows \mathbb R^n\) is defined by \(F^{-1}(y)=\{x\mid y\in F(x)\}\).
Throughout the paper, we are interested in the so-called maximal monotone set-valued mappings. A set-valued mapping \(F:\mathbb R^m\rightrightarrows \mathbb R^m\) is said to be monotone if
$$\begin{aligned} \langle x_1-x_2 , y_1-y_2 \rangle \geqslant 0 \end{aligned}$$ (1)
for all \((x_i,y_i)\in {{\mathrm{graph}}}(F)\), \(i=1,2\). It is said to be maximal monotone if no enlargement of its graph is possible in \(\mathbb R^m\times \mathbb R^m\) without destroying monotonicity. We refer to [7] and [29] for detailed treatment of maximal monotone mappings.
A particular class of maximal monotone mappings is formed by the subgradient mappings associated with (possibly discontinuous) extended-real valued convex functions. Indeed, it is well-known that the subgradient mapping of a proper, lower semicontinuous convex function is maximal monotone [29, Thm. 12.17]. When \(m=1\), every maximal monotone mapping is such a subgradient mapping [29, Ex. 12.26]. However, not every maximal monotone mapping corresponds to a subgradient mapping in higher dimensions.
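For instance (a standard one-dimensional illustration, not taken from the paper), the relay characteristic defined by \(F(x)=\{-1\}\) for \(x<0\), \(F(0)=[-1,1]\), and \(F(x)=\{1\}\) for \(x>0\) is the subgradient mapping of the convex function \(x\mapsto |x|\); it is monotone because its graph is nondecreasing, and it is maximal because the vertical segment at \(x=0\) leaves no room to enlarge the graph without destroying monotonicity.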
Typically, verifying monotonicity is much easier than verifying maximal monotonicity. Among various characterizations of maximal monotonicity (e.g. Minty’s classical theorem [29, Thm. 12.12]), the following will be in use later.
Proposition 2
([33]) A set-valued mapping \(F:\mathbb R^m\rightrightarrows \mathbb R^m\) is maximal monotone if, and only if, it satisfies the following conditions:
-
1.
F is monotone,
-
2.
there exists a convex set \(S_F\) such that \(S_F\subseteq {{\mathrm{dom}}}(F)\subseteq {{\mathrm{cl}}}(S_F)\),
-
3.
\(F(\xi )\) is convex for all \(\xi \in {{\mathrm{dom}}}(F)\),
-
4.
\({{\mathrm{cl}}}({{\mathrm{dom}}}(F))\) is convex and \((F(\xi ))_\infty =N_{{{\mathrm{cl}}}({{\mathrm{dom}}}(F))}(\xi )\) for all \(\xi \in {{\mathrm{dom}}}(F)\),
-
5.
\({{\mathrm{graph}}}(F)\) is closed.
2.3 Differential inclusions
Differential inclusions will play a major role in the rest of the paper. Consider a differential inclusion of the form
$$\begin{aligned} \dot{x}(t)\in -F(x(t))+u(t) \end{aligned}$$ (2)
where x, \(u\in \mathbb R^n\) and \(F:\mathbb R^n\rightrightarrows \mathbb R^n\) is a set-valued mapping. We say that a function \(x\in AC_{\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\) is a solution of (2) for the initial condition \(x_0\) and a function \(u\in L_{1,\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\) if \(x(0)=x_0\) and (2) is satisfied for almost all \(t\geqslant 0\).
In particular, we are interested in differential inclusions with maximal monotone set-valued mappings. The following theorem summarizes the classical existence and uniqueness results for the solutions of such differential inclusions.
Theorem 1
Consider the differential inclusion
$$\begin{aligned} \dot{x}(t)\in \mu x(t)-F(x(t))+u(t) \end{aligned}$$ (3)
where x, \(u\in \mathbb R^n\) and \(F:\mathbb R^n\rightrightarrows \mathbb R^n\) is a maximal monotone set-valued mapping. For each \(\mu \geqslant 0\), there exists a unique solution of the differential inclusion (3) for each initial condition \(x_0\in {{\mathrm{cl}}}({{\mathrm{dom}}}(F))\) and each locally integrable function u.
Proof
If \({{\mathrm{dom}}}(F)=\varnothing \), there is nothing to prove. Suppose that \({{\mathrm{dom}}}(F)\ne \varnothing \). If \({{\mathrm{int}}}({{\mathrm{dom}}}(F))\ne \varnothing \), the assertion follows from [7, Thm. 3.4, Prop. 3.8 and Thm. 3.17]. In case \({{\mathrm{int}}}({{\mathrm{dom}}}(F))=\varnothing \), we employ a dimension-reduction argument inspired by [29, proof of Thm. 12.41]. Let X be the affine hull of \({{\mathrm{dom}}}(F)\). Since X is an affine set, there exist a vector \(\xi \in \mathbb R^n\) and a subspace \(\mathcal {W}\subseteq \mathbb R^n\) such that \(X=\xi +\mathcal {W}\). Let \(T_1\in \mathbb R^{n\times n_1}\) and \(T_2\in \mathbb R^{n\times n_2}\) be matrices such that their columns form bases for \(\mathcal {W}\) and \(\mathcal {W}^\perp \), respectively. One can choose these matrices in such a way that the matrix \(T=\begin{bmatrix}T_1&T_2\end{bmatrix}\) is an orthogonal matrix, that is \(T^TT=I\). Define \(\hat{F}(\hat{x}):=T^TF(T\hat{x}+\xi )\) for all \(\hat{x}\in \mathbb R^n\). Consider the differential inclusion
$$\begin{aligned} \dot{\hat{x}}(t)\in \mu \hat{x}(t)-\hat{F}(\hat{x}(t))+\hat{u}(t). \end{aligned}$$ (4)
Note that x is a solution of (3) for the initial condition \(x_0\) and the function u if and only if \(\hat{x}(t):=T^T\big (x(t)-\xi \big )\) is a solution of (4) for the initial condition \(T^T(x_0-\xi )\) and function \(\hat{u}(t):=T^T\big (u(t)+\mu \xi \big )\). Therefore, it suffices to prove the claim for the differential inclusion (4). Since \({{\mathrm{dom}}}(F)\ne \varnothing \), statement 2 of Proposition 2 implies that \({{\mathrm{rint}}}({{\mathrm{dom}}}(F))\ne \varnothing \). Then, it follows from [29, Thm. 12.43] that \(\hat{F}\) is maximal monotone. Note that \({{\mathrm{dom}}}(\hat{F})=T^T\big ({{\mathrm{dom}}}(F)-\xi \big )\). Therefore, we have
It follows from Proposition 2 that
for all \(x\in {{\mathrm{cl}}}({{\mathrm{dom}}}(\hat{F}))\). This implies that
for all \(x\in {{\mathrm{dom}}}(\hat{F})\). Let \(\hat{x}\) be partitioned accordingly as \(\hat{x}={{\mathrm{col}}}(\hat{x}_1,\hat{x}_2)\). It follows from (5) that \(\hat{x}\in {{\mathrm{dom}}}(\hat{F})\) only if \(\hat{x}_2=0\). Define
Due to (5), there exists \(\hat{\xi }_1\) such that \({{\mathrm{col}}}(\hat{\xi }_1,0)\in {{\mathrm{rint}}}({{\mathrm{dom}}}(\hat{F}))\). Then, it follows from [29, Exercise 12.46] that \(\hat{F}_1\) is maximal monotone. Due to (6), we have
This means that \({{\mathrm{dom}}}(\hat{F})={{\mathrm{dom}}}(\hat{F}_1)\times \{0\}\). Note that by construction \({{\mathrm{int}}}({{\mathrm{dom}}}(\hat{F}_1))\) is non-empty. Let \(\hat{u}\) be partitioned accordingly as \(\hat{u}={{\mathrm{col}}}(\hat{u}_1,\hat{u}_2)\).
Then, the differential inclusion
$$\begin{aligned} \dot{\hat{x}}_1(t)\in \mu \hat{x}_1(t)-\hat{F}_1(\hat{x}_1(t))+\hat{u}_1(t) \end{aligned}$$
admits a unique solution \(\hat{x}_1\) for each initial condition \(\hat{x}_{10}\in {{\mathrm{cl}}}({{\mathrm{dom}}}(\hat{F}_1))\) and locally integrable \(\hat{u}_1\). Together with (7), this implies that \({{\mathrm{col}}}(\hat{x}_1(t),0)\) is a solution of (4). In other words, for each \(\mu \geqslant 0\) there exists a solution of the differential inclusion (4) for each \(\hat{x}_0\in {{\mathrm{cl}}}({{\mathrm{dom}}}(\hat{F}))\) and each locally integrable function \(\hat{u}\). Uniqueness readily follows from maximal monotonicity of \(\hat{F}\). \(\square \)
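To make the role of maximal monotonicity concrete, the references [25, 26] analyze the implicit (backward) Euler scheme for such inclusions: with \(\mu =0\), each time step amounts to evaluating the resolvent \((I+hF)^{-1}\), which is single-valued and nonexpansive precisely because F is maximal monotone. The following sketch is our own illustration (the scalar example \(F=\partial |\cdot |\) and all function names are ours, not the paper's):

```python
import numpy as np

def resolvent_abs(v, h):
    """Resolvent (I + h*F)^{-1}(v) for F = subdifferential of |x| (soft threshold)."""
    return np.sign(v) * max(abs(v) - h, 0.0)

def implicit_euler(x0, u, h, steps, resolvent):
    """Backward Euler for dx/dt in -F(x) + u(t):
    x_{k+1} = (I + h F)^{-1}(x_k + h u(t_k)), well defined since F is maximal monotone."""
    xs = [x0]
    for k in range(steps):
        xs.append(resolvent(xs[-1] + h * u(k * h), h))
    return np.array(xs)

# example: relay system dx/dt in -sign(x) + 0.5*sin(t) with x(0) = 1
trajectory = implicit_euler(1.0, lambda t: 0.5 * np.sin(t), 1e-2, 1000, resolvent_abs)
```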
2.4 Linear passive systems
A linear system \(\varSigma (A,B,C,D)\)
$$\begin{aligned} \dot{x}(t)&=Ax(t)+Bz(t)\\ w(t)&=Cx(t)+Dz(t) \end{aligned}$$ (8)
is said to be passive if there exists a nonnegative-valued storage function \(V:\mathbb R^n\rightarrow \mathbb R_+\) such that the dissipation inequality
$$\begin{aligned} V(x(t_2))-V(x(t_1))\leqslant \int _{t_1}^{t_2}\langle z(t) , w(t) \rangle \,dt \end{aligned}$$ (9)
is satisfied for all \(0\leqslant t_1\leqslant t_2\) and for all trajectories \((z,x,w)\in L_{2,\mathrm {loc}}(\mathbb R_+,\mathbb R^m)\times AC_{\mathrm {loc}}(\mathbb R_+,\mathbb R^n)\times L_{2,\mathrm {loc}}(\mathbb R_+,\mathbb R^m)\) of the system (8).
The classical Kalman-Yakubovich-Popov lemma states that the system (8) is passive if, and only if, the linear matrix inequalities
$$\begin{aligned} K=K^T\geqslant 0,\qquad \begin{bmatrix}-A^TK-KA&\quad KB-C^T\\ B^TK-C&\quad D+D^T\end{bmatrix}\geqslant 0 \end{aligned}$$ (10)
admit a solution K. Moreover, \(V(x)=\frac{1}{2}x^TKx\) defines a storage function in case K is a solution of the linear matrix inequalities (10).
In the following proposition, we summarize some of the consequences of passivity that will be used later. To formulate these consequences, we need to introduce some notation. For a subspace \(\mathcal {W}\subseteq \mathbb R^n\) and a linear mapping \(A\in \mathbb R^{n\times n}\), we denote the largest A-invariant subspace that is contained in \(\mathcal {W}\) by \(\langle \mathcal {W} \mid A \rangle \). It is well-known (see e.g. [34]) that \(\langle \mathcal {W} \mid A \rangle =\mathcal {W}\cap A^{-1}\mathcal {W}\cap \cdots \cap A^{-n+1}\mathcal {W}\).
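For the particular case \(\mathcal {W}=\ker C\), the intersection formula above is simply the kernel of the observability matrix of the pair (C, A), which gives a direct way to compute \(\langle \ker C \mid A \rangle \) numerically. A small numpy/scipy sketch (our own illustration):

```python
import numpy as np
from scipy.linalg import null_space

def largest_invariant_in_ker(C, A):
    """<ker C | A> = ker C ∩ A^{-1} ker C ∩ ... ∩ A^{-(n-1)} ker C,
    i.e. the kernel of the stacked matrix [C; CA; ...; CA^{n-1}]."""
    n = A.shape[0]
    stacked = np.vstack([C @ np.linalg.matrix_power(A, k) for k in range(n)])
    return null_space(stacked)   # columns: orthonormal basis of <ker C | A>
```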
Proposition 3
If \(\varSigma (A,B,C,D)\) is passive with the storage function \(x\mapsto \frac{1}{2}x^TKx\) then the following statements hold:
-
1.
D is positive semi-definite,
-
2.
\((KB-C^T)\ker (D+D^T)=\{0\}\),
-
3.
\((B^TK-C)\ker (A^TK+KA)=\{0\}\),
-
4.
\(\ker K\) is A-invariant, i.e. \(x\in \ker K\) implies that \(Ax\in \ker K\),
-
5.
\(\ker K\subseteq \langle \ker C \mid A \rangle \).
3 Linear systems coupled to relations
Consider the linear system
where \(x\in \mathbb R^n\) is the state, \(u\in \mathbb R^{n}\) is the input, and \((z,w)\in \mathbb R^{m}\times \mathbb R^{m}\) are the external variables that satisfy
for some set-valued map \(M:\mathbb R^m\rightrightarrows \mathbb R^m\).
By solving z from the relations (11b) and (11c), we obtain the differential inclusion
where
and
In the sequel, we will be interested in the existence and uniqueness of solutions for (12) when the linear system \(\varSigma (A,B,C,D)\) is a passive system and M is maximal monotone. First, two examples of systems of the form (11) are in order.
Example 1
Consider the diode bridge circuit depicted in Fig. 1. This circuit consists of two linear resistors with resistances \(R_1>0\) and \(R_2>0\), one linear capacitor with capacitance \(C>0\), one linear inductor with inductance \(L>0\), one voltage source u, and four ideal diodes \(D_i\) with \(i=1,2,3,4\). One can derive the governing circuit equations in the form of (11) as follows:
Here \(x_1\) is the current through the inductor, \(x_2\) is the voltage across the capacitor and \((v_{D_i},i_{D_i})\) is the voltage-current pair associated to the diode \(D_i\). It can be verified that the linear system (15a)–(15b) is passive with the storage function \(x\mapsto \frac{1}{2}x^TKx\) where
and the set \(\{(z,w)\in \mathbb R^2\mid 0\geqslant z,\, w\geqslant 0,\,zw=0\}^4\) is the graph of the maximal monotone set-valued mapping M defined as
where the inequalities must be understood componentwise.
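Componentwise, the ideal-diode characteristic above can be recognized as a normal-cone mapping (our own rephrasing): the set \(\{(z,w)\in \mathbb R^2\mid 0\geqslant z,\, w\geqslant 0,\,zw=0\}\) is exactly the graph of \(z\mapsto N_{\mathbb R_-}(z)\), which assigns \(\{0\}\) to \(z<0\), \([0,\infty )\) to \(z=0\), and \(\varnothing \) to \(z>0\). This is the subdifferential of the indicator function of \(\mathbb R_-\), a proper, lower semicontinuous convex function, so each component of M, and hence M itself, is maximal monotone.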
Remark 1
As noted above, subdifferentials of convex functions generate maximal monotone operators, but not all maximal monotone operators are of this form. In fact it was shown by Rockafellar [35, Thm. B] that a maximal monotone operator is the subdifferential of a proper convex lower semicontinuous mapping if and only if it satisfies the property of cyclic monotonicity. An example of a mapping that is maximal monotone but not cyclically monotone is the linear mapping M from \({\mathbb R}^2\) to \({\mathbb R}^2\) defined by
(cf. [36, Example 2.23]). The matrix above defines the voltage-current relationship of a gyrator [37]. Interconnections of linear passive electrical networks with gyrators, rather than with diodes as in the example above, therefore provide examples of linear passive systems coupled to maximal monotone mappings that are not subdifferentials.
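A quick numerical check of both properties can be made for a skew-symmetric matrix. The sketch below is our own illustration, and the matrix J is an assumed representative, since the precise sign convention of the gyrator relation is not repeated here.

```python
import numpy as np

J = np.array([[0.0, -1.0],
              [1.0,  0.0]])          # skew-symmetric, gyrator-like

# monotonicity: <x1 - x2, J(x1 - x2)> = 0 >= 0 for all x1, x2 (skew-symmetry);
# maximality follows since x -> Jx is single-valued and continuous
x = np.random.randn(2)
print(x @ (J @ x))                    # 0 up to round-off

# cyclic monotonicity would require sum_i <J x_i, x_{i+1} - x_i> <= 0 over every
# cycle x_0, ..., x_N = x_0; the 3-cycle below gives 2 > 0, so x -> Jx is monotone
# but not cyclically monotone, hence not a subdifferential
cycle = [np.array([1.0, 0.0]), np.array([0.0, 1.0]), np.array([-1.0, 0.0])]
print(sum((J @ a) @ (b - a) for a, b in zip(cycle, cycle[1:] + cycle[:1])))  # 2.0
```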
Example 2
A simple deterministic queueing model with continuous flows may be constructed as follows. Consider n servers working in parallel for a single user. The cost of using server j is proportional to the queue length associated to this server; this quantity in turn is determined by the load that has been placed on the server previously and by the processing speed of the server, which we will here assume to be constant. Loads and queue lengths cannot be negative. The total load is distributed by the user among the servers according to the Wardrop principle, which means that no load is placed on servers when there are other servers which have lower cost. The total load is chosen by the user as a non-increasing function of the realized cost. Introduce the following notation:
- \(c_j\) : processing speed of j-th server
- \(x_j(t)\) : queue length of j-th server at time t
- \(v_j(t)\) : auxiliary variable relating to nonnegativity of queue lengths
- \(y_j(t)\) : auxiliary variable relating to nonnegativity of queue lengths
- \(e_j(t)\) : cost of j-th server at time t in excess of realized (i.e. minimal) cost
- \(k_j\) : positive proportionality constant linking queue length to cost
- \(\ell _j(t)\) : load placed on server j at time t
- s(t) : total load at time t
- a(t) : realized cost at time t
- \(f(\cdot )\) : constitutive relation linking realized cost to total load.
We can then write equations as follows:
The equations (16a) and (16e) together ensure that queue lengths are indeed always nonnegative; the Wardrop principle is expressed by (16f). The relations (16a)–(16d) above can be written in vector form as follows, with \(K := \text {diag}(k_1,\dots ,k_n)\):
The relations (16e)–(16g) constitute the negative of a maximal monotone set-valued mapping, while the linear input-output system given by (17) is passive (even conservative) with respect to the storage function \(x\mapsto \frac{1}{2}x^TKx\). The example can be generalized in several ways, for instance to situations with multiple users.
4 Main results
Maximal monotonicity of the set-valued mapping H as defined in (13) will play a key role in our development. The following theorem asserts that H is maximal monotone if the underlying linear system is passive and the set-valued mapping M is maximal monotone.
Theorem 2
Suppose that
-
i.
\(\varSigma (A,B,C,D)\) is passive with the storage function \(x\mapsto \frac{1}{2}x^Tx\),
-
ii.
M is maximal monotone, and
-
iii.
\({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \).
Then, the set-valued mapping H defined in (13) is maximal monotone.
Proof
The proof is based on the application of Proposition 2 to H.
-
1.
H is monotone:
Take \(x_1,x_2\in {{\mathrm{dom}}}(H)=C^{-1}({{\mathrm{im}}}(M+D))\ne \varnothing \) and let \(y_i\in H(x_i)\) for \(i=1,2\). Then,
$$\begin{aligned} \langle x_1-x_2 , y_1-y_2 \rangle =\langle x_1-x_2 , -A(x_1-x_2)+B(z_1-z_2) \rangle \end{aligned}$$ (18)
where \(z_i\in (M+D)^{-1}(Cx_i)\) for \(i=1,2\). Since \(\varSigma (A,B,C,D)\) is passive with the positive definite storage function \(x\mapsto \frac{1}{2}x^Tx\), we have
$$\begin{aligned} \begin{bmatrix}-A^T-A&\,\,\, B-C^T \\ B^T-C&\,\,\, D+D^T \end{bmatrix}\geqslant 0. \end{aligned}$$ (19)
This would imply
$$\begin{aligned} \begin{bmatrix}-A^T-A&\,\,\, -B+C^T \\ -B^T+C\,\,\,&D+D^T \end{bmatrix}= \begin{bmatrix}I&\quad 0\\0&\quad -I\end{bmatrix}\begin{bmatrix}-A^T-A&\,\,\, B-C^T \\ B^T-C&\,\,\, D+D^T \end{bmatrix}\begin{bmatrix}I&\quad 0\\0&\quad -I\end{bmatrix}\geqslant 0. \end{aligned}$$ (20)
Therefore, it follows from (18) that
$$\begin{aligned} \langle x_1-x_2 , y_1-y_2 \rangle \geqslant \langle z_1-z_2 , C(x_1-x_2)-D(z_1-z_2) \rangle . \end{aligned}$$ (21)
From \(z_i\in (M+D)^{-1}(Cx_i)\), we get \(Cx_i-Dz_i\in M(z_i)\). Since M is monotone, we have
$$\begin{aligned} \langle z_1-z_2 , C(x_1-x_2)-D(z_1-z_2) \rangle \geqslant 0. \end{aligned}$$ (22)
Then, it follows from (21) that H is monotone.
-
2.
there exists a convex set \(S_{H}\) such that \(S_{H}\subseteq {{\mathrm{dom}}}(H)\subseteq {{\mathrm{cl}}}(S_{H})\):
Let \(P=(M+D)^{-1}\). Since \(\varSigma (A,B,C,D)\) is passive, it follows from (10) that D is positive semi-definite and hence induces a maximal monotone single-valued mapping whose domain is the entire \(\mathbb R^m\). Then, [29, Cor. 12.44] implies that \(M+D\) is maximal monotone and [29, Ex. 12.8] implies that P is maximal monotone. Note that \({{\mathrm{dom}}}(P)={{\mathrm{im}}}(M+D)\). Due to Proposition 2, there exists a convex set \(S_P\) such that
$$\begin{aligned} S_P\subseteq {{\mathrm{dom}}}(P)\subseteq {{\mathrm{cl}}}(S_P). \end{aligned}$$ (23)
Moreover, it follows from [29, Thm. 12.41] that one can take \(S_P={{\mathrm{rint}}}({{\mathrm{cl}}}({{\mathrm{dom}}}(P)))\). Since \({{\mathrm{dom}}}(H)=C^{-1}({{\mathrm{dom}}}(P))\), it follows from (23) that
$$\begin{aligned} C^{-1}(S_P)\subseteq {{\mathrm{dom}}}(H)\subseteq C^{-1}({{\mathrm{cl}}}(S_P)). \end{aligned}$$ (24)
Define \(S_{H}=C^{-1}(S_P)\). Since \(S_P\) is convex, so is \(S_H\). It follows from statement 1 of Proposition 1 that \(S_P={{\mathrm{rint}}}({{\mathrm{dom}}}(P))\). As \({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \) and \({{\mathrm{rint}}}({{\mathrm{im}}}(M+D))={{\mathrm{rint}}}({{\mathrm{dom}}}(P))=S_P\), statement 2 of Proposition 1 implies that \(C^{-1}({{\mathrm{cl}}}(S_P))={{\mathrm{cl}}}(C^{-1}(S_P))={{\mathrm{cl}}}(S_H)\). Consequently, we get
$$\begin{aligned} S_{H}\subseteq {{\mathrm{dom}}}(H) \subseteq {{\mathrm{cl}}}(S_{H}). \end{aligned}$$ (25)
from (24).
-
3.
\(H(\xi )\) is convex for all \(\xi \in {{\mathrm{dom}}}(H)\):
Due to Proposition 2, \((M+D)^{-1}(C\xi )\) is a convex set for all \(\xi \in {{\mathrm{dom}}}(H)\). Hence, so is \(H(\xi )=-A\xi +B(M+D)^{-1}(C\xi )\).
-
4.
\({{\mathrm{cl}}}({{\mathrm{dom}}}(H))\) is convex and \((H(\xi ))_{\infty }=N_{{{\mathrm{cl}}}({{\mathrm{dom}}}(H))}(\xi )\) for all \(\xi \in {{\mathrm{dom}}}(H)\):
It follows from (25) that
$$\begin{aligned} {{\mathrm{cl}}}({{\mathrm{dom}}}(H))={{\mathrm{cl}}}(S_{H}). \end{aligned}$$ (26)
Since \(S_{H}\) is convex, so is \({{\mathrm{cl}}}({{\mathrm{dom}}}(H))\). We know from [29, Ex. 3.12] that
$$\begin{aligned} (H(\xi ))_\infty = (BP(C\xi ))_{\infty } \end{aligned}$$ (27)
for all \(\xi \in {{\mathrm{dom}}}(H)\). We claim that
$$\begin{aligned} (BP(C\xi ))_{\infty }=(C^TP(C\xi ))_{\infty } \end{aligned}$$ (28)
for all \(\xi \in {{\mathrm{dom}}}(H)\). To prove this, let \(\zeta _B\in (BP(C\xi ))_{\infty }\) for some \(\xi \in {{\mathrm{dom}}}(H)\). Then, there exist sequences \(\zeta _B^\nu \) and \(\lambda ^\nu \) such that
$$\begin{aligned}&\zeta _B^\nu \in BP(C\xi ) \end{aligned}$$ (29a)
$$\begin{aligned}&\lambda ^\nu \rightarrow 0 \text { as }\nu \rightarrow \infty \end{aligned}$$ (29b)
$$\begin{aligned}&\lambda ^\nu \zeta _B^\nu \rightarrow \zeta _B. \end{aligned}$$ (29c)
From (29a)–(29c), we know that for all \(\nu \)
$$\begin{aligned} \zeta _B^\nu =B\eta ^\nu \end{aligned}$$ (30)
for some \(\eta ^\nu \in P(C\xi )\). Thus, we get
$$\begin{aligned} C\xi \in P^{-1}(\eta ^\nu )=(M+D)\eta ^\nu . \end{aligned}$$ (31)
This means that
$$\begin{aligned} C\xi -D\eta ^\nu \in M(\eta ^\nu ). \end{aligned}$$ (32)
For each \(\nu _1\) and \(\nu _2\), one gets
$$\begin{aligned} (\eta ^{\nu _1}-\eta ^{\nu _2})^T[(C\xi -D\eta ^{\nu _1})-(C\xi -D\eta ^{\nu _2})]\geqslant 0 \end{aligned}$$ (33)
as M is maximal monotone. This would yield
$$\begin{aligned} (\eta ^{\nu _1}-\eta ^{\nu _2})^TD(\eta ^{\nu _1}-\eta ^{\nu _2})\leqslant 0. \end{aligned}$$ (34)
Since D is positive semi-definite due to passivity, we get \(\eta ^{\nu _1}-\eta ^{\nu _2}\in \ker (D+D^T)\), i.e.
$$\begin{aligned} (D+D^T)\eta ^{\nu _1}=(D+D^T)\eta ^{\nu _2}. \end{aligned}$$ (35)
Then, one can find \(\tilde{\eta }\) such that for all \(\nu \)
$$\begin{aligned} \eta ^\nu =\tilde{\eta }+\bar{\eta }^\nu \end{aligned}$$ (36)
for some \(\bar{\eta }^\nu \in \ker (D+D^T)\). Define
$$\begin{aligned} \zeta _C^\nu =C^T\eta ^\nu . \end{aligned}$$ (37)
Note that
$$\begin{aligned} \zeta _C^\nu \in C^TP(C\xi ) \end{aligned}$$ (38)
and
$$\begin{aligned} \zeta _C^\nu -\zeta _B^\nu =(C^T-B)\tilde{\eta } \end{aligned}$$ (39)
since \(Bv=C^Tv\) whenever \(v\in \ker (D+D^T)\) due to the second statement of Proposition 3 and \(K=I\). Clearly,
$$\begin{aligned} \lambda ^\nu \zeta _C^\nu \rightarrow \zeta _B. \end{aligned}$$ (40)
Consequently, \(\zeta _B\in (C^TP(C\xi ))_\infty \), i.e.,
$$\begin{aligned} (BP(C\xi ))_\infty \subseteq (C^TP(C\xi ))_\infty . \end{aligned}$$ (41)
The same arguments are still valid if we swap B and \(C^T\). Therefore, (28) holds. From (27), we get
$$\begin{aligned} (H(\xi ))_\infty = (C^TP(C\xi ))_{\infty }. \end{aligned}$$ (42)
Now, we have
$$\begin{aligned} (H(\xi ))_\infty= & {} (C^TP(C\xi ))_{\infty } \end{aligned}$$ (43)
$$\begin{aligned}\supseteq & {} C^T(P(C\xi ))_{\infty }\quad [\text {from 2b of Proposition 1}] \end{aligned}$$ (44)
$$\begin{aligned}= & {} C^TN_{{{\mathrm{cl}}}({{\mathrm{dom}}}(P))}(C\xi )\quad [\text {from 4 of Proposition 2}] \end{aligned}$$ (45)
$$\begin{aligned}= & {} N_{C^{-1}({{\mathrm{cl}}}({{\mathrm{dom}}}(P)))}(\xi )\quad [\text {from 2c of Proposition 1}]. \end{aligned}$$ (46)
To show the reverse inclusion, let \(\zeta \in (H(\xi ))_\infty \). From (42), we know that there exist sequences \(\zeta ^\nu \), \(\lambda ^\nu \) such that
$$\begin{aligned}&\displaystyle \zeta ^\nu \in C^TP(C\xi )\end{aligned}$$ (47a)
$$\begin{aligned}&\displaystyle \lambda ^\nu \rightarrow 0 \text { as } \nu \rightarrow \infty \end{aligned}$$ (47b)
$$\begin{aligned}&\displaystyle \lambda ^\nu \zeta ^\nu \rightarrow \zeta . \end{aligned}$$ (47c)
Let \(\eta ^\nu \) be such that \(\eta ^\nu \in P(C\xi )\) and \(\zeta ^\nu =C^T\eta ^\nu \). Also let \(\bar{\eta }\in P(C\bar{\xi })\) for some \(\bar{\xi }\in {{\mathrm{dom}}}(H)=C^{-1}({{\mathrm{cl}}}({{\mathrm{dom}}}(P)))\). From maximal monotonicity of P, we have
$$\begin{aligned} 0\leqslant \langle \bar{\eta }-\eta ^\nu , C(\bar{\xi }-\xi ) \rangle =\langle C^T(\bar{\eta }-\eta ^\nu ) , \bar{\xi }-\xi \rangle . \end{aligned}$$ (48)
By multiplying by \(\lambda ^\nu \) and taking the limit as \(\lambda ^\nu \) tends to zero, we get
$$\begin{aligned} \langle \zeta , \bar{\xi }-\xi \rangle \leqslant 0. \end{aligned}$$ (49)
Thus, \(\zeta \in N_{C^{-1}({{\mathrm{cl}}}({{\mathrm{dom}}}(P)))}(\xi )\), i.e.,
$$\begin{aligned} (H(\xi ))_\infty \subseteq N_{C^{-1}({{\mathrm{cl}}}({{\mathrm{dom}}}(P)))}(\xi ). \end{aligned}$$ (50)
-
5.
\({{\mathrm{graph}}}(H)\) is closed:
Let \((x^\nu ,y^\nu )\) be a convergent sequence in \({{\mathrm{graph}}}(H)\). Then, for each \(\nu \) there exists \(z^\nu \in (M+D)^{-1}(Cx^\nu )\) such that \(y^\nu =-Ax^\nu +Bz^\nu \). Let
$$\begin{aligned} \lim _{\nu \rightarrow \infty }\,\,(x^\nu ,-Ax^\nu +Bz^\nu )=(\xi ,-A\xi +B\zeta ). \end{aligned}$$ (51)
It is enough to show that \((\xi ,-A\xi +B\zeta )\in {{\mathrm{graph}}}(H)\). To do so, let \(\mathcal {W}\) be the smallest subspace that contains \({{\mathrm{im}}}(M+D)={{\mathrm{dom}}}((M+D)^{-1})\). It follows from maximal monotonicity of \((M+D)^{-1}\) that for each \(\nu \)
$$\begin{aligned} z+z^\nu \in (M+D)^{-1}(Cx^\nu ) \end{aligned}$$ (52)
holds for any \(z\in \mathcal {W}^\perp \). Now, let \(z^\nu =z^\nu _1+z^\nu _2\) where \(z^\nu _1\in \ker B\cap \mathcal {W}^\perp \) and
$$\begin{aligned} z^\nu _2\in (\ker B\cap \mathcal {W}^\perp )^\perp ={{\mathrm{im}}}B^T+\mathcal {W}. \end{aligned}$$ (53)
Note that
$$\begin{aligned} Bz^\nu =Bz^\nu _2. \end{aligned}$$ (54)
From (52), we have \(z^\nu _2\in (M+D)^{-1}(Cx^\nu )\). In view of (51) and (54), it is enough to show that the sequence \(z_2^\nu \) is bounded. On the contrary, suppose that \(z_2^\nu \) is unbounded. Without loss of generality, we can assume that the sequence \(\frac{z_2^\nu }{\Vert z_2^\nu \Vert }\) converges. Define
$$\begin{aligned} \zeta _\infty =\lim _{\nu \rightarrow \infty } \frac{z_2^\nu }{\Vert z_2^\nu \Vert }. \end{aligned}$$ (55)
It follows from (51) and (54) that
$$\begin{aligned} \lim _{\nu \rightarrow \infty }Bz_2^\nu =B\zeta . \end{aligned}$$ (56)
Thus, we get
$$\begin{aligned} \zeta _\infty \in \ker B. \end{aligned}$$ (57)
Due to passivity with \(K=I\) and monotonicity of \((M+D)^{-1}\), we have
$$\begin{aligned} \langle x^\nu -x , -A(x^\nu -x)+B(z_2^\nu -z) \rangle \geqslant \langle z_2^\nu -z , C(x^\nu -x)-D(z_2^\nu -z) \rangle \geqslant 0 \end{aligned}$$ (58)
for all \(z\in (M+D)^{-1}(Cx)\) with \(x\in {{\mathrm{dom}}}(H)\). By dividing by \(\Vert z_2^\nu \Vert ^2\) and taking the limit as \(\nu \) tends to infinity, we obtain
$$\begin{aligned} \langle \zeta _\infty , D\zeta _\infty \rangle \leqslant 0. \end{aligned}$$ (59)
Since D is positive semi-definite due to the first statement of Proposition 3, this results in
$$\begin{aligned} \zeta _\infty \in \ker (D+D^T). \end{aligned}$$ (60)
Then, it follows from (57), \(K=I\), and the second statement of Proposition 3 that
$$\begin{aligned} \zeta _\infty \in \ker C^T. \end{aligned}$$ (61)
Let \(\eta \in {{\mathrm{im}}}(M+D)\) and \(\zeta \in (M+D)^{-1}(\eta )\). From monotonicity of \((M+D)^{-1}\), we have
$$\begin{aligned} \left\langle \frac{z_2^\nu -\zeta }{\Vert z_2^\nu \Vert } , Cx^\nu -\eta \right\rangle \geqslant 0. \end{aligned}$$ (62)
Taking the limit as \(\nu \) tends to infinity, we obtain
$$\begin{aligned} \langle \zeta _\infty , Cx-\eta \rangle =\langle \zeta _\infty , -\eta \rangle \geqslant 0. \end{aligned}$$ (63)
This means that the hyperplane \({{\mathrm{span}}}(\{\zeta _\infty \})^\perp \) separates the sets \({{\mathrm{im}}}C\) and \({{\mathrm{im}}}(M+D)\). Since \({{\mathrm{im}}}C={{\mathrm{rint}}}({{\mathrm{im}}}C)\) and \({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \), it follows from [38, Thm. 11.3] that \({{\mathrm{im}}}C\) and \({{\mathrm{im}}}(M+D)\) cannot be properly separated. Therefore, both \({{\mathrm{im}}}C\) and \({{\mathrm{im}}}(M+D)\) must be contained in the hyperplane \({{\mathrm{span}}}(\{\zeta _\infty \})^\perp \). Since \(\mathcal {W}\) is the smallest subspace that contains \({{\mathrm{im}}}(M+D)\), we get \(\mathcal {W}\subseteq {{\mathrm{span}}}(\{\zeta _\infty \})^\perp \) which implies \(\zeta _\infty \in \mathcal {W}^\perp \). Together with (57), we get
$$\begin{aligned} \zeta _\infty \in \ker B\cap \mathcal {W}^\perp . \end{aligned}$$
In view of (53) and (55), this yields \(\zeta _\infty =0\). This, however, clearly contradicts (55), which implies \(\Vert \zeta _\infty \Vert =1\). Therefore, \(\Vert z_2^\nu \Vert \) must be bounded.
Then, it follows from Proposition 2 that H is maximal monotone. \(\square \)
Remark 2
It is well-known that maximal monotonicity is preserved under certain operations such as addition [29, Cor. 12.44] and piecewise affine transformations [29, Thm. 12.43]. None of these results immediately imply that the set-valued mapping H of the form (13) is maximal monotone when \(\varSigma (A,B,C,D)\) is passive and M is maximal monotone. As such, Theorem 2 can be considered as a particular result on maximal monotonicity preserving operations.
Well-posedness of systems of the form (11) and their variants has been addressed in several papers [30, 31, 39–41] for linear passive (or passive-like) systems and maximal monotone mappings. However, the relevant results in these papers require extra conditions on the linear system and/or the maximal monotone mapping. The following theorem establishes existence and uniqueness of solutions to the differential inclusion (12) when the linear system \(\varSigma (A,B,C,D)\) is passive and the set-valued map M is maximal monotone, without requiring any additional conditions.
Theorem 3
Suppose that
-
i.
\(\varSigma (A,B,C,D)\) is passive with the storage function \(x\mapsto \frac{1}{2}x^TKx\) where K is positive definite,
-
ii.
M is maximal monotone, and
-
iii.
\({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \).
Then, for each initial condition \(x_0\) such that \(Cx_0\in {{\mathrm{cl}}}({{\mathrm{im}}}(M+D))\) and locally integrable function u, the differential inclusion (12) admits a unique solution.
Proof
By hypothesis, \(\varSigma (A,B,C,D)\) is passive with a positive definite storage function \(x\mapsto \frac{1}{2}x^TKx\). By defining \(\tilde{x}=K^{1/2}x\), we can rewrite the differential inclusion (12) as
where
Clearly, \(x\mapsto K^{1/2}x\) is a bijection between the solutions of (12) and those of (64). Furthermore, it can be easily verified that \(\varSigma (\tilde{A},\tilde{B},\tilde{C},D)\) is passive with the storage function \(x\mapsto \frac{1}{2}x^Tx\). As such, we can assume, without loss of generality, that \(x\mapsto \frac{1}{2}x^Tx\) is a positive definite storage function for the system \(\varSigma (A,B,C,D)\).
Then, it follows from Theorem 2 that H is maximal monotone. Therefore, the claim follows from Theorem 1 with \(\mu =0\). \(\square \)
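The normalization step in the proof is easy to carry out numerically once a positive definite storage matrix K is available. A minimal sketch (our own code; scipy is assumed to be available):

```python
import numpy as np
from scipy.linalg import sqrtm

def normalize_storage(A, B, C, K):
    """Coordinate change x~ = K^{1/2} x: returns (A~, B~, C~) such that
    Sigma(A~, B~, C~, D) has the identity storage matrix whenever
    Sigma(A, B, C, D) is passive with storage (1/2) x^T K x and K > 0."""
    K_half = np.real(sqrtm(K))          # unique symmetric positive definite square root
    K_half_inv = np.linalg.inv(K_half)
    return K_half @ A @ K_half_inv, K_half @ B, C @ K_half_inv
```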
Remark 3
Theorem 3 recovers Lemma 1 of [30] as a special case: \(u=0\), \(D=0\), M is the subgradient of a convex lower semicontinuous function, (A, B, C) is a minimal triple, and \(\varSigma (A,B,C,D)\) has a strictly positive real transfer matrix (a stronger notion than passivity).
Remark 4
In order to apply Theorem 3 to Example 1, note that \(\varSigma (A,B,C,D)\) constitutes a passive system as discussed in the example. Clearly, M is maximal monotone. Finally, it follows from [8, Cor.3.8.10] that \({{\mathrm{im}}}(M+D)=\mathbb R_+\times \mathbb R\times \mathbb R\times \mathbb R_+\). As such, we have
Next, we present two extensions of Theorem 3. The first one deals with systems which are not passive themselves but can be made passive by shifting the eigenvalues of the matrix A.
Corollary 1
Suppose that
-
i.
\(\varSigma (A-\alpha I,B,C,D)\) is passive for some \(\alpha \geqslant 0\) with the storage function \(x\mapsto \frac{1}{2}x^TKx\) where K is positive definite,
-
ii.
M is maximal monotone, and
-
iii.
\({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \).
Then, the differential inclusion (12) admits a unique solution for each initial condition \(x_0\) such that \(Cx_0\in {{\mathrm{cl}}}({{\mathrm{im}}}(M+D))\) and locally integrable function u.
Proof
The proof readily follows from Theorems 2 and 1 with \(\mu =\alpha \). \(\square \)
Remark 5
In case D is positive semi-definite and there exists a positive definite matrix K such that \(KB=C^T\), one can always find a positive number \(\alpha \) such that \(\varSigma (A-\alpha I,B,C,D)\) is passive. As such, Theorem 2 of [31] can be recovered as a special case from Corollary 1.
The second extension deals with the case of positive semi-definite storage functions. To formulate this result, we need to introduce some nomenclature. For a maximal monotone set-valued mapping F, the element of minimal norm of F(x) will be denoted by \(F^o(x)\).
Corollary 2
Suppose that
-
i.
\(\varSigma (A-\alpha I,B,C,D)\) is passive for some \(\alpha \geqslant 0\),
-
ii.
M is maximal monotone,
-
iii.
\({{\mathrm{im}}}C\cap {{\mathrm{rint}}}({{\mathrm{im}}}(M+D))\ne \varnothing \), and
-
iv.
there exists a positive real number \(\beta \) such that
$$\begin{aligned} \Vert \big ((M+D)^{-1}\big )^o(w) \Vert \leqslant \beta (1+\Vert w \Vert ) \end{aligned}$$ (65)
for all \(w\in {{\mathrm{im}}}(M+D)\).
Then, the differential inclusion (12) admits a solution for each initial condition \(x_0\) such that \(Cx_0\in {{\mathrm{cl}}}({{\mathrm{im}}}(M+D))\) and locally integrable function u. Moreover, if x and \(\tilde{x}\) are two solutions for the same initial condition and locally integrable function u then \(Kx=K\tilde{x}\).
Proof
When K is positive definite, Corollary 1 readily implies the claim. Suppose that K is positive semi-definite but not positive definite. Then, one can change the coordinates in such a way that
Suppose that the matrices A, B, and C are given by
in accordance with the partition of K. Then, the linear matrix inequalities (10) imply that \(A_{12}=0\), \(C_2=0\), and \(\varSigma (A_{11}-\alpha I,B_1,C_1,D)\) is passive with positive definite storage function \(x_1\mapsto \frac{1}{2}x_1^Tx_1\). Note that the differential inclusion (12) is given by
in the new coordinates. Also note that
and \({{\mathrm{im}}}C={{\mathrm{im}}}C_1\) in the new coordinates. Then, it follows from Corollary 1 that the differential inclusion (66) admits a unique solution for each initial condition \(x_{10}\) and locally integrable function \(u_1\). Since \(x_1\) is locally absolutely continuous, it follows from (65) that the function \(t\mapsto \big ((M+D)^{-1}\big )^o(C_1x_1(t))\) is locally integrable. Hence, the differential inclusion (67) admits a solution for each initial condition \(x_{20}\) and locally integrable function \(u_2\). Therefore, we proved the existence of solutions as claimed. The rest follows from the uniqueness of \(x_1\). \(\square \)
In general, checking the existence of an \(\alpha \geqslant 0\) such that \(\varSigma (A-\alpha I,B,C,D)\) is passive amounts to checking the feasibility of the matrix inequalities
$$\begin{aligned} K=K^T\geqslant 0,\qquad \begin{bmatrix}-(A-\alpha I)^TK-K(A-\alpha I)&\quad KB-C^T\\ B^TK-C&\quad D+D^T\end{bmatrix}\geqslant 0 \end{aligned}$$ (68)
in the unknowns K and \(\alpha \geqslant 0\).
Note that these matrix inequalities do not constitute linear matrix inequalities and cannot be verified easily. However, the particular structure of these matrix inequalities leads to easily verifiable algebraic necessary and sufficient conditions for their feasibility. To present these conditions, we need to introduce some notation. For a matrix \(A\in \mathbb R^{n\times n}\) and two subspaces \(\mathcal {V}\), \(\mathcal {W}\subseteq \mathbb R^n\), we define
Subspaces satisfying the property above have been studied in geometric linear control theory under the name of conditioned invariant subspaces (see e.g. [34]). It is well-known that the set \(\mathscr {T}(A,\mathcal {V},\mathcal {W})\) is closed under subspace intersection. As such, there always exists a minimal element, say \(\mathcal {T}^*(A,\mathcal {V},\mathcal {W})\) such that
Moreover, one can devise a subspace algorithm (see e.g. [34]) which would return the minimal subspace in a finite number of steps for a given triple \((A,\mathcal {V},\mathcal {W})\).
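The defining property of \(\mathscr {T}(A,\mathcal {V},\mathcal {W})\) is not reproduced above, so the sketch below assumes the classical recursion from geometric control theory (see e.g. [34]): \(\mathcal {T}_0=\mathcal {W}\), \(\mathcal {T}_{k+1}=\mathcal {W}+A(\mathcal {T}_k\cap \mathcal {V})\), which is nondecreasing and stabilizes after at most n steps at the minimal element. All function names are ours.

```python
import numpy as np
from scipy.linalg import orth, null_space

def intersect(U, V):
    """Orthonormal basis of im(U) ∩ im(V) (subspaces given by basis columns)."""
    N = null_space(np.hstack([U, -V]))
    return orth(U @ N[:U.shape[1], :]) if N.shape[1] else U[:, :0]

def minimal_conditioned_invariant(A, V, W):
    """T_0 = W, T_{k+1} = W + A(T_k ∩ V); returns a basis of the limiting subspace."""
    V, W = orth(V), orth(W)
    T = W
    for _ in range(A.shape[0]):
        T_new = orth(np.hstack([W, A @ intersect(T, V)]))
        if T_new.shape[1] == T.shape[1]:
            break
        T = T_new
    return T
```

For the conditions of Theorem 4 one would call this routine with \(\mathcal {V}=\ker E^TC\) and \(\mathcal {W}={{\mathrm{im}}}BE\), represented by basis matrices.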
The following lemma on positive semi-definite solutions of matrix equations, taken partly from [42], will be needed in the proof of the theorem below.
Lemma 1
If the equation \(YK=X\), where Y and X are given matrices, has a symmetric and positive semi-definite solution, then the general form of such solutions is
$$\begin{aligned} K=X^T(XY^T)^-X+(I-Y^-Y)U(I-Y^-Y)^T \end{aligned}$$ (69)
where U is an arbitrary symmetric and positive semi-definite matrix, and \(Z^-\) denotes a generalized inverse of the matrix Z, i.e. \(ZZ^-Z=Z\). For the solution as given above, we have
$$\begin{aligned} \ker K=\ker X\cap \ker \big (U(I-Y^-Y)^T\big ). \end{aligned}$$ (70)
Proof
The first part of the lemma is given by [42, Thm. 2.2]. Now, let \(K=K^T \geqslant 0\) be as in (69). Under the conditions of the lemma we have \({{\mathrm{rank}}}XY^T = {{\mathrm{rank}}}X\) as noted in the proof of the cited theorem, and consequently \(\ker YX^T = \ker X^T\). It follows that the subspaces \({{\mathrm{im}}}X^T\) and \(\ker Y\) intersect trivially. This implies that
The generalized inverse \((XY^T)^-\) can be taken to be symmetric and positive semi-definite as noted in [42], which entails \(\ker X^T(XY^T)^-X = \ker (XY^T)^-X\). Moreover, since \({{\mathrm{rank}}}XY^T = {{\mathrm{rank}}}X\), any element of the column span of X can be written in the form \(XY^Tv\). Since \((XY^T)^-XY^Tv=0\) implies \(XY^Tv=0\), it follows that \(\ker (XY^T)^-X = \ker X\) and (70) is shown. \(\square \)
Now, we are in a position to provide necessary and sufficient conditions for the feasibility of the matrix inequalities (68).
Theorem 4
Let \(E\in \mathbb R^{m\times p}\) be a full column rank matrix such that \({{\mathrm{im}}}E=\ker (D+D^T)\). Then, the following statements are equivalent:
-
1.
There exists \(\alpha \geqslant 0\) such that \(\varSigma (A-\alpha I,B,C,D)\) is passive.
-
2.
The following conditions hold:
-
(a)
D is positive semi-definite,
-
(b)
\({{\mathrm{im}}}E^TCBE={{\mathrm{im}}}E^TC\),
-
(c)
\(E^TCBE\) is symmetric and positive semi-definite,
-
(d)
\(A\big (\ker E^TC\cap \mathcal {T}^*(A,\ker E^TC,{{\mathrm{im}}}BE)\big )\subseteq \ker E^TC\), and
-
(e)
\(\ker E^TC\cap \mathcal {T}^*(A,\ker E^TC,{{\mathrm{im}}}BE)\subseteq \ker C\).
Proof
1 \(\Rightarrow \) 2: The condition 2a readily follows from Proposition 3. Let K be a solution of the matrix inequalities (68). Proposition 3 implies that \(KBE=C^TE\). Then, [42, Thm. 2.2] implies that the conditions 2b and 2c hold and K must be of the form (69) where \(X=E^TC\), \(Y=E^TB^T\). Since \(\ker K\) is A-invariant due to Proposition 3, we have in view of the lemma above
Since \(\ker (I-Z^-Z)^T={{\mathrm{im}}}Z^T\) for any matrix Z and any generalized inverse \(Z^-\) of Z, we have \(\ker (I-Y^-Y)^T={{\mathrm{im}}}Y^T={{\mathrm{im}}}BE\) and hence
The subspace inclusions (72) and (73) imply that
Therefore, \(\mathcal {T}^*(A,\ker E^TC,{{\mathrm{im}}}BE)\subseteq \ker \big (U(I-Y^-Y)^T\big )\). Then, the condition 2d follows from (72). Since \(\ker K\subseteq \ker C\) due to Proposition 3, the condition 2e follows from (70).
2 \(\Rightarrow \) 1: We first prove that there exists a symmetric positive semi-definite matrix K such that
-
i.
\(KBE=C^TE\),
-
ii.
\(\ker K\) is A-invariant, and
-
iii.
\(\ker K\subseteq \ker C\).
Existence of a symmetric and positive semi-definite matrix K satisfying the condition (i) follows from [42, Thm. 2.2] together with the relations 2b and 2c. Moreover, [42, Thm. 2.2] implies that any such matrix K must be of the form (69). Since \({{\mathrm{im}}}BE\subseteq \mathcal {T}^*(A,\ker E^TC,{{\mathrm{im}}}BE)\) and \(\ker (I-Y^-Y)^T={{\mathrm{im}}}Y^T={{\mathrm{im}}}BE\), there exists a matrix N such that
Let \(U=N^TN\). Clearly, U is symmetric and positive semi-definite. Note that
Then, it follows from (70) that
On the one hand, we have
from the definition of \(\mathcal {T}^*(A,\ker E^TC,{{\mathrm{im}}}BE)\). On the other hand, we have
from the condition 2d. The last two inclusions imply that this choice of U and hence K satisfies the condition (ii) whereas the condition 2e readily implies that (iii) is satisfied as well. The last step of the proof is to show that there exists a real number \(\alpha \geqslant 0\) such that
To this end, we can assume, without loss of generality, that the matrices A, K, B, C, and \(D+D^T\) are of the forms
where \(A_{ij}\in \mathbb R^{n_i\times n_j}\), \(K_1\in \mathbb R^{n_1\times n_1}\), \(B_{ij}\in \mathbb R^{n_i\times m_j}\), \(C_{ij}\in \mathbb R^{m_i\times n_j}\), \(D_1\in \mathbb R^{m_1\times m_1}\), \(n_1+n_2=n\), \(m_1+m_2=m\), and both \(K_1\) and \(D_1\) are symmetric and positive definite matrices. Note that the structure of A and C follows from the conditions (ii) and (iii). Also note that the condition (i) boils down to \(K_1B_{12}=C_{21}^T\). Then, we have
It follows from positive definiteness of both \(K_1\) and \(D_1\) that there exists \(\alpha \geqslant 0\) such that (80) holds. \(\square \)
5 Concluding remarks
In this paper, we have shown that the interconnection of a linear system with a static set-valued relation is well-posed in the sense of existence and uniqueness of solutions whenever the underlying linear system is passive and the static relation is maximal monotone. Similar well-posedness results have already appeared in the literature under extra conditions on the linear systems as well as on the static relations. Removing those extra conditions requires employing a completely different set of arguments (and hence tools). Based on recent characterizations of maximal monotonicity, we have shown that such interconnections can be represented by differential inclusions with maximal monotone set-valued mappings. As such, the classical well-posedness results for such differential inclusions can be immediately applied to the class of systems at hand. As has already been observed in the literature, well-posedness results can be established under weaker requirements on the linear system than passivity. One such particular property is the so-called passivity by pole shifting. As a side result, we have also provided geometric necessary and sufficient conditions for passivity by pole shifting.
References
Stampacchia, G.: Formes bilineaires coercitives sur les ensembles convexes. Comptes rendus hebdomadaires des séances de l’Académie des sciences 258, 4413–4416 (1964)
Kinderlehrer, D., Stampacchia, G.: An Introduction to Variational Inequalities and Their Applications. Academic Press, New York (1980)
Facchinei, F., Pang, J.S.: Finite-Dimensional Variational Inequalities and Complementarity Problems. Springer, New York (2003)
Minty, G.J.: On the maximal domain of a “monotone” function. Mich. Math. J. 8, 135–137 (1961)
Minty, G.J.: Monotone networks. Proc. R. Soc. Lond. Ser. A 257, 194–212 (1960)
Crandall, M.G., Pazy, A.: Semi-groups of nonlinear contractions and dissipative sets. J. Funct. Anal. 3, 376–418 (1969)
Brézis, H.: Operateurs Maximaux Monotones. North-Holland, Amsterdam (1973)
Cottle, R.W., Pang, J.S., Stone, R.E.: The Linear Complementarity Problem. Academic Press, Boston (1992)
van der Schaft, A.J., Schumacher, J.M.: Complementarity modelling of hybrid systems. IEEE Trans. Autom. Control 43(4), 483–490 (1998)
Heemels, W.P.M.H., Schumacher, J.M., Weiland, S.: Linear complementarity systems. SIAM J. Appl. Math. 60(4), 1234–1269 (2000)
Pang, J.S., Stewart, D.E.: Differential variational inequalities. Math. Program. 113(2), 345–424 (2008)
Heemels, W.P.M.H., Camlibel, M.K., Schumacher, J.M.: On the dynamic analysis of piecewise-linear networks. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 49(3), 315–327 (2002)
Camlibel, M.K., Heemels, W.P.M.H., van der Schaft, A.J., Schumacher, J.M.: Switched networks and complementarity. IEEE Trans. Circuits Syst. I Fundam. Theory Appl. 50(8), 1036–1046 (2003)
Vasca, F., Iannelli, L., Camlibel, M.K., Frasca, R.: A new perspective for modeling power electronics converters: complementarity framework. IEEE Trans. Power Electron. 24(2), 456–468 (2009)
Addi, K., Brogliato, B., Goeleven, D.: A qualitative mathematical analysis of a class of linear variational inequalities via semi-complementarity problems: applications in electronics. Math. Program. 126(1), 31–67 (2011)
Adly, S., Outrata, J.V.: Qualitative stability of a class of non-monotone variational inclusions. Application in electronics. J. Convex Anal. 20(1), 43–66 (2013)
Lootsma, Y.J., van der Schaft, A.J., Camlibel, M.K.: Uniqueness of solutions of relay systems. Automatica 35(3), 467–478 (1999)
Pogromsky, A.Y., Heemels, W.P.M.H., Nijmeijer, H.: On solution concepts and well-posedness of linear relay systems. Automatica 39, 2139–2147 (2003)
Camlibel, M.K., Schumacher, J.M.: Existence and uniqueness of solutions for a class of piecewise linear dynamical systems. Linear Algebra Appl. 351–352, 147–184 (2002)
Nagurney, A., Zhang, D.: Projected Dynamical Systems and Variational Inequalities with Applications. Springer, New York (1995)
Heemels, W.P.M.H., Schumacher, J.M., Weiland, S.: Projected dynamical systems in a complementarity framework. Oper. Res. Lett. 27, 83–91 (2000)
Schumacher, J.M.: Complementarity systems in optimization. Math. Program. Ser. B 101, 263–295 (2004)
Stewart, D.E.: Dynamics with Inequalities. Impacts and Hard Constraints. SIAM, Philadelphia (2011)
Camlibel, M.K., Iannelli, L., Vasca, F.: Passivity and complementarity. Math. Program. A 145, 531–563 (2014)
Bastien, J., Schatzman, M.: Numerical precision for differential inclusions with uniqueness. ESAIM Math. Model. Numer. Anal. 36, 427–460 (2002)
Bastien, J.: Convergence order of implicit Euler numerical scheme for maximal monotone differential inclusions. Zeitschrift für Angewandte Mathematik und Physik 64(4), 955–966 (2013)
Liberzon, M.R.: Essays on the absolute stability theory. Autom. Remote Control 67, 1610–1644 (2006)
Camlibel, M.K., Heemels, W.P.M.H., Schumacher, J.M.: On linear passive complementarity systems. Eur. J. Control 8(3), 220–237 (2002)
Rockafellar, R.T., Wets, J.B.: Variational Analysis, A Series of Comprehensive Studies in Mathematics, vol. 317. Springer, Berlin (1998)
Brogliato, B.: Absolute stability and the Lagrange–Dirichlet theorem with monotone multivalued mappings. Syst. Control Lett. 51, 343–353 (2004)
Brogliato, B., Goeleven, D.: Well-posedness, stability and invariance results for a class of multivalued Lur’e dynamical systems. Nonlinear Anal. Theory Methods Appl. 74(1), 195–212 (2011)
Brogliato, B., Goeleven, D.: Existence and uniqueness of solutions and stability of nonsmooth multivalued Lur’e dynamical systems. J. Convex Anal. 20(3), 881–900 (2013)
Löhne, A.: A characterization of maximal monotone operators. Set-valued Anal. 16, 693–700 (2008)
Trentelman, H.L., Stoorvogel, A.A., Hautus, M.L.J.: Control Theory for Linear Systems. Springer, London (2001)
Rockafellar, R.T.: On the maximal monotonicity of subdifferential mappings. Pac. J. Math. 33, 209–216 (1970)
Phelps, R.R.: Lectures on maximal monotone operators. Extr. Math. 12, 193–230 (1997)
Tellegen, B.D.H.: The gyrator, a new electric network element. Philips Res. Rep. 3, 81–101 (1948)
Rockafellar, R.T.: Convex Analysis. Princeton University Press, Princeton, New Jersey (1970)
Goeleven, D., Brogliato, B.: Stability and instability matrices for linear evolution variational inequalities. IEEE Trans. Autom. Control 49(4), 521–534 (2004)
Adly, S., Goeleven, D.: A stability theory for second-order nonsmooth dynamical systems with application to friction problems. Journal de Mathematiques Pures et Appliquees 83(1), 17–51 (2004)
Brogliato, B., Goeleven, D.: The Krakovskii-LaSalle invariance principle for a class of unilateral dynamical systems. Math. Control Signals Syst. 17(1), 57–76 (2005)
Khatri, C.G., Mitra, S.K.: Hermitian and nonnegative definite solutions of linear matrix equations. SIAM J. Appl. Math. 31, 579–585 (1976)