Abstract
In this paper, we present a continuous-time algorithm with a dynamic event-triggered communication (DETC) mechanism for solving a class of distributed convex optimization problems that satisfy a metric subregularity condition. The proposed algorithm addresses the challenge of limited bandwidth in multi-agent systems by utilizing a continuous-time optimization approach with DETC. Furthermore, we prove that the distributed event-triggered algorithm converges exponentially to the optimal set, even without strong convexity conditions. Finally, we provide a comparison example to demonstrate the efficiency of our algorithm in saving communication resources.
1 Introduction
In recent decades, network systems have become increasingly significant in numerous practical fields, including multi-robot systems [1], reinforcement learning [2], power systems [3], and smart manufacturing [4]. Many tasks over network systems can be formulated as distributed optimization problems, which seek the minimizer of the aggregate of all nodes' local objective functions. Various algorithms, both continuous-time [5–9] and discrete-time [10–15], have been proposed to solve distributed optimization problems.
The convergence rate of optimization algorithms is a well-recognized topic in optimization theory and the corresponding algorithm design. For optimization problems with general convex and Lipschitz continuous objective functions, first-order gradient-based algorithms typically attain a rate of \(O(1/k)\) [16], or \(O(1/k^{2})\) with an accelerated strategy [17]. However, to attain faster convergence (such as linear convergence), the objective function is usually required to be strongly convex [18]. Despite its theoretical appeal, strong convexity is not always a practical assumption in various applications. To address this issue, several relaxations of this strict assumption have been developed, including metric subregularity [19], the restricted secant inequality [20], and the Polyak-Łojasiewicz (PL) condition [21]. For smooth constrained optimization problems, [22] proposed relaxations of strong convexity, namely quasi-strong convexity, quadratic under-approximation, quadratic gradient growth, and quadratic functional growth. These relaxed conditions have been shown to enable linear convergence for several first-order methods.
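As a concrete illustration of linear convergence without strong convexity (our own toy example, not taken from the paper), consider a least-squares objective with a rank-deficient matrix: the Hessian is singular, so the function is not strongly convex, yet a PL-type inequality holds and plain gradient descent still contracts the distance to the optimal set geometrically.

```python
import numpy as np

# f(x) = 0.5 * ||A x - b||^2 with a rank-1 matrix A: convex but NOT strongly
# convex (the Hessian A^T A is singular), yet a PL-type inequality holds,
# so gradient descent still converges linearly to the optimal set.
A = np.array([[1.0, 1.0],
              [2.0, 2.0]])            # rank 1
b = np.array([1.0, 2.0])              # lies in range(A), so min residual is 0

lip = np.linalg.norm(A.T @ A, 2)      # Lipschitz constant of the gradient
x = np.array([5.0, -3.0])
residuals = []
for _ in range(20):
    x = x - (0.5 / lip) * A.T @ (A @ x - b)   # gradient step
    residuals.append(np.linalg.norm(A @ x - b))

ratios = [residuals[k + 1] / residuals[k] for k in range(10)]
print(ratios)   # each ratio is 0.5: a geometric (linear) rate without strong convexity
```

The residual shrinks by a constant factor per iteration, even though the iterates never leave the flat (null-space) direction of the objective.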
Effective communication resource management has always been vital when evaluating distributed algorithms, as communication is energy-intensive and resources are often constrained. To alleviate the communication burden, researchers have proposed event-triggered communication, which has proven highly effective in reducing communication costs [23–33]. For optimization problems involving smooth cost functions, [34] developed a communication-efficient event-triggered first-order primal-dual algorithm achieving an \(O(1/k)\) convergence rate, and [11] combined the event-triggered mechanism with the mirror descent algorithm for distributed stochastic optimization problems. More recently, researchers have proposed the dynamic event-triggered communication (DETC) mechanism, which employs an additional internal dynamic variable and saves more communication resources than static event-triggered communication. For undirected and connected graphs, [35] proposed a distributed zero-gradient-sum algorithm with dynamic event-triggered communication that converges exponentially to the global minimizer. In addition, [36] proposed a fully distributed algorithm with a dynamic event-triggered communication mechanism for second-order continuous-time multi-agent systems, excluded Zeno behavior, and showed exponential convergence.
Since metric subregularity enables linear convergence without strong convexity, and the event-triggered communication mechanism can save communication resources, we solve the distributed optimization problem with a dynamic event-triggered communication mechanism and establish the corresponding convergence performance. The main contributions of this work are summarized as follows.
-
1)
We design a distributed dynamic event-triggered primal-dual algorithm for solving distributed optimization problems under metric subregularity. Compared with the results in [19], the proposed algorithm significantly reduces the communication burden without sacrificing the convergence rate.
-
2)
The designed algorithm converges linearly to the optimal set with an explicit convergence rate, which is faster than the asymptotic convergence in [37] under strong convexity. Moreover, compared with the restricted strong convexity in [38], we relax the condition to metric subregularity and achieve the same convergence rate as [38].
-
3)
Compared to the seminal dynamic event-triggered mechanism for distributed optimization in [35], which requires exchanging the consensus error \(x_{j}({t_{k_{j}(t)}^{j}})-x_{i}({t_{k_{i}(t)}^{i}})\) at the triggering times, our method is simpler because it removes the term \(x_{j}({t_{k_{j}(t)}^{j}})-x_{i}({t_{k_{i}(t)}^{i}})\), and we provide a different method for the convergence analysis.
The rest of the paper is organized as follows. Sect. 2 gives some basic knowledge of metric subregularity and graph theory. Sect. 3 introduces the distributed optimization problem and the algorithm design. Sect. 4 analyzes the performance of the proposed algorithm, Sect. 5 gives a comparison example, and Sect. 6 ends the paper with some concluding remarks.
2 Preliminaries
2.1 Notation
\({A}^{\top}\) is the transpose of A. \({\mathbb{R}}\), \({\mathbb{R}}^{n}\), and \({\mathbb{R}}^{m \times n}\) denote the sets of real numbers, n-dimensional real vectors, and \(m \times n\) real matrices, respectively. \(\| \cdot \|\) denotes the \(l_{2}\)-norm of a vector or the induced 2-norm of a matrix. \(l_{ij}\) is the element in the i-th row and j-th column of L. \({\mathbf{1}}_{n} \in {\mathbb{R}}^{n}\) stands for the n-dimensional column vector with all entries equal to one. \(\operatorname{col}\{z_{i}\}_{i=1}^{n}=[z_{1}^{\top}, z_{2}^{\top}, \ldots , z_{n}^{ \top}]^{\top}\) (\(\operatorname{col}\{z_{1}, z_{2}\}\)) denotes the augmented vector stacked by vectors \(z_{1}, z_{2}, \ldots , z_{n}\) (\(z_{1}\) and \(z_{2}\)). \(A \otimes B\) is the Kronecker product of the matrices A and B. \(d(x, {\mathcal {X}})\) is the distance from the point x to the set \({\mathcal {X}}\).
2.2 Metric subregularity
Consider a map \(H: {\mathbb{R}}^{n}\rightarrow {\mathbb{R}}^{n}\) and define its graph \(\operatorname{gph}H=\{(x,y)\in {\mathbb{R}}^{n}\times {\mathbb{R}}^{n}: y=H(x)\}\).
With this map H, we give the definition of κ-metric subregularity.
Definition 1
(see [19])
For a map \(H: {\mathbb{R}}^{n}\rightarrow {\mathbb{R}}^{n}\) with \(({x}_{o},{y}_{o})\in \operatorname{gph}H\), if there exist \(\kappa >0\) and a neighborhood \({\mathcal {D}}\) of \(x_{o}\) such that \(d(x, H^{-1}(y_{o})) \leq \kappa \|y_{o}-H(x)\|\) for all \(x \in {\mathcal {D}}\),
then H is κ-metrically subregular at \(({x}_{o},{y}_{o})\).
2.3 Comparison lemma
For the convenience of the convergence analysis hereinafter, we provide the following comparison lemma on nonlinear systems.
Lemma 1
(see [39])
Consider the scalar differential equation \(\dot{u} = f(t,u)\), \(u(t_{0})=u_{0}\),
where \(f(t,u)\) is continuous in t and locally Lipschitz in u, for all \(t\ge 0\) and all \(u \in J \subset {\mathbb{R}}\). Let \([t_{0}, T)\) (T may be infinite) be the maximal interval of existence of the solution \(u(t)\), and suppose \(u(t) \in J\) for all \(t \in [t_{0}, T)\). Let \(v(t)\) be a continuous function whose upper right-hand derivative \(D^{+} v(t)\) satisfies the differential inequality \(D^{+} v(t) \leq f(t,v(t))\), \(v(t_{0}) \leq u_{0}\),
where \(v(t) \in J\) for all \(t \in [t_{0}, T)\). Then, \(v(t) \leq u(t)\) for all \(t \in [t_{0}, T)\).
2.4 Graph theory
For an undirected graph \(\mathcal{G}=(\mathcal{V},\mathcal{E})\), \(\mathcal{V}=\{1,2,\ldots ,n\}\) and \(\mathcal{E} \subseteq \mathcal{V}\times \mathcal{V}\) are the sets of nodes and edges, respectively. \((i,j)\in {\mathcal {E}}\) means that node i exchanges information with node j. \({A = [a_{ij}] \in \mathbb{R}^{n\times n}}\) is the adjacency matrix with \(a_{ij} = a_{ji} > 0\) if \((i, j) \in \mathcal{E}\) and \(a_{ij} = 0\) otherwise. Defining \(D =\operatorname{diag}\{d_{1}, \ldots , d_{n}\}\) with \(d_{i} = \sum_{j=1}^{n}a_{ij}\), \(i \in \mathcal{V}\), \(L=D-A\) is the Laplacian matrix of \({\mathcal {G}}\). In particular, if \(\mathcal{G}\) is undirected and connected, then \(L = L^{\top} \ge 0\).
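These constructions are easy to check numerically. The snippet below (our own illustration; the ring size is chosen arbitrarily) builds the Laplacian of an unweighted undirected ring, verifies \(L=L^{\top}\ge 0\) with \(L{\mathbf{1}}_{n}=0\), and forms a Kronecker-lifted Laplacian of the kind used for vector-valued agent states.

```python
import numpy as np

# Laplacian L = D - A of an unweighted, undirected ring with n nodes:
# a_ij = a_ji = 1 iff i and j are neighbors, d_i = sum_j a_ij.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # ring edges
L = np.diag(A.sum(axis=1)) - A

assert np.allclose(L, L.T)                # symmetric: undirected graph
assert np.allclose(L @ np.ones(n), 0.0)   # zero row sums: L 1_n = 0
eigs = np.linalg.eigvalsh(L)
print(eigs.min(), eigs.max())             # eigenvalues lie in [0, 4]: L >= 0

# A lifted Laplacian for vector-valued states (state dimension 2 here)
L_bar = np.kron(L, np.eye(2))
assert np.allclose(L_bar, L_bar.T)
```

For a connected graph, exactly one eigenvalue is zero (with eigenvector \({\mathbf{1}}_{n}\)), which is what the consensus constraint exploits.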
3 Problem formulation and algorithm design
In this section, we consider a distributed convex optimization problem and propose an event-triggered primal-dual algorithm to solve this problem.
3.1 Problem formulation
Consider the distributed optimization problem over networks as follows
\(\min_{{\boldsymbol {x}} \in {\mathbb{R}}^{Nn}} \sum_{i=1}^{N}\psi _{i}(x_{i})\), subject to \(x_{1}=x_{2}=\cdots =x_{N}\), (1)
where \(\psi _{i}(\cdot ): \mathbb{R}^{n} \rightarrow \mathbb{R}\) is the local objective function of agent i and \({\boldsymbol {x}}=\operatorname{col}\{x_{i}\}_{i=1}^{N} \in {\mathbb{R}}^{Nn}\). In the meantime, we take the following mild assumption.
Assumption 1
The local functions \(\psi _{i}(\cdot )\), \(i \in \mathcal{V}\), are convex and differentiable, and the corresponding gradients are locally Lipschitz continuous.
To solve the above optimization problem in a distributed manner, we employ a multi-agent system over the communication graph \({\mathcal {G}}\), in which each agent updates its local variable \(x_{i}\) using local data and neighbor communication to reach the optimal solution over \({\mathcal {G}}\). We take the following mild assumption on \({\mathcal {G}}\).
Assumption 2
The graph \({\mathcal {G}}\) is connected and undirected.
Note that Assumption 2 implies that the Laplacian matrix L is symmetric and positive semi-definite.
3.2 Algorithm design
We propose the following event-triggered-based algorithm from the primal-dual perspective to solve the above problem.
In Algorithm 1, \(\eta _{i}(t)\) is an internal dynamic variable which satisfies
with \(\beta >0 \) and \(\eta _{i}(0) \in {\mathbb{R}}_{0}^{+}\).
Remark 1
Algorithm 1 stems from the seminal work of [19] but adds the event-triggered mechanism. Compared with the event-triggered algorithms proposed in [11, 28, 29, 34–36, 38], it relaxes strong convexity to κ-metric subregularity. Furthermore, the proposed method replaces the static event-triggered communication scheme with a dynamic counterpart, resulting in a communication scheme that is more efficient than the approaches in [11, 28, 34, 38].
Remark 2
The event-triggered communication (ETC) in [11, 28, 34, 38] is referred to as static ETC, as it exclusively involves the current values of local variables. In contrast, the ETC in Algorithm 1 is termed dynamic ETC, as it incorporates an additional internal dynamic variable \(\eta _{i}\). The main idea behind the dynamic event-triggered communication scheme is to introduce an internal nonnegative dynamic variable \(\eta _{i}\) into the static event-triggered communication scheme. This modification ensures that \(\theta \|x_{i}(t)-x_{i}(t_{k}^{i})\| - a_{i}\mathrm{e}^{-b_{i}t}\) and \(\theta \|q_{i}(t)-q_{i}(t_{k}^{i})\| - c_{i}\mathrm{e}^{-d_{i}t}\) do not have to be non-positive at all times but only non-positive on average. Furthermore, in the limit \(\theta \rightarrow +\infty \), the static ETC can be considered a special case of the dynamic ETC.
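To make the static/dynamic distinction tangible, the sketch below simulates a Girard-style dynamic trigger on a toy scalar sampled loop. It is not the paper's Algorithm 1: the trigger rule, the η-dynamics, and every parameter value are illustrative stand-ins.

```python
import math

# Minimal Girard-style sketch (NOT the paper's exact Algorithm 1) contrasting
# a static trigger with a dynamic counterpart on a toy sampled scalar loop.
# Static rule:   fire when  |e| >= sigma(t),  with  sigma(t) = a*exp(-b*t).
# Dynamic rule:  fire when  eta + theta*(sigma - |e|) <= 0,
#                with  eta' = -beta*eta + sigma - |e|  and  eta(0) > 0.
# All parameter values below are illustrative, not taken from the paper.
dt, T = 1e-3, 10.0
a, b, theta, beta = 1.0, 0.3, 2.0, 1.0

def count_triggers(dynamic):
    t, x, x_hat, eta = 0.0, 1.0, 1.0, 0.5
    triggers = 0
    while t < T:
        x += dt * (-x_hat)               # toy loop driven by the sampled state
        e = abs(x_hat - x)               # error since the last broadcast
        sigma = a * math.exp(-b * t)
        fire = (eta + theta * (sigma - e) <= 0.0) if dynamic else (e >= sigma)
        if fire:
            x_hat, e = x, 0.0            # broadcast: the error resets to zero
            triggers += 1
        eta += dt * (-beta * eta + sigma - e)
        t += dt
    return triggers

static_n, dynamic_n = count_triggers(False), count_triggers(True)
print(static_n, dynamic_n)
```

On the same trajectory the dynamic rule fires only when \(|e|\ge \sigma + \eta/\theta > \sigma\), so it can only postpone a broadcast that the static rule would issue, which matches the "on average" intuition of Remark 2; letting \(\theta \rightarrow +\infty\) recovers the static rule.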
For the designed DETC, we have the following basic conclusion.
Lemma 2
Let x, q, and η be generated by the proposed algorithm in (2)-(4). Then, for all \(t \in [0,t_{\infty})\), \(\eta _{i}(t)+\theta \|x_{i}(t)-x_{i}(t_{k}^{i})\| - a_{i}{\mathrm{e}}^{-b_{i}t} \geq 0 \) and \(\eta _{i}(t)\geq 0 \).
Proof
By (3), the communication condition triggers when
According to (3) and (4),
where \(a_{0}=\min\{a_{i}, c_{i}\}_{i=1}^{N}\), \(b_{0}=\max\{b_{i}, d_{i}\}_{i=1}^{N}\). Then
where \(C_{1}=-\frac{a_{0}}{1+\theta (\beta -b_{0})}\) and \(C_{2}=-C_{1}\). □
Remark 3
The value of \(\eta _{i}\) depends on the values of \(\frac{(1+\theta \beta )}{\theta}\) and \(b_{0}\). If \(b_{0}> \frac{(1+\theta \beta )}{\theta}\), then \(C_{1}>0\) and yields \(\eta _{i}(t)>0\), otherwise \(C_{2}>0\) and achieves \(\eta _{i}(t)>0\) as well.
4 Main results
In this section, we present the linear convergence of the proposed algorithm with the aid of the Lyapunov theory. Define
We rewrite the algorithm in (2) into the following compact form
where \({H({\boldsymbol {z}}, {\hat{\boldsymbol {z}}})} = \operatorname{col}\{\nabla {\psi}({\boldsymbol {x}})+{{ \bar{L}}{\hat{\boldsymbol {x}}}+{\bar{L}}{\hat{\boldsymbol {q}}}}, -{{\bar{L}}{\hat{\boldsymbol {x}}}} \}\).
Define \([\boldsymbol{z}]^{\ast}\) as the projection from a point z to the optimal solution set \({\mathcal {Z}}^{\ast}= X^{\ast} \times Y^{\ast}\), then
Based on the optimization problem in (1) and the primal-dual event-triggered algorithm in (5), we take the following assumption, which is more general than the strong convexity in [18] and the restricted strong convexity in [38].
Assumption 3
H is \(k_{0}\)-metrically subregular at any point \(({z^{*}},0) \in \operatorname{gph}{H}\), namely, there exist a constant \(k_{0} > 0\) and \({\mathcal {Z}}^{\ast} \subset {\mathcal {D}}\) such that
For the proposed algorithm in (2), we design the following Lyapunov function
where \({\bar{L}}=L\otimes I_{N}\) and \(\sigma _{0}\) is the largest singular value of L.
Lemma 3
Under Assumptions 1-3, let \(({\boldsymbol {x}},{\boldsymbol {q}})\) be generated by the proposed algorithm in (2). Then, the Lyapunov function can be bounded as follows
where \(l_{0}\) is the Lipschitz constant of \(\nabla \psi (\cdot )\).
Proof
According to the optimal conditions \(\nabla {\psi}([\boldsymbol{x}]^{\ast}) =-{\bar{L}}\boldsymbol{[{q}]}^{\ast}\) and \([\boldsymbol{x}^{\ast}]^{\top} {\bar{L}} =0\), we have
Then we rewrite \(V_{2}({z})\) as
The convexity of ψ and \({{\bar{L}}}\succeq 0\) imply,
Therefore,
and
According to (10) and (11), the lower bound of \(V({\boldsymbol {z}})\) is
By Lemma 1 and the \(l_{0}\)-Lipschitz continuity of \(\nabla {\psi} (\cdot )\), we have
Moreover,
for any \(\varepsilon > 0 \). By \(\varepsilon = \frac{\sigma _{0}}{l_{0}+\sigma _{0}}\), we have
The upper bound is
Therefore, we obtain the bounds of \(V({\boldsymbol {z}})\). □
Theorem 1
Under Assumptions 1-3, let \(({\boldsymbol {x}},{\boldsymbol {q}})\) be generated by the proposed algorithm in (2). Then \({{\boldsymbol {z}}}(t)\) linearly converges to the optimal set \({\mathcal {Z}}^{\ast}\) as follows
where \(C_{3}={ \frac{2\gamma _{3}Nk_{0}^{2}(2l_{0}+13\sigma _{0})}{bk_{0}^{2}(2l_{0}+13\sigma _{0})-6}}\), \(C_{4}=-C_{3}\), and \(T_{0}<\infty \).
Proof
For the designed Lyapunov function in (7), we calculate its first-order derivatives with respect to t,
where the first equality follows from \(\boldsymbol{e}^{x} = \boldsymbol{\hat{x}-x}\) and \(\boldsymbol{e}^{q} = \boldsymbol{\hat{q}-q}\), the second equality is deduced from \({\bar{L}}\boldsymbol{[q]}^{\ast}=-\nabla \psi (\boldsymbol{[x]}^{\ast})\), the first inequality follows from Young's inequality and the Cauchy-Schwarz inequality, and the last inequality is deduced from \(b=\min \{b_{1}, \ldots , b_{N},d_{1}, \ldots , d_{N} \} \geq 0\). Define
By (3), we have
From (16),
where the first inequality follows from (17) and the second inequality is deduced based on \(-\eta _{i}^{2}+\eta _{i}a_{i} {\mathrm{e}}^{-b_{i} t}\le a_{i}^{2} { \mathrm{e}}^{-2b_{i} t}\le a^{2} {\mathrm{e}}^{-2b t}\). By (8),
Based on (15), (16), and (18),
where the last inequality holds under \({{2A\beta -\frac{4\sigma _{0}}{\theta ^{2}}- \frac{\sigma _{0}^{2}}{b\theta ^{2}}}}>0\). Therefore,
where \(\gamma _{1}=V_{0}({\boldsymbol {z}},\eta )+({{ \frac{4\sigma _{0}}{3b\theta ^{2}}}+ \frac{\sigma _{0}^{2}}{3b\theta ^{2}}+\frac{A}{6b\theta}})N a^{2}\). By (18), we have
According to (15), (16) and (21),
where \(\gamma _{2}=a(\frac{4a}{\theta ^{2}}+ \frac{\sqrt{ 2\gamma _{1}}}{\theta}+\frac{Aa}{2\theta \sigma _{0}}) \sigma _{0}\). Moreover,
where the second equality follows from \(\boldsymbol{e}^{x} = \boldsymbol{\hat{x}-x}\) and \(\boldsymbol{e}^{q} = \boldsymbol{\hat{q}-q}\). Since \(-a^{\top}b \leq \frac{c}{4}\|a\|^{2}+\frac{1}{c}\|b\|^{2}\) for any \(c>0\), we have
and
Substituting (24) and (25) into (23), we have
where the last inequality follows from (16). As a result,
where \(\gamma _{3}=6\sigma _{0}\gamma _{2}+( \frac{12\sigma _{0}^{2}}{\theta ^{2}}+\frac{2B}{3\theta}){\mathrm{e}}^{- \frac{3}{2}bt}\). Based on (6), (13) and (26),
where \(\beta >{{\frac{3}{2k_{0}^{2}(2l_{0}+13\sigma _{0})}+ \frac{18}{\theta ^{2}(6\sigma _{0}A+\frac{4}{3}B)}}}\). Defining
By the Laplace transform of (28), we have
Then, there exists a finite constant \(T_{0}\) such that
where
According to Lemma 1, we have
Consequently, \({{\boldsymbol {z}}}(t)\) linearly converges to \({\mathcal {Z}}^{\ast}\). □
Remark 4
By Theorem 1, the convergence rate depends on the values of \(\frac{3}{k_{0}^{2}(2l_{0}+13\sigma _{0})}\) and \(\frac{b}{2}\): the first is determined by the optimization problem, and the second by the event-triggered strategy. If \(\frac{b}{2}> \frac{3}{k_{0}^{2}(2l_{0}+13\sigma _{0})}\), then \(C_{3}>0\) and it yields \(\|{\boldsymbol {x}}(t) -[{\boldsymbol {x}}]^{\ast} \| \le O({\mathrm{e}}^{- \frac{3}{2k_{0}^{2}(2l_{0}+13\sigma _{0})}t})\); otherwise \(C_{4}>0\) and we have \(\|{\boldsymbol {x}} (t)-[{\boldsymbol {x}}]^{\ast} \| \le O({\mathrm{e}}^{-\frac{b}{4}t})\).
Remark 5
For continuous-time algorithms with event-triggered communication, Zeno behavior is an important topic. In this paper, Zeno behavior is precluded in algorithm (2), and the event-triggered scheme (3) is designed following the work of [38]. The analysis of Zeno behavior for the primal-dual algorithm is similar to Theorem 3 of [38] and is omitted in this paper.
5 Simulation
To illustrate the effectiveness of the proposed method, we consider the same example as in [19]: a network of 12 agents connected by a ring topology, equipped with the following local objective functions
where \(C_{5}^{i}\), \(C_{6}^{i}\), \(r_{i}\), and \(s_{i} > 0\) are given in Table 1. Then \(\psi _{i}(x)\) is convex and differentiable with
For the designed event-triggered scheme, we randomly select \(a_{i}\) and \(b_{i}\) in the triggering mechanisms (10) and (15). We provide the trajectories of \(x_{i}(t)\) and \(q_{i}(t)\) of agent 1 generated by the algorithm of [19] and by the proposed algorithm with DETC in Fig. 1. Moreover, we show the relative errors \(J(k)={\|x(k)-1_{N}\otimes x^{\ast}\|^{2}}/{\|x(1)-1_{N} \otimes x^{\ast}\|^{2}}\) of these algorithms in Fig. 2 and summarize the number of triggering instants needed to achieve the convergence accuracy \(J_{i}(k)={\|x_{i}(k)- x^{\ast}\|^{2}}/{\|x_{i}(1)-x^{\ast} \|^{2}}=0.01\) in Table 2. Furthermore, we show the relative errors \(J(k)\) of Algorithm 1 for different values of the event-triggered parameter b in Fig. 3. As stated in Remark 4, when \(b\in \{0.5, 0.6\}\), which implies \(\frac{b}{2}> \frac{3}{k_{0}^{2}(2l_{0}+13\sigma _{0})}\), the convergence rate is determined by the optimization problem, so \(\|{\boldsymbol {x}}(t) -[{\boldsymbol {x}}]^{\ast} \| \le O({\mathrm{e}}^{- \frac{3}{2k_{0}^{2}(2l_{0}+13\sigma _{0})}t})\); otherwise the convergence rate is determined by the event-triggered strategy, so \(\|{\boldsymbol {x}} (t)-[{\boldsymbol {x}}]^{\ast} \| \le O({\mathrm{e}}^{-\frac{b}{4}t})\).
From Figs. 1-2 and Table 2, the DETC achieves almost the same convergence rate as the time-triggered communication (TTC) counterpart in [19], while significantly reducing the communication burden compared with TTC (a 98.36% reduction for achieving 0.01 convergence accuracy).
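Since the Table 1 data and the figures are not reproduced here, the following self-contained sketch mimics the experimental setup with made-up quadratic costs: 12 agents on a ring run an Euler-discretized primal-dual flow (time-triggered, no event triggering) and the relative error \(J\) drops below the 0.01 accuracy level used in Table 2. All objective data, gains, and step sizes are our own illustrative choices.

```python
import numpy as np

# Toy stand-in for the experiment: 12 agents on a ring with hypothetical
# quadratic costs psi_i(x) = 0.5 * s_i * (x - r_i)^2 (the paper's Table 1
# objectives are not reproduced here).  We Euler-integrate a standard
# primal-dual consensus flow
#   x' = -grad(psi)(x) - L x - L q,    q' = L x
# and track the relative error J(t) toward the consensus optimum x*.
n = 12
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0   # ring topology
L = np.diag(A.sum(axis=1)) - A

r = np.arange(1.0, n + 1.0)            # hypothetical data points
s = 1.0 + np.arange(n) % 3             # hypothetical weights, s_i > 0
x_star = (s * r).sum() / s.sum()       # minimizer of sum_i psi_i

x = np.zeros(n)
q = np.zeros(n)
x0_err = np.linalg.norm(x - x_star) ** 2
dt = 0.005
for _ in range(40000):                 # integrate up to t = 200
    dx = -(s * (x - r)) - L @ x - L @ q
    dq = L @ x
    x = x + dt * dx
    q = q + dt * dq

J = np.linalg.norm(x - x_star) ** 2 / x0_err
print(J)    # relative error: well below the 0.01 accuracy level
```

This reproduces only the qualitative behavior (linear decay of \(J\) to the accuracy threshold); the triggering-count comparison of Table 2 additionally requires the DETC scheme of Algorithm 1.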
6 Conclusion
In this paper, we designed a distributed continuous-time primal-dual algorithm for the distributed optimization problem with the metric subregularity condition. To alleviate the resource burden caused by continuous communication, we proposed a dynamic event-triggered communication mechanism without Zeno behavior. Moreover, we proved that the proposed algorithm achieved a linear convergence rate. Finally, we illustrated the effectiveness of the proposed algorithm with a comparison example.
Data availability
Not applicable.
References
K. Cao, X. Li, L. Xie, Distributed framework matching. IEEE Trans. Robot. 39(1), 823–838 (2023)
X. Zhao, P. Yi, L. Li, Distributed policy evaluation via inexact ADMM in multi-agent reinforcement learning. Control Theory Technol. 18, 362–378 (2020)
P. Yi, Y. Hong, F. Liu, Initialization-free distributed algorithms for optimal resource allocation with feasibility constraints and application to economic dispatch of power systems. Automatica 74, 259–269 (2016)
A. Kusiak, Smart manufacturing must embrace big data. Nature 544(7648), 23–25 (2017)
X. Zeng, P. Yi, Y. Hong, Distributed continuous-time algorithm for constrained convex optimizations via nonsmooth analysis approach. IEEE Trans. Autom. Control 62(10), 5227–5233 (2017)
S. Liang, X. Zeng, Y. Hong, Distributed nonsmooth optimization with coupled inequality constraints via modified Lagrangian function. IEEE Trans. Autom. Control 63(6), 1753–1759 (2018)
P. Li, J. Hu, L. Qiu, Y. Zhao, B.K. Ghosh, A distributed economic dispatch strategy for power–water networks. IEEE Trans. Control Netw. Syst. 9(1), 356–366 (2022)
Y. Tang, P. Yi, Y. Zhang, D. Liu, Nash equilibrium seeking over directed graphs. Auton. Intell. Syst. 2(1), 79–86 (2022)
S. Liang, P. Yi, Y. Hong, K. Peng, Exponentially convergent distributed Nash equilibrium seeking for constrained aggregative games. Auton. Intell. Syst. 2(1), 71–78 (2022)
P. Yi, L. Li, Distributed nonsmooth convex optimization over Markovian switching random networks with two step-sizes. J. Syst. Sci. Complex. 34(4), 1324–1344 (2021)
M. Xiong, B. Zhang, D.W.C. Ho, D. Yuan, S. Xu, Event-triggered distributed stochastic mirror descent for convex optimization. IEEE Trans. Neural Netw. Learn. Syst. 34(9), 6480–6491 (2023)
S. Cheng, S. Liang, Y. Fan, Y. Hong, Distributed gradient tracking for unbalanced optimization with different constraint sets. IEEE Trans. Autom. Control 68(6), 3633–3640 (2023)
K. Fu, H.F. Chen, W. Zhao, Distributed dynamic stochastic approximation algorithm over time-varying networks. Auton. Intell. Syst. 1(1), 49–68 (2021)
Y. Wang, X. Zeng, W. Zhao, Y. Hong, A zeroth-order algorithm for distributed optimization with stochastic stripe observations. Sci. China Inf. Sci. 66(9), 199202 (2023)
Q. Huang, Y. Fan, S. Cheng, Distributed unbalanced optimization design over nonidentical constraints. IEEE Trans. Netw. Sci. Eng. (2024). https://doi.org/10.1109/TNSE.2024.3374765 (Early Access)
S.P. Boyd, L. Vandenberghe, Convex Optimization (Cambridge University Press, Cambridge, 2004)
Y.E. Nesterov, A method of solving a convex programming problem with convergence rate \({O}(1/k^{2})\). Sov. Math. Dokl. 27(2), 372–376 (1983)
W. Shi, Q. Ling, G. Wu, W. Yin, Extra: an exact first-order algorithm for decentralized consensus optimization. SIAM J. Optim. 25(2), 944–966 (2015)
S. Liang, L. Wang, G. Yin, Exponential convergence of distributed primal–dual convex optimization algorithm without strong convexity. Automatica 105, 298–306 (2019)
X. Yi, S. Zhang, T. Yang, T. Chai, K.H. Johansson, Exponential convergence for distributed optimization under the restricted secant inequality condition. IFAC-PapersOnLine 53(2), 2672–2677 (2020)
X. Yi, S. Zhang, T. Yang, T. Chai, K.H. Johansson, Linear convergence of first-and zeroth-order primal-dual algorithms for distributed nonconvex optimization. IEEE Trans. Autom. Control 67(8), 4194–4201 (2022)
I. Necoara, Y. Nesterov, F. Glineur, Linear convergence of first order methods for non-strongly convex optimization. Math. Program. 175, 69–107 (2019)
Y. Fan, G. Feng, Y. Wang, C. Song, Distributed event-triggered control of multi-agent systems with combinational measurements. Automatica 49(2), 671–675 (2013)
X. Zeng, Q. Hui, Energy-event-triggered hybrid supervisory control for cyber-physical network systems. IEEE Trans. Autom. Control 60(11), 3083–3088 (2015)
W. Hu, L. Liu, G. Feng, Event-triggered cooperative output regulation of linear multi-agent systems under jointly connected topologies. IEEE Trans. Autom. Control 64(3), 1317–1322 (2019)
G. Chen, D. Yao, Q. Zhou, H. Li, R. Lu, Distributed event-triggered formation control of usvs with prescribed performance. J. Syst. Sci. Complex. 35(3), 820–838 (2022)
Z. Peng, R. Luo, J. Hu, K. Shi, B.K. Ghosh, Distributed optimal tracking control of discrete-time multiagent systems via event-triggered reinforcement learning. IEEE Trans. Circuits Syst. I, Regul. Pap. 69(9), 3689–3700 (2022)
S. Cheng, H. Li, Y. Guo, T. Pan, Y. Fan, Event-triggered optimal nonlinear systems control based on state observer and neural network. J. Syst. Sci. Complex. 36(1), 222–238 (2023)
J. Liu, P. Yi, Predefined-time distributed Nash equilibrium seeking for noncooperative games with event-triggered communication. IEEE Trans. Circuits Syst. II, Express Briefs 70(9), 3434–3438 (2023)
D. Yao, H. Li, Y. Shi, SMO-based distributed tracking control for linear mass with event-triggering communication. IEEE Trans. Control Netw. Syst. (2023). https://doi.org/10.1109/TCNS.2023.3290424 (Early Access)
L. Liu, X. Zhao, B. Wang, Y. Wu, W. Xing, Event-triggered state estimation for cyber-physical systems with partially observed injection attacks. Sci. China Inf. Sci. 66, 169202 (2023)
X. Ren, W. Zhao, J. Gao, Adaptive regulation for Hammerstein and Wiener systems with event-triggered observations. J. Syst. Sci. Complex. 36(5), 1878–1904 (2023)
M. Li, S. Li, X. Luo, X. Zheng, X. Guan, Distributed periodic event-triggered terminal sliding mode control for vehicular platoon system. Sci. China Inf. Sci. 66(12), 229203 (2023)
C. Liu, H. Li, Y. Shi, D. Xu, Event-triggered broadcasting for distributed smooth optimization, in Proc. IEEE Conf. Decis. Control (IEEE, 2019), pp. 716–721
W. Du, X. Yi, J. George, K.H. Johansson, T. Yang, Distributed optimization with dynamic event-triggered mechanisms, in Proc. IEEE Conf. Decis. Control (IEEE, 2018), pp. 969–974
X. Yi, L. Yao, T. Yang, J. George, K.H. Johansson, Distributed optimization for second-order multi-agent systems with dynamic event-triggered communication, in Proc. IEEE Conf. Decis. Control (IEEE, 2018), pp. 3397–3402
Z. Li, Z. Wu, Z. Li, Z. Ding, Distributed optimal coordination for heterogeneous linear multiagent systems with event-triggered mechanisms. IEEE Trans. Autom. Control 65(4), 1763–1770 (2020)
T. Yang, L. Xu, X. Yi, S.J. Zhang, R.J. Chen, Y.Z. Li, Event-triggered distributed optimization algorithms. Acta Autom. Sin. 48(1), 133–143 (2022)
H.K. Khalil, Nonlinear Systems, 3rd edn. (Prentice Hall, Upper Saddle River, 2002)
Acknowledgements
The authors express their gratitude to the anonymous reviewers and potential users for providing valuable comments and suggestions.
Funding
This work was supported in part by the National Natural Science Foundation of China under Grant Nos. 62103003, 61973002, and in part by the Anhui Provincial Natural Science Foundation under Grant No. 2008085J32.
Author information
Contributions
The design and implementation of the research involved contributions from all authors. XY, XC, YF, and SC conducted material preparation and analysis. XY, XC, and SC were involved in problem formulation, discussion of ideas, mathematical derivation, and proof of results. YF contributed to problem formulation and discussion of ideas. The final manuscript was reviewed and approved by all authors.
Ethics declarations
Competing interests
The authors declare no competing interests.
Publisher’s Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Yu, X., Chen, X., Fan, Y. et al. Distributed optimization via dynamic event-triggered scheme with metric subregularity condition. Auton. Intell. Syst. 4, 4 (2024). https://doi.org/10.1007/s43684-024-00063-z