Abstract
This paper considers decentralized online strongly convex optimization over a multi-agent network, where the objective is to minimize a global loss function given by the sum of the local loss functions of all agents. A time-varying scaling compression method is applied to cope with the communication bottleneck in the presence of random disturbances. Building on this scaling compression, a decentralized online algorithm is proposed and its convergence is analyzed. With properly chosen parameters, the algorithm attains a sublinear regret of the same order as that achieved by algorithms without disturbances. Finally, numerical simulations are presented to demonstrate the effectiveness of the proposed algorithm.
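To illustrate the setting described above, the following is a minimal Python sketch (not the paper's algorithm) of decentralized online gradient descent in which agents exchange compressed, disturbance-corrupted states with their neighbors. The quadratic local losses, the quantization-style compressor, the mixing matrix, and the step-size and scaling schedules are all assumptions made purely for this example.

```python
import numpy as np

# Illustrative sketch only: decentralized online gradient descent where each
# agent broadcasts a scaled/compressed version of its state and the received
# messages are corrupted by a small random disturbance.

rng = np.random.default_rng(0)

n_agents, dim, T = 5, 3, 200
W = np.full((n_agents, n_agents), 1.0 / n_agents)   # doubly stochastic mixing matrix (assumed)
x = np.zeros((n_agents, dim))                        # local decision variables
theta = rng.normal(size=dim)                         # common target of the quadratic losses


def grad(xi):
    # Gradient of the assumed strongly convex local loss f_{i,t}(x) = ||x - theta||^2 / 2.
    return xi - theta


def compress(v, scale):
    # Toy "scaling compression": quantize the scaled vector, then rescale back.
    return np.round(v / scale) * scale


regret = 0.0
for t in range(1, T + 1):
    eta = 1.0 / t                                    # step size suited to strong convexity
    scale = 1.0 / np.sqrt(t)                         # time-varying scaling parameter (assumed schedule)
    msgs = np.array([compress(x[i], scale) for i in range(n_agents)])
    msgs += 0.01 * rng.normal(size=msgs.shape)       # random communication disturbance
    for i in range(n_agents):
        consensus = W[i] @ msgs                      # mix neighbors' compressed states
        x[i] = consensus - eta * grad(x[i])          # local online gradient step
    # The common minimizer theta has zero loss, so cumulative loss equals regret here.
    regret += sum(0.5 * np.linalg.norm(x[i] - theta) ** 2 for i in range(n_agents))

print(f"average regret after {T} rounds: {regret / T:.4f}")
```

Under these assumptions the average regret decays as the number of rounds grows, which is the sublinear-regret behavior the abstract refers to; the exact compressor, disturbance model, and parameter choices analyzed in the paper may differ.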
Acknowledgements
This work was supported by the National Natural Science Foundation of China (Grant Nos. 62273181, 62373190 and 62221004) and the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX24_0672).
Additional information
Communicated by Olivier Fercoq.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.
About this article
Cite this article
Liu, H., Yuan, D. & Zhang, B. Decentralized Online Strongly Convex Optimization with General Compressors and Random Disturbances. J Optim Theory Appl 204, 6 (2025). https://doi.org/10.1007/s10957-024-02595-z