Decentralized Online Strongly Convex Optimization with General Compressors and Random Disturbances


Abstract

This paper considers decentralized online strongly convex optimization over a multi-agent network, where the objective is to minimize a global loss function given by the sum of the local loss functions of all agents. The Time-Varying Scaling Compression method is applied to address the communication bottleneck in the presence of disturbances. Based on this scaling compression, a decentralized online algorithm is proposed and its convergence is analyzed. With properly chosen parameters, the algorithm achieves a sublinear regret of the same order as that of algorithms without disturbances. Finally, numerical simulations demonstrate the efficiency of the proposed algorithm.
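The abstract only outlines the approach, so the following Python sketch is a rough illustration of the general setting rather than the paper's Time-Varying Scaling Compression algorithm: each agent broadcasts a compressed copy of its state, the channel adds a random disturbance, the states are mixed over the network, and each agent then takes an online gradient step. The compressor, mixing weights, step size, and loss functions below are placeholder assumptions.

```python
import numpy as np

def top_k_compressor(x, k=2):
    """Keep the k largest-magnitude entries; a generic contractive compressor."""
    out = np.zeros_like(x)
    idx = np.argsort(np.abs(x))[-k:]
    out[idx] = x[idx]
    return out

def decentralized_online_step(X, W, grads, t, noise_std=0.01, k=2):
    """One round: compress local states, mix over the network, take a gradient step.

    X:     (n_agents, dim) current local iterates
    W:     (n_agents, n_agents) doubly stochastic mixing matrix
    grads: list of gradient functions, one per agent (the online losses at time t)
    """
    n, d = X.shape
    # Each agent broadcasts a compressed, disturbance-corrupted copy of its state.
    msgs = np.array([top_k_compressor(x, k) for x in X])
    msgs += noise_std * np.random.randn(n, d)      # random channel disturbance
    mixed = W @ msgs                               # consensus/mixing step
    eta = 1.0 / (t + 1)                            # diminishing step size (strong convexity)
    G = np.array([grads[i](X[i]) for i in range(n)])
    return mixed - eta * G

# Toy usage: agents track a slowly varying common quadratic optimum.
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n, d, T = 4, 5, 200
    W = np.full((n, n), 1.0 / n)                   # complete-graph averaging weights
    X = rng.standard_normal((n, d))
    for t in range(T):
        target = np.sin(0.01 * t) * np.ones(d)
        grads = [lambda x, c=target: x - c for _ in range(n)]
        X = decentralized_online_step(X, W, grads, t)
    print("final average iterate:", X.mean(axis=0))
```

With a strongly convex local loss and a diminishing step size of this form, compressed decentralized online methods of this type are typically shown to attain sublinear (often logarithmic) regret, which is the kind of guarantee the abstract refers to.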



Acknowledgements

This work was supported by the National Natural Science Foundation of China (Grant Nos. 62273181, 62373190, and 62221004) and by the Postgraduate Research & Practice Innovation Program of Jiangsu Province (Grant No. KYCX24_0672).

Author information


Corresponding author

Correspondence to Baoyong Zhang.

Additional information

Communicated by Olivier Fercoq.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions

Springer Nature or its licensor (e.g. a society or other partner) holds exclusive rights to this article under a publishing agreement with the author(s) or other rightsholder(s); author self-archiving of the accepted manuscript version of this article is solely governed by the terms of such publishing agreement and applicable law.


About this article


Cite this article

Liu, H., Yuan, D. & Zhang, B. Decentralized Online Strongly Convex Optimization with General Compressors and Random Disturbances. J Optim Theory Appl 204, 6 (2025). https://doi.org/10.1007/s10957-024-02595-z



  • DOI: https://doi.org/10.1007/s10957-024-02595-z

Keywords