On the Linear Convergence of Two Decentralized Algorithms

Journal of Optimization Theory and Applications

Abstract

Decentralized algorithms solve multi-agent problems over a connected network in which information can be exchanged only with accessible neighbors. Although several decentralized optimization algorithms exist, gaps remain between their convergence conditions and rates and those of centralized algorithms. In this paper, we fill some of these gaps by considering two decentralized algorithms, EXTRA and NIDS, both of which converge linearly for strongly convex objective functions. We answer two questions about them: What are the optimal upper bounds for their stepsizes? Do decentralized algorithms require stronger assumptions on the objective functions for linear convergence than centralized ones? More specifically, we relax the conditions required for the linear convergence of both algorithms. For EXTRA, we show that the stepsize is comparable to that of centralized algorithms; for NIDS, the upper bound on the stepsize is shown to be exactly the same as the centralized one. In addition, we relax the requirements on the objective functions and the mixing matrices and provide linear convergence results for both algorithms under the weakest conditions.
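For context, a minimal sketch of the problem and the two updates follows; it is based on the original EXTRA and NIDS papers (Shi, Ling, Wu, Yin, SIAM J. Optim., 2015; Li, Shi, Yan, IEEE Trans. Signal Process., 2019) with the common choice of mixing matrix \(\tilde{W}=(I+W)/2\), and the notation (\(W\), \(\alpha\), \(L\), \(f_i\)) is recalled from those sources rather than quoted from this article. Each of \(n\) agents holds a private smooth function \(f_i\), and the network cooperatively solves

\[
\min_{x\in\mathbb{R}^p}\ \bar{f}(x)=\frac{1}{n}\sum_{i=1}^{n} f_i(x),
\]

with communication restricted to multiplication by a symmetric, doubly stochastic mixing matrix \(W\) supported on the network edges. Stacking the local copies into \(\mathbf{x}^k\) and the local gradients into \(\nabla\mathbf{f}(\mathbf{x}^k)\), the EXTRA recursion is

\[
\mathbf{x}^{k+2}=(I+W)\,\mathbf{x}^{k+1}-\tfrac{I+W}{2}\,\mathbf{x}^{k}-\alpha\big[\nabla\mathbf{f}(\mathbf{x}^{k+1})-\nabla\mathbf{f}(\mathbf{x}^{k})\big],
\]

while NIDS applies the mixing after the gradient correction:

\[
\mathbf{x}^{k+2}=\tfrac{I+W}{2}\Big[2\,\mathbf{x}^{k+1}-\mathbf{x}^{k}-\alpha\big(\nabla\mathbf{f}(\mathbf{x}^{k+1})-\nabla\mathbf{f}(\mathbf{x}^{k})\big)\Big].
\]

The centralized benchmark referred to in the abstract is gradient descent on \(\bar{f}\); for \(L\)-smooth functions its stepsize bound is \(\alpha<2/L\), and the result stated above for NIDS is that its stepsize admits the same network-independent bound.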


Notes

  1. The datasets generated and/or analyzed during the current study are available from the corresponding author on reasonable request.


Acknowledgements

This work is partially supported by NSF grants DMS-1621798 and DMS-2012439.

Author information

Corresponding author

Correspondence to Ming Yan.

Additional information

Communicated by Zenon Mróz.

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.


About this article


Cite this article

Li, Y., Yan, M. On the Linear Convergence of Two Decentralized Algorithms. J Optim Theory Appl 189, 271–290 (2021). https://doi.org/10.1007/s10957-021-01833-y

