Asynchronous federated optimization

C Xie, S Koyejo, I Gupta - arXiv preprint arXiv:1903.03934, 2019 - arxiv.org
Federated learning enables training on a massive number of edge devices. To improve
flexibility and scalability, we propose a new asynchronous federated optimization algorithm.
We prove that the proposed approach has near-linear convergence to a global optimum, for
both strongly convex and a restricted family of non-convex problems. Empirical results show
that the proposed algorithm converges quickly and tolerates staleness in various
applications.
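
The abstract describes a server that folds in client updates asynchronously rather than waiting for a synchronization round. Below is a minimal, illustrative sketch of that idea, assuming the server mixes each arriving client model into the global model as x_{t+1} = (1 - a_t) x_t + a_t x_new, with the mixing weight a_t discounted by the update's staleness (a weighting scheme the paper analyzes in several variants). All class and function names here are hypothetical, not taken from the paper.

```python
import copy


def staleness_weight(alpha, staleness, a=0.5):
    """Polynomial staleness discount (illustrative choice): the mixing
    weight decays as the client's update grows more stale."""
    return alpha * (staleness + 1) ** (-a)


class AsyncFedServer:
    """Toy asynchronous federated server (names are hypothetical).

    The server never blocks on a round: each client update is merged
    into the global model as soon as it arrives, weighted down by how
    stale the client's starting snapshot was.
    """

    def __init__(self, global_model, alpha=0.6):
        self.global_model = dict(global_model)  # parameter name -> value
        self.t = 0           # server timestamp, advanced on every merge
        self.alpha = alpha   # base mixing weight

    def snapshot(self):
        """Give a client the current model plus the timestamp it was taken at."""
        return copy.deepcopy(self.global_model), self.t

    def merge(self, client_model, client_timestamp):
        """Mix a (possibly stale) client model into the global model."""
        staleness = self.t - client_timestamp
        a_t = staleness_weight(self.alpha, staleness)
        for name, value in self.global_model.items():
            self.global_model[name] = (1 - a_t) * value + a_t * client_model[name]
        self.t += 1


# Usage: two clients pull the same snapshot; the second merges late,
# so its update counts as stale and receives a smaller weight.
server = AsyncFedServer({"w": 0.0})
m1, t1 = server.snapshot()
m2, t2 = server.snapshot()
m1["w"] = 1.0
server.merge(m1, t1)   # fresh update: staleness 0, full base weight
m2["w"] = -1.0
server.merge(m2, t2)   # stale by one step: discounted weight
print(server.global_model)
```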