Asynchronous federated optimization
Federated learning enables training on a massive number of edge devices. To improve
flexibility and scalability, we propose a new asynchronous federated optimization algorithm.
We prove that the proposed approach has near-linear convergence to a global optimum, for
both strongly convex and a restricted family of non-convex problems. Empirical results show
that the proposed algorithm converges quickly and tolerates staleness in various
applications.
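For concreteness, the following is a minimal Python sketch of one way such an asynchronous server loop might work, assuming a staleness-weighted mixing rule of the form x ← (1 − α)·x + α·x_local, where the weight α shrinks for staler local models. The abstract does not specify these details, so every name and hyperparameter here (staleness_weight, ALPHA, LOCAL_STEPS, LR, the polynomial decay) is an illustrative assumption, not the paper's actual method.

```python
# Sketch of an asynchronous federated update loop (illustrative only).
# Assumed mixing rule: x <- (1 - alpha) * x + alpha * x_local, where
# alpha decays with the staleness of the received local model.
import numpy as np

ALPHA = 0.6        # base mixing rate (assumed hyperparameter)
LOCAL_STEPS = 10   # local SGD steps per device round (assumed)
LR = 0.1           # local learning rate (assumed)

def staleness_weight(staleness: int, a: float = 0.5) -> float:
    """Polynomial decay: staler local models get smaller mixing weights."""
    return (staleness + 1) ** (-a)

def local_sgd(w0: np.ndarray, X: np.ndarray, y: np.ndarray) -> np.ndarray:
    """A few plain SGD steps on a least-squares objective, standing in
    for whatever model a device actually trains locally."""
    w = w0.copy()
    for _ in range(LOCAL_STEPS):
        grad = X.T @ (X @ w - y) / len(y)
        w -= LR * grad
    return w

def server_update(global_w, global_step, local_w, snapshot_step):
    """Mix a (possibly stale) local model into the global model."""
    alpha = ALPHA * staleness_weight(global_step - snapshot_step)
    return (1.0 - alpha) * global_w + alpha * local_w

# Toy run: four simulated devices finish in random order, so the snapshot
# a device trained on may be several server steps old when it arrives.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true
global_w, step = np.zeros(3), 0
pool = [(0, global_w.copy()) for _ in range(4)]  # (snapshot step, snapshot)
for _ in range(50):
    taken, snap = pool.pop(int(rng.integers(len(pool))))  # a device finishes
    local_w = local_sgd(snap, X, y)
    global_w = server_update(global_w, step, local_w, taken)
    step += 1
    pool.append((step, global_w.copy()))  # device picks up the fresh model
print("final model:", global_w)  # should move toward w_true = [1, -2, 0.5]
```

The staleness-dependent weight is the design choice that lets the server accept updates whenever they arrive: fresh local models move the global model substantially, while arbitrarily late ones are damped so they cannot drag it backward.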