Distributed linearized alternating direction method of multipliers for composite convex consensus optimization

NS Aybat, Z Wang, T Lin, S Ma - IEEE Transactions on Automatic Control, 2017 - ieeexplore.ieee.org
Given an undirected graph G = (N, E) of agents N = {1, ..., N} connected with edges in E, we study how to compute an optimal decision on which there is consensus among agents and that minimizes the sum of agent-specific private convex composite functions {Φ_i}_{i∈N}, where Φ_i ≐ ξ_i + f_i belongs to agent-i. Assuming only agents connected by an edge can communicate, we propose a distributed proximal gradient algorithm (DPGA) for consensus optimization over both unweighted and weighted static (undirected) communication networks. In one iteration, each agent-i computes the prox map of ξ_i and the gradient of f_i; this is followed by local communication with neighboring agents. We also study its stochastic gradient variant, SDPGA, which can access only noisy estimates of ∇f_i at each agent-i. This computational model abstracts a number of applications in distributed sensing, machine learning, and statistical inference. We show ergodic convergence in both suboptimality error and consensus violation for DPGA and SDPGA, with rates O(1/t) and O(1/√t), respectively.
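The per-iteration pattern described in the abstract (a local prox map of ξ_i, a local gradient of f_i, then communication with neighbors) can be illustrated with a minimal simulation. The sketch below is a generic distributed proximal-gradient consensus step in the spirit of DPGA, not the paper's exact update: the path graph, mixing matrix W, step size, and the choices ξ_i = λ‖x‖₁ and f_i(x) = ½‖x − b_i‖² are all illustrative assumptions.

```python
import numpy as np

# Illustrative setup (not from the paper): 3 agents on a path graph 1-2-3.
np.random.seed(0)
n_agents, dim = 3, 2

# Smooth part f_i(x) = 0.5 * ||x - b_i||^2, so grad f_i(x) = x - b_i.
b = np.random.randn(n_agents, dim)

# Nonsmooth part xi_i(x) = lam * ||x||_1; its prox map is soft-thresholding.
lam = 0.1
def prox_l1(v, step):
    return np.sign(v) * np.maximum(np.abs(v) - step * lam, 0.0)

# Symmetric, doubly stochastic mixing matrix for the path graph (assumed).
W = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])

x = np.zeros((n_agents, dim))  # row i holds agent-i's local decision
step = 0.5
for t in range(200):
    grad = x - b                   # each agent computes its local gradient
    y = W @ (x - step * grad)      # gradient step + averaging with neighbors
    x = prox_l1(y, step)           # each agent applies the prox of xi_i

# With a constant step size the agents reach approximate (not exact) consensus.
consensus_gap = np.max(np.abs(x - x.mean(axis=0)))
```

Note that this naive diffusion scheme only drives the agents toward approximate agreement under a fixed step size; the paper's DPGA achieves the stated O(1/t) ergodic rate in both suboptimality and consensus violation via its specific update and step-size rules.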