
Distributed heavy-ball: A generalization and acceleration of first-order methods with gradient tracking. (arXiv:1808.02942v1 [math.OC])

Source: arXiv
We study distributed optimization to minimize a global objective that is a sum of smooth and strongly-convex local cost functions. Recently, several algorithms over undirected and directed graphs have been proposed that use a gradient tracking method to achieve linear convergence to the global minimizer. However, the connection between these different approaches has remained unclear. In this paper, we first show that many of the existing first-order algorithms are in fact related through a simple state transformation, at the heart of which lies the $\mathcal{AB}$ algorithm. We then describe distributed heavy-ball, denoted as $\mathcal{AB}m$, i.e., $\mathcal{AB}$ with momentum, which combines gradient tracking with a momentum term and uses nonidentical local step-sizes. By simultaneously implementing both row- and column-stochastic weights, $\mathcal{AB}m$ removes the conservatism in the related work due to doubly-stochastic weights or eigenvector estimation. $\mathcal{AB}m$ thus naturally leads to optimization …
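The abstract does not give the update equations, but a standard gradient-tracking form consistent with its description is $x_{k+1} = A x_k - D_\alpha y_k + \beta (x_k - x_{k-1})$ and $y_{k+1} = B y_k + \nabla f(x_{k+1}) - \nabla f(x_k)$, where $A$ is row-stochastic, $B$ is column-stochastic, $D_\alpha$ holds the nonidentical local step-sizes, and $\beta$ is the heavy-ball momentum parameter. The Python sketch below simulates this assumed form on a toy strongly-convex least-squares problem over a small directed graph; the graph, parameter values, and all names are illustrative, not taken from the paper.

import numpy as np

rng = np.random.default_rng(0)
n, p = 5, 3  # n agents, p-dimensional decision variable

# Local strongly-convex quadratics f_i(x) = 0.5 * ||Q_i x - b_i||^2.
Q = [np.eye(p) + 0.3 * rng.normal(size=(p, p)) for _ in range(n)]
b = [rng.normal(size=p) for _ in range(n)]

def grad(i, x):
    # Gradient of the i-th local cost at x.
    return Q[i].T @ (Q[i] @ x - b[i])

# Directed ring plus one extra edge, so the graph is genuinely directed.
W = np.eye(n) + np.roll(np.eye(n), 1, axis=0)
W[0, 2] = 1.0
A = W / W.sum(axis=1, keepdims=True)  # row-stochastic weights for the x-update
B = W / W.sum(axis=0, keepdims=True)  # column-stochastic weights for the y-update

alpha = rng.uniform(0.02, 0.04, size=n)  # nonidentical local step-sizes
beta = 0.3                               # heavy-ball momentum (assumed value)

x = rng.normal(size=(n, p))                      # local estimates
y = np.array([grad(i, x[i]) for i in range(n)])  # gradient trackers
x_prev = x.copy()

for _ in range(3000):
    # x-update: consensus + descent along the tracked gradient + momentum.
    x_new = A @ x - alpha[:, None] * y + beta * (x - x_prev)
    # y-update: track the average gradient via column-stochastic mixing.
    y = B @ y + np.array([grad(i, x_new[i]) - grad(i, x[i]) for i in range(n)])
    x_prev, x = x, x_new

# Every agent should approach the minimizer of the global sum of local costs.
x_star = np.linalg.solve(sum(Qi.T @ Qi for Qi in Q),
                         sum(Qi.T @ bi for Qi, bi in zip(Q, b)))
print("max agent error:", np.abs(x - x_star).max())

One design point the abstract highlights: row-stochastic weights can be set locally by each receiving agent, while column-stochastic weights only require each sender to scale its outgoing messages, so using one matrix of each kind avoids constructing doubly-stochastic weights on a directed graph.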
