Harnessing Smoothness to Accelerate Distributed Optimization. (arXiv:1605.07112v2 [math.OC] UPDATED)
Source: arXiv
There has been a growing effort in studying the distributed optimization
problem over a network. The objective is to optimize a global function formed
by a sum of local functions, using only local computation and communication.
Literature has developed consensus-based distributed (sub)gradient descent
(DGD) methods and has shown that they have the same convergence rate
$O(\frac{\log t}{\sqrt{t}})$ as the centralized (sub)gradient methods (CGD)
when the function is convex but possibly nonsmooth. However, when the function
is convex and smooth, under the framework of DGD, it is unclear how to harness
the smoothness to obtain a faster convergence rate comparable to CGD's
convergence rate. In this paper, we propose a distributed algorithm that,
despite using the same amount of communication per iteration as DGD, can
effectively harness the function smoothness and converge to the optimum with
a rate of $O(\frac{1}{t})$. If the objective function is further strongly
convex, our algorithm has a linear convergence rate.
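
As a rough illustration of the setup described in the abstract (not the paper's exact algorithm), the following Python sketch compares plain consensus-based distributed gradient descent (DGD) with a generic gradient-tracking variant on a toy quadratic problem. The ring network, mixing matrix W, step size eta, and local functions f_i(x) = 0.5*(x - b_i)^2 are assumptions made only for this example.

    # Hypothetical sketch: DGD vs. a gradient-tracking variant on a toy problem.
    # All problem data and parameters below are illustrative assumptions.
    import numpy as np

    np.random.seed(0)
    n = 5                          # number of agents in the network
    b = np.random.randn(n)         # local data: f_i(x) = 0.5 * (x - b_i)^2
    x_star = b.mean()              # minimizer of the global average objective

    # Doubly stochastic mixing matrix for a ring network (uniform weights).
    W = np.zeros((n, n))
    for i in range(n):
        W[i, (i - 1) % n] = 1.0 / 3.0
        W[i, (i + 1) % n] = 1.0 / 3.0
        W[i, i] = 1.0 / 3.0

    grad = lambda x: x - b         # element-wise local gradients
    eta = 0.1                      # constant step size

    x_dgd = np.zeros(n)
    x_gt = np.zeros(n)
    y = grad(x_gt)                 # gradient tracker, initialized to local gradients

    for k in range(200):
        # Plain DGD: one round of consensus plus a local gradient step.
        x_dgd = W @ x_dgd - eta * grad(x_dgd)
        # Gradient tracking: same one round of communication per iteration,
        # but y uses history information to track the average gradient.
        x_new = W @ x_gt - eta * y
        y = W @ y + grad(x_new) - grad(x_gt)
        x_gt = x_new

    print("DGD error:              ", np.abs(x_dgd - x_star).max())
    print("Gradient-tracking error:", np.abs(x_gt - x_star).max())

With a constant step size, the plain DGD iterates settle only in a neighborhood of the optimum, while the gradient-tracking iterates converge to it, which is the kind of gap between DGD and smoothness-exploiting methods that the abstract refers to.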