A Sharp Convergence Rate Analysis for Distributed Accelerated Gradient Methods. (arXiv:1810.01053v1 [math.OC])
Source: arXiv
In this paper, we study the computation and communication costs in decentralized distributed optimization and give a sharp complexity analysis for the proposed distributed accelerated gradient methods. We present two algorithms based on the framework of the accelerated penalty method with increasing penalty parameters. Our first algorithm achieves $O\left(\left(\epsilon\sqrt{1-\sigma_2(W)}\right)^{-1}\right)$ complexities for both computation and communication, matching the communication complexity lower bound for non-smooth distributed optimization, where $\sigma_2(W)$ denotes the second largest singular value of the weight matrix $W$ associated with the network. Our second algorithm employs a double-loop structure and attains the near-optimal $O\left(\sqrt{L/\left(\epsilon(1-\sigma_2(W))\right)}\log\epsilon^{-1}\right)$ communication complexity and the optimal $O\left(\sqrt{L/\epsilon}\right)$ computation complexity for $L$-smooth distributed optimization.
When the problem is $\mu$-strongly convex ... Read the full abstract >>
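As a rough illustration only (the paper's actual accelerated penalty methods are not reproduced here), the sketch below shows where a doubly stochastic weight matrix $W$ and its spectral gap $1-\sigma_2(W)$ enter a generic decentralized accelerated iteration. The ring topology, quadratic local objectives, step size, and momentum schedule are all illustrative assumptions.

```python
import numpy as np

# Hypothetical sketch, not the paper's algorithms: n agents on a ring, each
# holding a local objective f_i(x) = 0.5 * (x - b_i)^2, run a Nesterov-style
# accelerated step combined with one round of gossip averaging per iteration.
# W is a doubly stochastic weight matrix; 1 - sigma_2(W) is the spectral gap
# that governs how fast disagreement between agents shrinks.

n, T, alpha = 8, 200, 0.1                 # agents, iterations, step size (assumed)
rng = np.random.default_rng(0)
b = rng.standard_normal(n)                # local minimizers; the global one is b.mean()

W = np.zeros((n, n))                      # lazy gossip matrix for a ring graph
for i in range(n):
    W[i, i] = 0.5
    W[i, (i + 1) % n] = 0.25
    W[i, (i - 1) % n] = 0.25

x = np.zeros(n)
x_prev = np.zeros(n)
for k in range(T):
    beta = k / (k + 3)                    # Nesterov momentum weight
    y = x + beta * (x - x_prev)           # extrapolation
    grad = y - b                          # gradient of 0.5 * (y_i - b_i)^2
    x_prev, x = x, W @ y - alpha * grad   # one communication round + local step

sigma2 = np.linalg.svd(W, compute_uv=False)[1]
print("spectral gap 1 - sigma_2(W):", 1 - sigma2)
print("distance to optimum:", abs(x.mean() - b.mean()))
print("consensus error:", x.std())
```

With a fixed step size, the gossip term acts like a fixed consensus penalty, so the agents retain a small residual disagreement; this is the kind of bias that the increasing penalty parameters mentioned in the abstract are designed to remove.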