
A decentralized proximal-gradient method with network independent step-sizes and separated convergence rates. (arXiv:1704.07807v1 [math.OC])

Source: arXiv
This paper considers the problem of decentralized optimization with a composite objective containing smooth and non-smooth terms. To solve the problem, a proximal-gradient scheme is studied: the smooth and non-smooth terms are handled by a gradient update and a proximal update, respectively. The studied algorithm is closely related to a previous decentralized optimization algorithm, PG-EXTRA [37], but has a few advantages. First, in the new scheme, agents use uncoordinated step-sizes, and the stable upper bounds on the step-sizes are independent of the network topology. The step-sizes depend only on the local objective functions, and they can be as large as those of gradient descent. Second, for the special case without non-smooth terms, linear convergence can be achieved under the strong convexity assumption. The dependence of the convergence rate on the objective functions and on the network is separated, and the convergence rate of the new scheme is as good as one of the two convergence rates that match the typical rates for general gradient descent and consensus averaging.
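The abstract describes the general decentralized proximal-gradient template: each agent takes a gradient step on its smooth local term using its own step-size, exchanges iterates with its neighbors through a mixing matrix, and then applies the proximal operator of the shared non-smooth term. The sketch below is a minimal illustration of that template, not the paper's exact update (PG-EXTRA and its successor use a corrected iteration with memory of the previous step); the ℓ1 prox, the mixing matrix W, and the per-agent step-sizes alphas are assumptions made for the example.

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def decentralized_prox_grad(x0, grads, W, alphas, lam, iters=500):
    """Illustrative decentralized proximal-gradient iteration.

    x0:     (n, d) array, row i is agent i's local copy of the variable
    grads:  list of n callables; grads[i](x_i) is the gradient of
            agent i's smooth local objective at x_i
    W:      (n, n) doubly stochastic mixing matrix of the network
            (W[i, j] > 0 only if agents i and j are neighbors)
    alphas: (n,) uncoordinated per-agent step-sizes, chosen from the
            local objectives rather than from the network
    lam:    weight of the shared non-smooth term lam * ||.||_1
    """
    x = x0.copy()
    n = x.shape[0]
    for _ in range(iters):
        # 1) local gradient step on the smooth term, per-agent step-size
        g = np.stack([grads[i](x[i]) for i in range(n)])
        y = x - alphas[:, None] * g
        # 2) communication: average with neighbors via the mixing matrix
        y = W @ y
        # 3) proximal step on the non-smooth term (here, soft-thresholding)
        x = np.stack([soft_threshold(y[i], alphas[i] * lam)
                      for i in range(n)])
    return x
```

As a usage example under the same assumptions, setting grads[i] = lambda z, A=A_i, b=b_i: A.T @ (A @ z - b) for local data (A_i, b_i) makes this a decentralized LASSO solver, with consensus among the rows of x driven by the mixing step.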