Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization. (arXiv:1904.01196v1 [math.OC])
Source: arXiv
In this work, we revisit a discrete implementation of the primal-descent
dual-ascent gradient method applied to saddle-point optimization problems.
Through a short proof, we establish linear (exponential) convergence of the
algorithm for strongly-convex cost functions with Lipschitz continuous
gradients. Unlike previous studies, we provide explicit upper bounds on the
step-size parameters for stable behavior and on the resulting convergence rate
and highlight the importance of having two different primal and dual
step-sizes. We also explain the effect of the augmented Lagrangian penalty term
on the algorithm's stability and performance for the distributed minimization of
aggregate cost functions over multi-agent networks.
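The method described above can be sketched on a toy problem. The following is a minimal illustration (not the paper's exact algorithm or analysis, and the step-size values are assumptions, not the paper's explicit bounds): gradient descent on the primal variable and gradient ascent on the dual variable of the Lagrangian L(x, λ) = 0.5‖x‖² + λ(aᵀx − b), where f(x) = 0.5‖x‖² is strongly convex with Lipschitz continuous gradient, using two different primal and dual step-sizes as the abstract emphasizes.

```python
def primal_dual(a, b, alpha=0.3, beta=0.3, iters=500):
    """Primal-descent dual-ascent iterations for
    min_x 0.5*||x||^2  subject to  a.x = b,
    via the saddle point of L(x, lam) = 0.5*||x||^2 + lam*(a.x - b).
    alpha: primal step-size, beta: dual step-size (chosen here by hand;
    the paper derives explicit stability bounds for such parameters)."""
    x = [0.0] * len(a)   # primal iterate
    lam = 0.0            # dual iterate (single equality constraint)
    for _ in range(iters):
        # grad_x L = x + lam * a  -> gradient descent on the primal
        x_new = [xi - alpha * (xi + lam * ai) for xi, ai in zip(x, a)]
        # grad_lam L = a.x - b    -> gradient ascent on the dual
        lam += beta * (sum(ai * xi for ai, xi in zip(a, x)) - b)
        x = x_new
    return x, lam

# Minimize 0.5*||x||^2 subject to x1 + x2 = 1; the optimum is x* = (0.5, 0.5).
x, lam = primal_dual(a=[1.0, 1.0], b=1.0)
```

For this quadratic saddle problem the error contracts geometrically (here the iteration matrix has spectral radius about 0.94 per step), which is the linear (exponential) convergence behavior the abstract establishes for strongly convex costs.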