
Linear Convergence of Primal-Dual Gradient Methods and their Performance in Distributed Optimization. (arXiv:1904.01196v1 [math.OC])

In this work, we revisit a discrete implementation of the primal-descent dual-ascent gradient method applied to saddle-point optimization problems. Through a short proof, we establish linear (exponential) convergence of the algorithm for strongly convex cost functions with Lipschitz continuous gradients. Unlike previous studies, we provide explicit upper bounds on the step-size parameters for stable behavior and on the resulting convergence rate, and we highlight the importance of using two different primal and dual step-sizes. We also explain the effect of the augmented Lagrangian penalty term on the stability and performance of the algorithm for the distributed minimization of aggregate cost functions over multi-agent networks.
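The method described above can be illustrated with a minimal sketch (not the paper's exact algorithm or step-size bounds): a primal-descent dual-ascent iteration on the augmented Lagrangian of an equality-constrained strongly convex problem, min_x 0.5*||x - c||^2 subject to A x = b, with distinct primal and dual step sizes `alpha` and `beta` and penalty `rho`. All numerical values and problem data here are illustrative assumptions.

```python
import numpy as np

# Illustrative problem data (assumed, not from the paper).
rng = np.random.default_rng(0)
n, m = 6, 2
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)
c = rng.standard_normal(n)

rho = 1.0               # augmented Lagrangian penalty (illustrative value)
alpha, beta = 0.1, 0.1  # separate primal and dual step sizes (illustrative values)

x = np.zeros(n)
lam = np.zeros(m)
for _ in range(5000):
    # Gradient of L(x, lam) = 0.5||x-c||^2 + lam^T (Ax-b) + (rho/2)||Ax-b||^2
    grad_x = (x - c) + A.T @ lam + rho * A.T @ (A @ x - b)
    x = x - alpha * grad_x          # primal descent step
    lam = lam + beta * (A @ x - b)  # dual ascent step on the constraint residual

# Closed-form solution (projection of c onto {x : Ax = b}) for comparison.
x_star = c - A.T @ np.linalg.solve(A @ A.T, A @ c - b)
print(np.linalg.norm(x - x_star))
```

With step sizes chosen small enough for stability, the iterate error shrinks geometrically, which is the linear (exponential) convergence the abstract refers to; the penalty term `rho` reshapes the primal curvature and thus affects the admissible step sizes and the rate.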
