Global convergence rates of augmented Lagrangian methods for constrained convex programming. (arXiv:1711.05812v1 [math.OC])
Source: arXiv
The augmented Lagrangian method (ALM) is widely used for solving
constrained optimization problems. Its convergence and local convergence speed
have been extensively studied. However, its global convergence rate is still
open for problems with nonlinear inequality constraints. In this paper, we work
on general constrained convex programs. For these problems, we establish the
global convergence rate of ALM and its inexact variants.
We first assume an exact solution to each subproblem in the ALM framework and
establish an $O(1/k)$ ergodic convergence result, where $k$ is the number of
iterations. Then we analyze an inexact ALM that approximately solves the
subproblems. Assuming summable errors, we prove that the inexact ALM also
enjoys $O(1/k)$ convergence if smaller stepsizes are used in the multiplier
updates. Furthermore, we apply the inexact ALM to a constrained composite
convex problem with each subproblem solved by Nesterov's optimal first-order
method. We show that $O(\varepsilo…
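The ALM framework the abstract describes can be illustrated on a toy problem. Below is a minimal sketch (not the paper's algorithm): the classical augmented Lagrangian iteration for a single smooth inequality constraint g(x) <= 0, with each subproblem solved only approximately by plain gradient descent, mirroring the "inexact ALM" setting. All names (`alm_inequality`, the toy objective, step sizes) are illustrative choices, not from the paper.

```python
import numpy as np

def alm_inequality(f_grad, g, g_grad, x0, rho=10.0, outer=50, inner=200, lr=0.01):
    """Classical ALM sketch for min f(x) s.t. g(x) <= 0 (one scalar constraint).

    Each subproblem (minimizing the augmented Lagrangian in x) is solved only
    approximately by `inner` gradient steps, i.e. an inexact ALM.
    """
    x, lam = x0.astype(float), 0.0
    for _ in range(outer):
        # inner loop: approximately minimize L_rho(x, lam) over x
        for _ in range(inner):
            s = max(0.0, lam + rho * g(x))            # active part of the penalty
            x = x - lr * (f_grad(x) + s * g_grad(x))  # gradient of the aug. Lagrangian
        # multiplier update, projected onto the nonnegative orthant
        lam = max(0.0, lam + rho * g(x))
    return x, lam

# toy convex problem: min (x1-2)^2 + (x2-2)^2  s.t.  x1 + x2 <= 2
# KKT optimum: x* = (1, 1), multiplier lam* = 2
f_grad = lambda x: 2.0 * (x - 2.0)
g      = lambda x: x[0] + x[1] - 2.0
g_grad = lambda x: np.array([1.0, 1.0])

x_star, lam_star = alm_inequality(f_grad, g, g_grad, np.zeros(2))
```

The multiplier update `lam <- max(0, lam + rho * g(x))` is the standard step whose step size the abstract says must be reduced (relative to `rho`) for the inexact variant to keep the $O(1/k)$ ergodic rate.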