Exact worst-case convergence rates of the proximal gradient method for composite convex minimization. (arXiv:1705.04398v2 [math.OC] UPDATED)
Source: arXiv
We study the worst-case convergence rates of the proximal gradient method for
minimizing the sum of a smooth strongly convex function and a non-smooth convex
function whose proximal operator is available.
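For context, the standard iteration in this composite setting takes a gradient step on the smooth part F and then applies the proximal operator of the non-smooth part h (the notation below is ours, not quoted from the paper; γ denotes the step size):

```latex
% Proximal gradient step for  min_x F(x) + h(x),
% F smooth and strongly convex, h convex with computable prox.
\[
  x_{k+1} = \operatorname{prox}_{\gamma h}\!\bigl(x_k - \gamma \nabla F(x_k)\bigr),
  \qquad
  \operatorname{prox}_{\gamma h}(y)
    = \operatorname*{arg\,min}_{x}\Bigl\{\, h(x) + \tfrac{1}{2\gamma}\,\|x - y\|^{2} \,\Bigr\}.
\]
```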
We establish the exact worst-case convergence rates of the proximal gradient
method in this setting for any step size and for different standard performance
measures: objective function accuracy, distance to optimality and residual
gradient norm.
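A minimal numerical sketch of these three measures (our construction, not from the paper): proximal gradient on a toy quadratic-plus-L1 problem, where the prox is componentwise soft-thresholding. The problem data, the `soft_threshold` helper, the step size choice, and the use of the gradient mapping as the "residual gradient" are all illustrative assumptions.

```python
import numpy as np

# Toy composite problem:  min_x  F(x) + h(x)  with
#   F(x) = 0.5 * x^T A x - b^T x   (smooth, strongly convex for A > 0)
#   h(x) = lam * ||x||_1           (non-smooth; prox = soft-thresholding)

def soft_threshold(y, t):
    """Prox of t*||.||_1 at y (componentwise soft-thresholding)."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def proximal_gradient(A, b, lam, gamma, x0, iters):
    x = x0.copy()
    for _ in range(iters):
        grad = A @ x - b                         # gradient of the smooth part F
        x = soft_threshold(x - gamma * grad, gamma * lam)
    return x

rng = np.random.default_rng(0)
n = 20
M = rng.standard_normal((n, n))
A = M @ M.T + np.eye(n)                          # positive definite => F strongly convex
b = rng.standard_normal(n)
lam = 0.1
L = np.linalg.eigvalsh(A).max()                  # smoothness constant of F
mu = np.linalg.eigvalsh(A).min()                 # strong convexity constant of F
gamma = 2.0 / (L + mu)                           # a common fixed step size choice

x0 = np.zeros(n)
x = proximal_gradient(A, b, lam, gamma, x0, iters=500)
x_star = proximal_gradient(A, b, lam, gamma, x0, iters=20000)  # reference solution

def objective(x):
    return 0.5 * x @ A @ x - b @ x + lam * np.abs(x).sum()

# The three performance measures discussed in the abstract (our instrumentation):
obj_gap = objective(x) - objective(x_star)       # objective function accuracy
dist = np.linalg.norm(x - x_star)                # distance to optimality
# gradient mapping: one common notion of residual gradient for composite problems
g_map = (x - soft_threshold(x - gamma * (A @ x - b), gamma * lam)) / gamma
print(obj_gap, dist, np.linalg.norm(g_map))
```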
The proof methodology relies on recent developments in performance estimation
of first-order methods based on semidefinite programming. In the case of the
proximal gradient method, this methodology allows obtaining exact and
non-asymptotic worst-case guarantees that are conceptually very simple,
although apparently new.
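The performance estimation idea, very roughly (our paraphrase of the general framework, not the paper's exact formulation): the worst case over a function class is itself posed as an optimization problem over functions and iterates, which interpolation conditions reduce to a finite-dimensional semidefinite program.

```latex
% Schematic performance estimation problem (PEP) for N steps of a method M
% over the class F_{mu,L}; our paraphrase and notation.
\[
  w(N) \;=\; \sup_{f \in \mathcal{F}_{\mu,L},\; x_0}
  \Bigl\{\, \mathrm{perf}(x_N) \;:\;
    x_{k+1} = M(x_k, f),\ \ \|x_0 - x_\star\| \le R \,\Bigr\}.
\]
% Replacing f by its values and (sub)gradients at the iterates, and imposing
% necessary-and-sufficient interpolation conditions on those data, turns this
% infinite-dimensional problem into a finite SDP whose optimal value equals
% the exact worst case.
```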
On the way, we discuss how strong convexity can be replaced by weaker
assumptions, while preserving the corresponding convergence rates. We also
establish that the same fixed step size policy is optimal for all three
performance measures.
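For reference, the classical contraction factor in this regime, stated here as the known exact gradient-descent rate on L-smooth, μ-strongly convex functions, which the abstract's proximal-gradient guarantees match in form (this is our addition, not a quote from the paper):

```latex
\[
  \|x_k - x_\star\| \;\le\; \max\bigl(|1-\gamma\mu|,\, |1-\gamma L|\bigr)^{k}\,\|x_0 - x_\star\|,
  \qquad
  \gamma^{\star} = \frac{2}{L+\mu}
  \ \Longrightarrow\
  \rho = \frac{L-\mu}{L+\mu}.
\]
```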