Frank-Wolfe Method is Automatically Adaptive to Error Bound Condition. (arXiv:1810.04765v1 [math.OC])
Source: arXiv
The error bound condition has recently attracted renewed interest in optimization. It has been leveraged to derive faster convergence rates for many popular algorithms, including subgradient methods, the proximal gradient method, and the accelerated proximal gradient method. However, it has remained unclear whether the Frank-Wolfe (FW) method can also enjoy faster convergence under an error bound condition. In this short note, we give an affirmative answer to this question. We show that the FW method (with a line search for the step size) for optimization over a strongly convex set automatically adapts to the error bound condition of the problem. In particular, the iteration complexity of FW can be characterized by $O(\max(1/\epsilon^{1-\theta}, \log(1/\epsilon)))$, where $\theta\in[0,1]$ is a constant that characterizes the error bound condition (so $\theta=1$ yields the linear rate $O(\log(1/\epsilon))$, while $\theta=0$ recovers the standard $O(1/\epsilon)$ rate). Our results imply that if the constraint set is characterized by a strongly convex function and the objective function can achieve a smaller value outside the considered ...