A single potential governing convergence of conjugate gradient, accelerated gradient and geometric descent. (arXiv:1712.09498v1 [math.OC])

Nesterov's accelerated gradient (AG) method for minimizing a smooth strongly convex function $f$ is known to reduce $f({\bf x}_k)-f({\bf x}^*)$ by a factor of $\epsilon\in(0,1)$ after $k=O(\sqrt{L/\ell}\log(1/\epsilon))$ iterations, where $\ell,L$ are the two parameters of smooth strong convexity. Furthermore, it is known that this is the best possible complexity in the function-gradient oracle model of computation. Modulo a line search, the geometric descent (GD) method of Bubeck, Lee and Singh has the same bound for this class of functions. The method of linear conjugate gradients (CG) also satisfies the same complexity bound in the special case of strongly convex quadratic functions, but in this special case it can be faster than the AG and GD methods. Despite similarities in the algorithms and their asymptotic convergence rates, the conventional analysis of the running time of CG is mostly disjoint from that of AG and GD. The analyses of the AG and GD methods are also rather distinct.
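To make the quoted rate concrete, the following is a minimal Python/NumPy sketch of Nesterov's constant-step AG iteration on a strongly convex quadratic, where the gap $f({\bf x}_k)-f({\bf x}^*)$ contracts at roughly the $O(\sqrt{L/\ell}\log(1/\epsilon))$ rate stated above. This is an illustrative sketch, not the paper's unified potential analysis; the function name nesterov_ag, the test matrix, and the iteration count are assumptions made up for the example.

import numpy as np

def nesterov_ag(A, b, x0, num_iters):
    # Sketch of constant-step accelerated gradient for f(x) = 0.5 x^T A x - b^T x,
    # with ell, L taken as the extreme eigenvalues of the SPD matrix A (illustrative only).
    eigvals = np.linalg.eigvalsh(A)
    ell, L = eigvals[0], eigvals[-1]                         # strong convexity / smoothness parameters
    kappa = L / ell                                          # condition number L / ell
    beta = (np.sqrt(kappa) - 1.0) / (np.sqrt(kappa) + 1.0)   # standard momentum coefficient
    x = x0.copy()
    y = x0.copy()
    for _ in range(num_iters):
        grad = A @ y - b                                     # gradient of the quadratic at y
        x_next = y - grad / L                                # gradient step with step size 1/L
        y = x_next + beta * (x_next - x)                     # extrapolation (momentum) step
        x = x_next
    return x

rng = np.random.default_rng(0)
Q = rng.standard_normal((20, 20))
A = Q.T @ Q + 0.1 * np.eye(20)                               # random SPD test matrix
b = rng.standard_normal(20)
x_star = np.linalg.solve(A, b)                               # exact minimizer, used to measure the gap
x_k = nesterov_ag(A, b, np.zeros(20), 200)
print(0.5 * (x_k - x_star) @ A @ (x_k - x_star))             # f(x_k) - f(x*) for this quadratic

On the same quadratic, linear CG reaches the exact minimizer in at most 20 iterations in exact arithmetic, which is one concrete sense in which CG can be faster than AG and GD on this special class, as the abstract notes.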
