A Variance Reduction Method for Non-Convex Optimization with Improved Convergence under Large Condition Number. (arXiv:1809.06754v1 [math.OC])
Source: arXiv
In this paper, we propose a new SVRG-style accelerated stochastic algorithm
for solving a family of non-convex optimization problems whose objective
consists of a sum of $n$ smooth functions and a non-smooth convex function. Our
major goal is to improve the convergence of SVRG-style stochastic algorithms to
stationary points under a setting with a large condition number $c$, the ratio
between the smoothness constant and the negative curvature constant. The
proposed algorithm achieves the best known gradient complexity when $c\geq
\Omega(n)$, which was previously achieved only by a SAGA-style accelerated
stochastic algorithm. Compared with the SAGA-style accelerated stochastic
algorithm, the proposed algorithm is more practical due to its low memory cost,
inherited from previous SVRG-style algorithms. Compared with previous
studies on SVRG-style stochastic algorithms, our theory provides much stronger
results in terms of (i) reduced gradient complexity under a large condition
number;
Read full text >>
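For concreteness, the problem class in question is typically written as $\min_{x} F(x) := \frac{1}{n}\sum_{i=1}^{n} f_i(x) + g(x)$, where each $f_i$ is smooth but possibly non-convex and $g$ is convex but possibly non-smooth, with $c$ the ratio of the smoothness constant to the negative-curvature constant. The sketch below shows the generic proximal SVRG update for this problem class, included only to illustrate the low memory cost the abstract mentions; it is not the paper's accelerated algorithm, and the quadratic components f_i, the $\ell_1$ choice of g, and all constants are illustrative assumptions.

import numpy as np

# Minimal prox-SVRG sketch for: minimize (1/n) * sum_i f_i(x) + g(x).
# Illustrative assumptions: f_i(x) = 0.5 * (a_i . x - b_i)^2 and g = lam * ||x||_1.
rng = np.random.default_rng(0)
n, d = 200, 20                       # assumed problem sizes
A = rng.standard_normal((n, d))
b = rng.standard_normal(n)
lam, eta = 0.1, 0.01                 # assumed l1 weight and step size

def grad_i(x, i):
    # gradient of the i-th smooth component f_i
    return (A[i] @ x - b[i]) * A[i]

def prox_g(x, t):
    # proximal operator of t * lam * ||x||_1 (soft-thresholding)
    return np.sign(x) * np.maximum(np.abs(x) - t * lam, 0.0)

x_ref = np.zeros(d)                  # snapshot point \tilde{x}
for epoch in range(30):
    # one full pass to compute the gradient at the snapshot
    mu = (A.T @ (A @ x_ref - b)) / n
    x = x_ref.copy()
    for _ in range(n):               # inner loop of cheap stochastic steps
        i = rng.integers(n)
        # variance-reduced estimator: unbiased, with variance shrinking
        # as x approaches the snapshot x_ref
        v = grad_i(x, i) - grad_i(x_ref, i) + mu
        x = prox_g(x - eta * v, eta)
    x_ref = x                        # refresh the snapshot

Note the memory footprint this illustrates: an SVRG-style method keeps only the snapshot x_ref and the full gradient mu, two $d$-dimensional vectors, whereas a SAGA-style method maintains a table of $n$ component gradients, i.e. $O(nd)$ storage.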