Convergence of Variance-Reduced Stochastic Learning under Random Reshuffling. (arXiv:1708.01383v1 [cs.LG])
Source: arXiv
Several useful variance-reduced stochastic gradient algorithms, such as SVRG,
SAGA, Finito, and SAG, have been proposed to minimize empirical risks with
linear convergence properties to the exact minimizers. The existing convergence
results assume uniform data sampling with replacement. However, it has been
observed that random reshuffling can deliver superior performance. No formal
proofs or guarantees of exact convergence exist for variance-reduced algorithms
under random reshuffling. This paper resolves this open convergence issue and
provides the first theoretical guarantee of linear convergence under random
reshuffling for SAGA; the argument is also adaptable to other variance-reduced
algorithms. Under random reshuffling, the paper further proposes a new
amortized variance-reduced gradient (AVRG) algorithm with constant storage
requirements compared to SAGA and with balanced gradient computations compared
to SVRG. The balance in computation is attained by amortizing the full-gradient calculation over the iterations. Read the full text >>
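
For concreteness, here is a minimal sketch of SAGA run with random reshuffling (one fresh permutation per epoch, each sample visited exactly once) instead of uniform sampling with replacement, on a toy least-squares problem. The loss, step size, data sizes, and the function name saga_random_reshuffling are illustrative choices, not taken from the paper.

```python
import numpy as np

def saga_random_reshuffling(A, b, step=0.01, epochs=50, seed=0):
    """Minimize (1/2N) * sum_i (a_i . w - b_i)^2 with SAGA, visiting samples by reshuffling."""
    rng = np.random.default_rng(seed)
    N, d = A.shape
    w = np.zeros(d)
    grad_table = np.zeros((N, d))   # SAGA's per-sample stored gradients (O(N d) memory)
    grad_avg = np.zeros(d)          # running average of the stored gradients

    for _ in range(epochs):
        # Random reshuffling: each epoch visits every sample exactly once
        # in a fresh random order, instead of uniform sampling with replacement.
        for i in rng.permutation(N):
            g_new = (A[i] @ w - b[i]) * A[i]                    # gradient of sample i at w
            w = w - step * (g_new - grad_table[i] + grad_avg)   # variance-reduced update
            grad_avg += (g_new - grad_table[i]) / N             # keep the average consistent
            grad_table[i] = g_new                               # refresh the stored gradient
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    A = rng.standard_normal((200, 5))
    w_star = rng.standard_normal(5)
    b = A @ w_star
    print(np.linalg.norm(saga_random_reshuffling(A, b) - w_star))  # shrinks toward zero
```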
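The abstract describes AVRG only at a high level: constant storage relative to SAGA and balanced gradient computations relative to SVRG, obtained by amortizing the full-gradient calculation. The sketch below is one plausible reading of that description, in which the running average of per-sample gradients collected during the previous epoch replaces SVRG's dedicated full-gradient pass; the recursion, names, and parameters are illustrative assumptions and should not be taken as the paper's exact algorithm.

```python
import numpy as np

def avrg_sketch(A, b, step=0.01, epochs=50, seed=0):
    """AVRG-style passes over a toy least-squares problem (illustrative reading only)."""
    rng = np.random.default_rng(seed)
    N, d = A.shape
    w = np.zeros(d)
    g_prev = np.zeros(d)            # amortized full-gradient estimate from the last epoch

    def grad(i, x):                 # gradient of the i-th sample's loss 0.5*(a_i . x - b_i)^2
        return (A[i] @ x - b[i]) * A[i]

    for _ in range(epochs):
        w_anchor = w.copy()         # epoch starting point: only O(d) extra storage
        g_next = np.zeros(d)        # this epoch's running average, reused in the next epoch
        for i in rng.permutation(N):    # random reshuffling: one full pass, no replacement
            g_i = grad(i, w)
            # two gradient evaluations per step, so the cost is the same at every iteration
            w = w - step * (g_i - grad(i, w_anchor) + g_prev)
            g_next += g_i / N
        g_prev = g_next             # with zero initialization, the first epoch only builds
                                    # the initial gradient average before real progress starts
    return w
```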