A fast quasi-Newton-type method for large-scale stochastic optimisation. (arXiv:1810.01269v1 [math.OC])
Source: arXiv
In recent years there has been increased interest in stochastic
adaptations of limited-memory quasi-Newton methods, which, compared to pure
gradient-based routines, can improve convergence by incorporating
second-order information. In this work we propose a direct least-squares
approach that is conceptually similar to the limited-memory quasi-Newton
methods but computes the search direction in a slightly different way. This
is achieved in a fast and numerically robust manner by maintaining a
Cholesky factor of low dimension. The approach is combined with a stochastic
line search relying on fulfilment of the Wolfe condition in a backtracking
manner, where the step length is adaptively modified with respect to the
optimisation progress. We support the new algorithm with several theoretical
results guaranteeing its performance, and demonstrate it on real-world
benchmark problems, which show improved results compared with already
established methods.
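To make the line-search ingredient concrete, the following is a minimal sketch of a backtracking line search that accepts a step length satisfying the (weak) Wolfe conditions. This is a standard deterministic version for illustration only; the paper's variant is stochastic and adapts the step length to the optimisation progress, and the function name, constants, and toy problem here are assumptions, not the authors' code.

```python
import numpy as np

def wolfe_backtracking(f, grad, x, p, alpha0=1.0, c1=1e-4, c2=0.9,
                       shrink=0.5, max_iter=50):
    """Backtracking line search enforcing the weak Wolfe conditions.

    Illustrative sketch only -- not the paper's adaptive stochastic
    variant. `p` must be a descent direction (grad(x) @ p < 0).
    """
    fx, gx = f(x), grad(x)
    slope = gx @ p  # directional derivative along p, negative for descent
    alpha = alpha0
    for _ in range(max_iter):
        x_new = x + alpha * p
        # Sufficient-decrease (Armijo) condition
        armijo = f(x_new) <= fx + c1 * alpha * slope
        # Curvature condition
        curvature = grad(x_new) @ p >= c2 * slope
        if armijo and curvature:
            return alpha
        alpha *= shrink
    return alpha

# Toy quadratic f(x) = 0.5 * x^T A x with a steepest-descent direction
A = np.array([[3.0, 0.0], [0.0, 1.0]])
f = lambda x: 0.5 * x @ A @ x
grad = lambda x: A @ x
x = np.array([1.0, 1.0])
p = -grad(x)
alpha = wolfe_backtracking(f, grad, x, p)
```

Note that pure backtracking only shrinks the trial step, so the curvature condition is not guaranteed to hold in general; practical implementations (and, per the abstract, the proposed method) handle this by adapting the initial step length across iterations.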