Performance Limits of Stochastic Sub-Gradient Learning, Part II: Multi-Agent Case. (arXiv:1704.06025v1 [math.OC])
Source: arXiv
The analysis in Part I revealed interesting properties of sub-gradient
learning algorithms in the context of stochastic optimization when gradient
noise is present. These algorithms are used when the risk functions are
non-smooth and involve non-differentiable components. They have long been
recognized as slow-converging methods. However, Part I revealed that the
rate of convergence becomes linear for stochastic optimization problems,
with the error iterate converging at an exponential rate $\alpha^i$ to
within an $O(\mu)$-neighborhood of the optimizer, for some $\alpha \in
(0,1)$ and small step-size $\mu$. This conclusion was established under
weaker assumptions than the prior literature and, moreover, several
important problems (such as LASSO, SVM, and Total Variation) were shown to
satisfy these weaker assumptions automatically (but not the previously used
conditions from the literature). These results revealed that sub-gradient
learning methods have more favorable be…
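The behavior the abstract describes can be illustrated with a minimal sketch: stochastic sub-gradient descent with a constant step-size $\mu$ on a scalar LASSO-type risk, which is non-smooth at the origin. Everything below (the data model, the parameter values, the variable names) is an assumption chosen for demonstration, not the paper's actual setup; it only shows the qualitative pattern of fast initial decay followed by fluctuation inside a small neighborhood of the optimizer.

```python
import numpy as np

# Assumed toy problem: minimize E[(a*w - b)^2]/2 + lam*|w| over scalar w,
# with streaming samples a ~ N(0,1) and b = a*w_star + noise.
# For this model the regularized minimizer is w_opt = w_star - lam.
rng = np.random.default_rng(0)
w_star = 0.7           # assumed underlying model coefficient
lam, mu = 0.1, 0.01    # regularization weight and constant step-size
w_opt = w_star - lam   # minimizer of the regularized risk
w = 5.0                # initialization far from the optimum

errors = []
for i in range(20000):
    a = rng.normal()                       # one streaming data sample
    b = a * w_star + 0.1 * rng.normal()
    # sub-gradient of the instantaneous (noisy) loss at the current w
    g = a * (a * w - b) + lam * np.sign(w)
    w -= mu * g                            # constant step-size update
    errors.append(abs(w - w_opt))

# Early iterates shrink roughly geometrically (the "linear rate"),
# then hover in a small neighborhood of w_opt whose size scales with mu.
print("initial error:", errors[0])
print("steady-state error (avg of last 5000):", np.mean(errors[-5000:]))
```

Shrinking `mu` tightens the final neighborhood but slows the initial geometric phase, which is the trade-off the constant step-size analysis quantifies.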