Multiplicative Weights Updates as a distributed constrained optimization algorithm: Convergence to second-order stationary points almost always. (arXiv:1810.05355v1 [math.OC])

Source: arXiv
Non-concave maximization has been the subject of much recent study in the optimization and machine learning communities, specifically in deep learning. Recent papers ((Ge \etal 2015, Lee \etal 2017) and references therein) indicate that first-order methods work well and avoid saddle points. Results such as those in (Lee \etal 2017), however, are limited to the \textit{unconstrained} case, or to cases where the critical points lie in the interior of the feasibility set, which fails to capture some of the most interesting applications. In this paper we focus on \textit{constrained} non-concave maximization. We analyze a variant of a well-established algorithm in machine learning called Multiplicative Weights Update (MWU) for the maximization problem $\max_{\mathbf{x} \in D} P(\mathbf{x})$, where $P$ is non-concave and twice continuously differentiable and $D$ is a product of simplices. We show that, for small enough stepsizes, MWU converges almost always to critical points that satisfy the second-order necessary conditions, i.e., to second-order stationary points.
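To make the setting concrete, here is a minimal sketch of a linear-variant multiplicative weights update on a single simplex, applied as ascent on a toy non-concave objective. The stepsize `eps`, the iteration count, and the objective are illustrative assumptions, not the paper's exact experimental setup; the update rule $x_i \leftarrow x_i(1 + \epsilon\, \partial P/\partial x_i)$ followed by renormalization is the standard MWU form.

```python
# Sketch of a linear-variant MWU step on the probability simplex.
# All names (mwu_step, maximize) and parameters are illustrative.

def mwu_step(x, grad, eps):
    # Multiplicative update: x_i <- x_i * (1 + eps * grad_i), then renormalize
    # so the iterate stays on the simplex.
    w = [xi * (1.0 + eps * gi) for xi, gi in zip(x, grad)]
    s = sum(w)
    return [wi / s for wi in w]

def maximize(grad_fn, n, eps=0.01, iters=2000):
    x = [1.0 / n] * n  # start at the barycenter of the simplex
    for _ in range(iters):
        x = mwu_step(x, grad_fn(x), eps)
    return x

# Toy non-concave objective on the simplex: P(x) = sum_i c_i * x_i^2,
# which is convex, hence its maximizers sit at vertices of the simplex.
c = [1.0, 2.0, 0.5]
grad = lambda x: [2.0 * ci * xi for ci, xi in zip(c, x)]
x_star = maximize(grad, 3)
```

For this objective, the iterate concentrates its mass on the coordinate with the largest $c_i$ (a vertex of the simplex), illustrating convergence to a boundary critical point, which is exactly the regime that interior-point analyses such as (Lee \etal 2017) do not cover.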