
Exponential Convergence and Stability of Howard's Policy Improvement Algorithm for Controlled Diffusions. (arXiv:1812.07846v1 [math.OC])

Optimal control problems are inherently hard to solve as the optimization must be performed simultaneously with updating the underlying system. Starting from an initial guess, Howard's policy improvement algorithm separates the step of updating the trajectory of the dynamical system from the optimization, and iterating the two steps should converge to the optimal control. In the discrete space-time setting this is often the case, and even rates of convergence are known. In the continuous space-time setting of controlled diffusions, the algorithm consists of solving a linear PDE followed by a maximization problem. This has been shown to converge in some situations; however, no global rate of convergence is known. The first main contribution of this paper is to establish a global rate of convergence for the policy improvement algorithm and a variant, called here the gradient iteration algorithm. The second main contribution is the proof of stability of the algorithms under perturbations to both the accuracy of t…
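To make the iteration concrete, below is a minimal sketch of the discrete space-time analogue the abstract mentions: policy iteration for a finite-state, finite-action Markov decision process. The policy evaluation step (a linear solve) plays the role of the linear PDE, and the improvement step is the pointwise maximization. The transition tensor P, reward R, and discount gamma are illustrative assumptions, not taken from the paper.

    # Policy iteration sketch for a finite MDP (assumed toy setting,
    # not the paper's controlled-diffusion formulation).
    import numpy as np

    def policy_iteration(P, R, gamma=0.9, max_iter=100):
        """P: (A, S, S) transition probabilities; R: (A, S) rewards."""
        n_actions, n_states, _ = P.shape
        policy = np.zeros(n_states, dtype=int)  # initial guess
        for _ in range(max_iter):
            # Policy evaluation: solve (I - gamma * P_pi) v = r_pi,
            # the discrete analogue of the linear PDE step.
            P_pi = P[policy, np.arange(n_states)]   # (S, S)
            r_pi = R[policy, np.arange(n_states)]   # (S,)
            v = np.linalg.solve(np.eye(n_states) - gamma * P_pi, r_pi)
            # Policy improvement: pointwise maximization over actions.
            q = R + gamma * P @ v                   # (A, S)
            new_policy = q.argmax(axis=0)
            if np.array_equal(new_policy, policy):
                break                               # converged
            policy = new_policy
        return policy, v

    # Toy example: 2 actions, 3 states with random dynamics.
    rng = np.random.default_rng(0)
    P = rng.random((2, 3, 3)); P /= P.sum(axis=2, keepdims=True)
    R = rng.random((2, 3))
    policy, v = policy_iteration(P, R)
    print(policy, v)

In this finite setting the iteration terminates in finitely many steps; the paper's contribution concerns the continuous case, where each evaluation step is a linear PDE solve and quantifying the convergence rate is the hard part.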
