Exponential Convergence and Stability of Howard's Policy Improvement Algorithm for Controlled Diffusions. (arXiv:1812.07846v1 [math.OC])
Source: arXiv
Optimal control problems are inherently hard to solve, as the optimization must be performed simultaneously with updating the underlying system. Starting from an initial guess, Howard's policy improvement algorithm separates the step of updating the trajectory of the dynamical system from the optimization, and iterating this should converge to the optimal control. In the discrete space-time setting this is often the case, and even rates of convergence are known. In the continuous space-time setting of controlled diffusions, the algorithm consists of solving a linear PDE followed by a maximization problem. This has been shown to converge in some situations; however, no global rate of convergence is known. The first main contribution of this paper is to establish a global rate of convergence for the policy improvement algorithm and a variant, called here the gradient iteration algorithm. The second main contribution is the proof of stability of the algorithms under perturbations to both the accuracy of t […]
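To make the two-step structure concrete, here is a minimal sketch of Howard's policy improvement in the discrete setting the abstract mentions, where the "linear PDE" step becomes solving a linear system for the value of the current policy and the improvement step is a pointwise maximization. The toy MDP, function name, and parameters below are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def policy_iteration(P, r, gamma=0.9, max_iter=100):
    """Howard's policy improvement on a finite MDP.

    P: (A, S, S) transition probabilities; r: (A, S) rewards.
    Each iteration solves a linear system for the value of the
    current policy (the discrete analogue of the linear PDE step),
    then improves the policy by a pointwise maximization.
    """
    A, S, _ = P.shape
    policy = np.zeros(S, dtype=int)  # initial guess: action 0 everywhere
    v = np.zeros(S)
    for _ in range(max_iter):
        # Policy evaluation: solve (I - gamma * P_pi) v = r_pi
        P_pi = P[policy, np.arange(S)]   # (S, S) transitions under policy
        r_pi = r[policy, np.arange(S)]   # (S,) rewards under policy
        v = np.linalg.solve(np.eye(S) - gamma * P_pi, r_pi)
        # Policy improvement: greedy maximization over actions
        q = r + gamma * P @ v            # (A, S) action values
        new_policy = q.argmax(axis=0)
        if np.array_equal(new_policy, policy):
            break                        # fixed point: optimal policy
        policy = new_policy
    return policy, v

# A two-state, two-action example: action 0 stays put, action 1 swaps
# states; only staying in state 1 pays a reward of 1.
P = np.array([
    [[1.0, 0.0], [0.0, 1.0]],   # action 0: stay
    [[0.0, 1.0], [1.0, 0.0]],   # action 1: swap
])
r = np.array([
    [0.0, 1.0],                 # action 0 rewards in states 0, 1
    [0.0, 0.0],                 # action 1 rewards in states 0, 1
])
pol, v = policy_iteration(P, r, gamma=0.9)
# pol is [1, 0]: move to state 1, then stay; v is [9.0, 10.0].
```

In this finite setting the iteration terminates in finitely many steps; the paper's contribution concerns the analogous rate question for controlled diffusions, where the evaluation step is a linear PDE rather than a linear system.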