Finite Model Approximations for Partially Observed Markov Decision Processes with Discounted Cost. (arXiv:1710.07009v1 [cs.SY])
Source: arXiv
We consider finite model approximations of discrete-time partially observed
Markov decision processes (POMDPs) under the discounted cost criterion. After
converting the original partially observed stochastic control problem to a
fully observed one on the belief space, the finite models are obtained through
the uniform quantization of the state and action spaces of the belief space
Markov decision process (MDP). Under mild assumptions on the components of the
original model, it is established that the policies obtained from these finite
models are nearly optimal for the belief space MDP, and so, for the original
partially observed problem. The assumptions essentially require that the belief
space MDP satisfies a mild weak continuity condition. We provide examples and
introduce explicit approximation procedures for the quantization of the set of
probability measures on the state space of the POMDP (i.e., the belief space).
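One concrete way to quantize the set of probability measures on a finite state space is to project each belief vector onto the uniform grid of "type" vectors with denominator n. The sketch below is a hypothetical illustration of this idea, not the paper's specific construction; the function name and rounding rule are my own.

```python
import numpy as np

def quantize_belief(p, n):
    """Project a belief vector p (a probability distribution over a finite
    state space) onto the grid {k/n : k integer-valued, sum(k) = n} on the
    simplex. Illustrative sketch only; the paper's procedure may differ."""
    p = np.asarray(p, dtype=float)
    scaled = n * p
    base = np.floor(scaled).astype(int)
    # Distribute the leftover probability mass (in 1/n chunks) to the
    # coordinates with the largest fractional parts.
    deficit = n - base.sum()
    frac = scaled - base
    order = np.argsort(-frac)
    base[order[:deficit]] += 1
    return base / n

b = np.array([0.62, 0.25, 0.13])
q = quantize_belief(b, 10)
```

Each quantized belief stays within 1/n of the original in every coordinate, so refining the grid (increasing n) shrinks the approximation error, which is the mechanism behind near-optimality of the finite-model policies.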