Reinforcement learning algorithm for partially observable Markov decision problems

T. Jaakkola, S. Singh, M. Jordan - Advances in Neural Information Processing Systems, 1994 - proceedings.neurips.cc
Abstract
Increasing attention has been paid to reinforcement learning algorithms in recent years, partly due to successes in the theoretical analysis of their behavior in Markov environments. If the Markov assumption is removed, however, in general neither the algorithms nor the analyses continue to be usable. We propose and analyze a new learning algorithm to solve a certain class of non-Markov decision problems. Our algorithm applies to problems in which the environment is Markov, but the learner has restricted access to state information. The algorithm involves a Monte-Carlo policy evaluation combined with a policy improvement method that is similar to that of Markov decision problems and is guaranteed to converge to a local maximum. The algorithm operates in the space of stochastic policies, a space which can yield a policy that performs considerably better than any deterministic policy. Although the space of stochastic policies is continuous, even for a discrete action space, our algorithm is computationally tractable.
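The abstract describes the approach only at a high level: Monte-Carlo evaluation of a stochastic, observation-conditioned policy, followed by a policy improvement step that stays within the space of stochastic policies. The sketch below illustrates that general recipe under stated assumptions; it is not the paper's actual algorithm. The toy environment, discount factor, exponentiated update rule, and all names (rollout, obs_of, etc.) are illustrative choices.

```python
import numpy as np

# Minimal sketch of the general recipe in the abstract: a Markov environment
# whose true state is hidden behind aliased observations, a stochastic policy
# over observations, every-visit Monte-Carlo evaluation, and a soft
# improvement step. All specifics here are assumptions for illustration.

rng = np.random.default_rng(0)

n_states, n_obs, n_actions = 4, 2, 2
# Markov environment; the learner only ever sees obs_of[state].
P = rng.dirichlet(np.ones(n_states), size=(n_states, n_actions))  # P[s, a] -> next-state dist
R = rng.normal(size=(n_states, n_actions))                        # reward for (state, action)
obs_of = np.array([0, 0, 1, 1])                                   # perceptual aliasing

policy = np.full((n_obs, n_actions), 1.0 / n_actions)             # stochastic policy

def rollout(policy, horizon=30):
    """Sample one trajectory, recording (observation, action, reward)."""
    s = rng.integers(n_states)
    traj = []
    for _ in range(horizon):
        o = obs_of[s]
        a = rng.choice(n_actions, p=policy[o])
        traj.append((o, a, R[s, a]))
        s = rng.choice(n_states, p=P[s, a])
    return traj

for it in range(200):
    # Monte-Carlo policy evaluation: discounted returns per (obs, action).
    returns = np.zeros((n_obs, n_actions))
    counts = np.zeros((n_obs, n_actions))
    for _ in range(20):
        G = 0.0
        for o, a, r in reversed(rollout(policy)):
            G = r + 0.95 * G
            returns[o, a] += G
            counts[o, a] += 1
    Q = np.divide(returns, counts, out=np.zeros_like(returns), where=counts > 0)

    # Soft policy improvement: nudge probability mass toward higher-valued
    # actions while remaining a proper stochastic policy (inside the simplex).
    step = 0.1
    policy = policy * np.exp(step * (Q - Q.mean(axis=1, keepdims=True)))
    policy /= policy.sum(axis=1, keepdims=True)
```

The multiplicative update keeps every action's probability strictly positive, which is one simple way to search the continuous space of stochastic policies the abstract emphasizes; the paper's own improvement method and convergence guarantee are more specific than this sketch.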