Dynamic deep-reinforcement-learning algorithm in Partially Observed Markov Decision Processes

S Omi, HS Shin, N Cho, A Tsourdos - arXiv preprint arXiv:2307.15931, 2023 - arxiv.org
Reinforcement learning has improved greatly in recent studies, and interest in real-world deployment has grown in recent years. In many cases, non-static disturbances make it challenging for the agent to maintain its performance. Such disturbances turn the environment into a Partially Observable Markov Decision Process (POMDP). In common practice, a POMDP is handled by introducing an additional estimator, or a Recurrent Neural Network is employed within the reinforcement learning framework. Both approaches require processing sequential information along the trajectory. However, only a few studies have investigated which information to include and which network structure should handle it. This study demonstrates the benefit of including action sequences when solving POMDPs. Several structures and approaches are proposed to extend one of the latest deep reinforcement learning algorithms with LSTM networks. The developed algorithms show enhanced robustness of controller performance against several types of external disturbances added to the observation.
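The core idea in the abstract — feeding the action sequence alongside observations into a recurrent network so its hidden state can summarize the trajectory under partial observability — can be sketched minimally. The following is an illustrative hand-rolled LSTM cell in NumPy (random weights, toy dimensions; all names and sizes here are assumptions, not the paper's actual architecture): each step's input concatenates the current observation with the previous action.

```python
import numpy as np

rng = np.random.default_rng(0)

OBS_DIM, ACT_DIM, HID = 4, 2, 8
IN_DIM = OBS_DIM + ACT_DIM  # observation concatenated with previous action

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# One weight matrix per gate, acting on [h, x] (hypothetical random init,
# standing in for trained parameters).
W = {g: rng.normal(scale=0.1, size=(HID, HID + IN_DIM)) for g in "ifog"}
b = {g: np.zeros(HID) for g in "ifog"}

def lstm_step(h, c, x):
    """Single LSTM cell update on input x with recurrent state (h, c)."""
    z = np.concatenate([h, x])
    i = sigmoid(W["i"] @ z + b["i"])   # input gate
    f = sigmoid(W["f"] @ z + b["f"])   # forget gate
    o = sigmoid(W["o"] @ z + b["o"])   # output gate
    g = np.tanh(W["g"] @ z + b["g"])   # candidate cell state
    c = f * c + i * g
    h = o * np.tanh(c)
    return h, c

# Roll out a few steps: the hidden state accumulates the joint
# observation-action history, which is what lets a recurrent policy
# cope with a POMDP.
h, c = np.zeros(HID), np.zeros(HID)
prev_action = np.zeros(ACT_DIM)
for t in range(5):
    obs = rng.normal(size=OBS_DIM)          # stand-in for a disturbed observation
    x = np.concatenate([obs, prev_action])  # action-sequence inclusion
    h, c = lstm_step(h, c, x)
    prev_action = rng.normal(size=ACT_DIM)  # stand-in for the policy's output

print(h.shape)  # hidden state summarizing the trajectory so far
```

In a full agent this hidden state would feed the policy and value heads of the underlying deep RL algorithm; the sketch only shows how the action sequence enters the recurrent input.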