
Deep Hierarchical Reinforcement Learning Algorithm in Partially Observable Markov Decision Processes
release_k3k6sbgd35gmxmnoanqbgknery

by Le Pham Tuyen, Ngo Anh Vien, Abu Layek, TaeChoong Chung

Released as an article.

2018  

Abstract

In recent years, reinforcement learning has achieved many remarkable successes due to the growing adoption of deep learning techniques and the rapid growth in computing power. Nevertheless, it is well known that flat reinforcement learning algorithms often fail to learn well or data-efficiently in tasks with hierarchical structure, e.g. tasks consisting of multiple subtasks. Hierarchical reinforcement learning is a principled approach for tackling such challenging tasks. On the other hand, many real-world tasks offer only partial observability, in which state measurements are often imperfect or incomplete. RL problems in such settings can be formulated as partially observable Markov decision processes (POMDPs). In this paper, we study hierarchical RL in POMDPs, i.e. in tasks that have only partial observability and possess hierarchical structure. We propose a deep hierarchical reinforcement learning approach for learning in hierarchical POMDPs; the algorithm applies to both MDP and POMDP learning. We evaluate the proposed algorithm on various challenging hierarchical POMDP tasks.
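The abstract refers to the standard POMDP formulation, in which the agent receives observations rather than the true state and must maintain a belief over states. As an illustration only (not the authors' implementation), the sketch below defines a small discrete POMDP tuple (S, A, O, T, Z, R, gamma) and the Bayesian belief update such an agent would perform; all names and sizes here are hypothetical.

    import numpy as np

    # Minimal discrete POMDP sketch -- illustrative, not the paper's method.
    n_states, n_actions, n_obs = 4, 2, 3
    rng = np.random.default_rng(0)

    # T[a, s, s'] : transition probabilities P(s' | s, a)
    T = rng.dirichlet(np.ones(n_states), size=(n_actions, n_states))
    # Z[a, s', o] : observation probabilities P(o | s', a)
    Z = rng.dirichlet(np.ones(n_obs), size=(n_actions, n_states))
    # R[s, a] : expected immediate reward
    R = rng.standard_normal((n_states, n_actions))
    gamma = 0.95

    def belief_update(b, a, o):
        """Bayes filter: b'(s') is proportional to Z[a,s',o] * sum_s T[a,s,s'] * b(s)."""
        predicted = T[a].T @ b          # predict step: marginalize over current states
        b_new = Z[a, :, o] * predicted  # correct step: weight by observation likelihood
        return b_new / b_new.sum()

    # Example: start from a uniform belief, take action 0, observe o = 1.
    b0 = np.full(n_states, 1.0 / n_states)
    b1 = belief_update(b0, a=0, o=1)
    print("updated belief:", b1)

In a hierarchical variant, a high-level policy would choose among subtasks while low-level policies act on such beliefs (or on recurrent summaries of observation histories); the belief update itself is the same.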

Archived Files and Locations

application/pdf  2.2 MB
file_5brm2drif5hh5gzhlhy6akp4za
arxiv.org (repository)
web.archive.org (webarchive)
Type: article
Stage: submitted
Date: 2018-05-11
Version: v1
Language: en
arXiv: 1805.04419v1
Work Entity
access all versions, variants, and formats of this work (e.g., pre-prints)
Catalog Record
Revision: 5f873748-1160-4594-afc5-abdb38a627b0
API URL: JSON