Online Reinforcement Learning with Uncertain Episode Lengths
DOI:
https://doi.org/10.1609/aaai.v37i7.26088
Keywords:
ML: Reinforcement Learning Algorithms, ML: Online Learning & Bandits, RU: Sequential Decision Making
Abstract
Existing episodic reinforcement learning algorithms assume that the length of an episode is fixed across time and known a priori. In this paper, we consider a general framework of episodic reinforcement learning in which the length of each episode is drawn from a distribution. We first establish that this problem is equivalent to online reinforcement learning with general discounting, where the learner optimizes the expected discounted sum of rewards over an infinite horizon but the discounting function is not necessarily geometric. We show that minimizing regret under this general discounting is equivalent to minimizing regret under uncertain episode lengths. We then design a reinforcement learning algorithm that minimizes regret with general discounting while acting in the setting with uncertain episode lengths. We instantiate our general bound for different types of discounting, including geometric and polynomial discounting. We also show that we can obtain similar regret bounds even when the distribution over episode lengths is unknown, by estimating it over time. Finally, we compare our learning algorithms with existing value-iteration based episodic RL algorithms on a grid-world environment.
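The equivalence stated in the abstract can be illustrated with a standard identity (a sketch under the assumption that the episode length H is drawn independently of the trajectory; the paper's own formulation may differ in its details):

\[
\mathbb{E}_{H}\Big[\sum_{t=1}^{H} r_t\Big] \;=\; \sum_{t=1}^{\infty} \Pr(H \ge t)\, r_t ,
\]

so the expected return over an episode of random length H equals an infinite-horizon return with the general discount function d(t) = \Pr(H \ge t). In particular, if H is geometric with \Pr(H \ge t) = \gamma^{t-1}, the usual geometric discounting with factor \gamma is recovered.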
Published
2023-06-26
How to Cite
Mandal, D., Radanovic, G., Gan, J., Singla, A., & Majumdar, R. (2023). Online Reinforcement Learning with Uncertain Episode Lengths. Proceedings of the AAAI Conference on Artificial Intelligence, 37(7), 9064-9071. https://doi.org/10.1609/aaai.v37i7.26088
Issue
Section
AAAI Technical Track on Machine Learning II