Rainbow: Combining Improvements in Deep Reinforcement Learning

Authors

  • Matteo Hessel, DeepMind
  • Joseph Modayil, DeepMind
  • Hado van Hasselt, DeepMind
  • Tom Schaul, DeepMind
  • Georg Ostrovski, DeepMind
  • Will Dabney, DeepMind
  • Dan Horgan, DeepMind
  • Bilal Piot, DeepMind
  • Mohammad Azar, DeepMind
  • David Silver, DeepMind

DOI:

https://doi.org/10.1609/aaai.v32i1.11796

Keywords:

deep reinforcement learning

Abstract

The deep reinforcement learning community has made several independent improvements to the DQN algorithm. However, it is unclear which of these extensions are complementary and can be fruitfully combined. This paper examines six extensions to the DQN algorithm and empirically studies their combination. Our experiments show that the combination provides state-of-the-art performance on the Atari 2600 benchmark, both in terms of data efficiency and final performance. We also provide results from a detailed ablation study that shows the contribution of each component to overall performance.
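The full paper identifies the six extensions as double Q-learning, prioritized experience replay, dueling networks, multi-step learning, distributional RL (C51), and noisy nets. As a rough illustration of how two of these pieces interact in the combined agent, the sketch below computes a multi-step distributional target via the standard C51 categorical projection in plain NumPy; the function name, shapes, and default support bounds are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def categorical_projection(next_probs, returns, dones, gamma_n,
                           v_min=-10.0, v_max=10.0, n_atoms=51):
    """Project an n-step distributional Bellman target onto a fixed
    comb of atoms (the C51 projection that Rainbow reuses).

    next_probs: (batch, n_atoms) probabilities over atoms for the
                greedy action in the state n steps ahead.
    returns:    (batch,) discounted n-step returns.
    dones:      (batch,) 1.0 where the episode ended within n steps.
    gamma_n:    gamma ** n, the discount applied to the bootstrap term.
    """
    batch = next_probs.shape[0]
    delta_z = (v_max - v_min) / (n_atoms - 1)
    z = np.linspace(v_min, v_max, n_atoms)              # support atoms

    # Apply the n-step Bellman operator to every atom, clip to the support.
    tz = np.clip(returns[:, None]
                 + gamma_n * (1.0 - dones[:, None]) * z, v_min, v_max)

    # Fractional index of each shifted atom on the fixed support.
    b = np.clip((tz - v_min) / delta_z, 0.0, n_atoms - 1)
    lo = np.floor(b).astype(np.int64)
    hi = np.ceil(b).astype(np.int64)

    # Split each atom's probability between its two nearest neighbours.
    target = np.zeros((batch, n_atoms))
    for i in range(batch):
        for j in range(n_atoms):
            if lo[i, j] == hi[i, j]:                    # landed exactly on an atom
                target[i, lo[i, j]] += next_probs[i, j]
            else:
                target[i, lo[i, j]] += next_probs[i, j] * (hi[i, j] - b[i, j])
                target[i, hi[i, j]] += next_probs[i, j] * (b[i, j] - lo[i, j])
    return target
```

In the combined agent described by the paper, the loss is the cross-entropy (KL divergence) between such a projected target and the online network's predicted distribution, with per-transition losses reweighted by prioritized-replay importance weights.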

Published

2018-04-29

How to Cite

Hessel, M., Modayil, J., van Hasselt, H., Schaul, T., Ostrovski, G., Dabney, W., Horgan, D., Piot, B., Azar, M., & Silver, D. (2018). Rainbow: Combining Improvements in Deep Reinforcement Learning. Proceedings of the AAAI Conference on Artificial Intelligence, 32(1). https://doi.org/10.1609/aaai.v32i1.11796