Abstract
In deep reinforcement learning, convergence is difficult when exploration is insufficient or rewards are sparse. Moreover, on specific tasks the amount of exploration may be limited. It is therefore considered effective to learn on source tasks prepared in advance in order to promote learning on target tasks. Existing research has proposed pretraining methods that learn parameters enabling fast learning on multiple tasks. However, these methods still suffer from several problems, such as sparse rewards, sample bias, and dependence on initial parameters. In this research, we propose a pretraining method that combines an evolutionary algorithm with a policy gradient method to train a model that works well on a variety of target tasks while addressing the above problems. In this method, agents explore multiple environments with a diverse set of neural networks, and a general model is trained with the evolutionary algorithm and the policy gradient method. In the experiments, we assume multiple 3D control source tasks. After training the model on the source tasks with our method, we show how effective the model is on the 3D control target tasks.
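To make the general idea concrete, the following is a minimal sketch, not the authors' exact algorithm: it combines a population-based evolutionary update with an evolution-strategies-style gradient estimate to pretrain one shared parameter vector across several source tasks. The "environments" here are toy stand-in reward functions over policy parameters, and all names and hyperparameters are illustrative assumptions rather than values from the chapter.

```python
# Hedged sketch: evolutionary search + gradient-style update over multiple source tasks.
# Toy stand-in tasks; not the 3D control tasks or the exact method from the chapter.
import numpy as np

rng = np.random.default_rng(0)

def make_task(target):
    # Each stand-in "source task" scores a parameter vector by closeness to a target.
    return lambda theta: -np.sum((theta - target) ** 2)

source_tasks = [make_task(rng.normal(size=8)) for _ in range(3)]

def fitness(theta):
    # Multi-task objective: average return over all source tasks.
    return np.mean([task(theta) for task in source_tasks])

pop_size, n_elite, sigma, lr = 32, 8, 0.1, 0.05
theta = rng.normal(size=8)  # shared "general model" parameters

for generation in range(200):
    # Evolutionary part: sample a diverse population around the current model.
    noise = rng.normal(size=(pop_size, theta.size))
    population = theta + sigma * noise
    returns = np.array([fitness(p) for p in population])

    # Selection: recombine the elite individuals (CEM-style update).
    elite = population[np.argsort(returns)[-n_elite:]]
    theta_evo = elite.mean(axis=0)

    # Gradient part: ES-style estimate of the policy gradient from the same samples.
    advantages = (returns - returns.mean()) / (returns.std() + 1e-8)
    grad = (noise.T @ advantages) / (pop_size * sigma)

    # Combine both updates into the shared model.
    theta = theta_evo + lr * grad

print("final multi-task fitness:", fitness(theta))
```

The point of the sketch is only the structure of the loop: a diverse population explores the source tasks, selection aggregates the best individuals, and a gradient-style correction refines the shared model before it is transferred to target tasks.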
Acknowledgements
This research was supported by JSPS KAKENHI Grant Numbers 16K00419, 16K12411, 17H04705, 18H03229, 18H03340.
Copyright information
© 2020 Springer Nature Switzerland AG
About this chapter
Cite this chapter
Imai, S., Sei, Y., Tahara, Y., Orihara, R., Ohsuga, A. (2020). Multi-task Deep Reinforcement Learning with Evolutionary Algorithm and Policy Gradients Method in 3D Control Tasks. In: Lee, R. (eds) Big Data, Cloud Computing, and Data Science Engineering. BCD 2019. Studies in Computational Intelligence, vol 844. Springer, Cham. https://doi.org/10.1007/978-3-030-24405-7_2
DOI: https://doi.org/10.1007/978-3-030-24405-7_2
Publisher Name: Springer, Cham
Print ISBN: 978-3-030-24404-0
Online ISBN: 978-3-030-24405-7
eBook Packages: Intelligent Technologies and Robotics (R0)