Abstract
This paper describes a new adaptive Monte Carlo Tree Search (MCTS) algorithm that uses evolution to rapidly optimise its performance. An evolutionary algorithm is used as a source of control parameters to modify the behaviour of each iteration (i.e. each simulation or roll-out) of the MCTS algorithm; in this paper we largely restrict this to modifying the behaviour of the random default policy, though it can also be applied to modify the tree policy.
This tight integration of evolution into the MCTS algorithm means that evolutionary adaptation occurs on a much faster time-scale than has previously been achieved, and it addresses a problem that frequently arises when MCTS is applied to real-time video-game and control problems: uniform random roll-outs may be uninformative.
Results are presented on the classic Mountain Car reinforcement learning benchmark and on a simplified version of Space Invaders. The results clearly demonstrate the value of the approach, which significantly outperforms “standard” MCTS in each case. Furthermore, the adaptation is almost immediate, with no perceptible delay as the system learns: the agent frequently performs well from its very first game.
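To make the idea concrete, the sketch below gives one plausible reading of the scheme described in the abstract (it is not the authors' implementation): a simple hill-climbing evolutionary loop mutates the weights of a softmax roll-out policy, each roll-out's return is used as a noisy fitness signal, and the best weights found so far are carried from one decision to the next. The Mountain Car dynamics are the standard ones from Sutton and Barto; the feature set, mutation scale, and the use of flat Monte Carlo search in place of a full MCTS tree are all illustrative assumptions.

```python
# Illustrative sketch only: an evolutionary loop supplies a parameter vector to
# every roll-out, and the roll-out return is fed back as a noisy fitness signal,
# so the default policy adapts while the search runs. Flat Monte Carlo search
# stands in for the full MCTS tree for brevity.
import math
import random

ACTIONS = (-1, 0, 1)  # Mountain Car thrust: left, coast, right


def mc_step(pos, vel, action):
    """Standard Mountain Car dynamics (Sutton & Barto)."""
    vel = max(-0.07, min(0.07, vel + 0.001 * action - 0.0025 * math.cos(3 * pos)))
    pos = max(-1.2, min(0.6, pos + vel))
    if pos <= -1.2:
        vel = 0.0  # inelastic collision with the left wall
    return pos, vel


def features(pos, vel):
    """Illustrative state features for the parameterised roll-out policy."""
    return (1.0, pos, vel, pos * vel)


def rollout_action(weights, pos, vel):
    """Softmax choice over actions, scored by a weighted feature sum times the action."""
    scores = [a * sum(w * f for w, f in zip(weights, features(pos, vel))) for a in ACTIONS]
    m = max(scores)
    exps = [math.exp(s - m) for s in scores]
    r, acc = random.random() * sum(exps), 0.0
    for a, e in zip(ACTIONS, exps):
        acc += e
        if r <= acc:
            return a
    return ACTIONS[-1]


def rollout(weights, pos, vel, horizon=150):
    """Simulate with the parameterised default policy; higher return = fewer steps to goal."""
    for t in range(horizon):
        if pos >= 0.5:
            return -t
        pos, vel = mc_step(pos, vel, rollout_action(weights, pos, vel))
    return -horizon


class FastEvoMCTSAgent:
    """Flat Monte Carlo stand-in for MCTS whose roll-out policy parameters are
    mutated and selected on a per-roll-out basis, in the spirit of the paper."""

    def __init__(self, sigma=0.2, n_rollouts=150):
        self.w = [0.0, 0.0, 0.0, 0.0]  # evolved roll-out weights, kept across moves
        self.sigma = sigma
        self.n_rollouts = n_rollouts

    def act(self, pos, vel):
        value = {a: 0.0 for a in ACTIONS}
        visits = {a: 0 for a in ACTIONS}
        best_w, best_fit = self.w, -float("inf")
        for _ in range(self.n_rollouts):
            cand = [w + random.gauss(0, self.sigma) for w in best_w]  # mutate current best
            a0 = random.choice(ACTIONS)                               # root action under evaluation
            fit = rollout(cand, *mc_step(pos, vel, a0))
            value[a0] += fit
            visits[a0] += 1
            if fit > best_fit:                                        # keep improvements
                best_w, best_fit = cand, fit
        self.w = best_w  # carry the best parameters into the next decision
        return max(ACTIONS, key=lambda a: value[a] / visits[a] if visits[a] else -float("inf"))


if __name__ == "__main__":
    agent, pos, vel, steps = FastEvoMCTSAgent(), -0.5, 0.0, 0
    while pos < 0.5 and steps < 500:
        pos, vel = mc_step(pos, vel, agent.act(pos, vel))
        steps += 1
    print("goal reached" if pos >= 0.5 else "timed out", "after", steps, "steps")
```

Note that with all weights initialised to zero the roll-out policy starts out uniformly random, so the sketch initially behaves like plain flat Monte Carlo search and only departs from it as fitter weight vectors are found; this mirrors the abstract's point that adaptation begins immediately rather than after a separate training phase.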
Cite this paper
Lucas, S.M., Samothrakis, S., Pérez, D. (2014). Fast Evolutionary Adaptation for Monte Carlo Tree Search. In: Esparcia-Alcázar, A., Mora, A. (eds.) Applications of Evolutionary Computation, EvoApplications 2014. Lecture Notes in Computer Science, vol. 8602. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-45523-4_29