
Natural Gradient Interpretation of Rank-One Update in CMA-ES

  • Conference paper
Parallel Problem Solving from Nature – PPSN XVIII (PPSN 2024)

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 15149)


Abstract

The covariance matrix adaptation evolution strategy (CMA-ES) is a stochastic search algorithm that uses a multivariate normal distribution for continuous black-box optimization. In addition to strong empirical results, part of the CMA-ES can be described as a stochastic natural gradient method and derived from the information geometric optimization (IGO) framework. However, the theoretical understanding of some components of the CMA-ES, such as the rank-one update, remains limited. The rank-one update adapts the covariance matrix to increase the likelihood of generating solutions in the direction of the evolution path, but unlike the rank-\(\mu \) update, this idea has been difficult to formulate and interpret as a natural gradient method. In this work, we provide a new interpretation of the rank-one update in the CMA-ES from the perspective of the natural gradient with a prior distribution. First, we propose maximum a posteriori IGO (MAP-IGO), an extension of the IGO framework that incorporates a prior distribution. We then derive the rank-one update from the MAP-IGO by setting the prior distribution based on the idea that a promising mean vector should lie in the direction of the evolution path. Moreover, the newly derived rank-one update is extensible: an additional term appears in the update for the mean vector. We empirically investigate the properties of this additional term on various benchmark functions.
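The rank-one update discussed in the abstract can be sketched in its standard CMA-ES form (this is a minimal illustration, not the paper's MAP-IGO derivation; the learning rates `c_c`, `c_1` and the effective selection mass `mu_eff` are placeholder values):

```python
import numpy as np

def rank_one_update(C, p_c, m_old, m_new, sigma, c_c=0.1, c_1=0.05, mu_eff=5.0):
    """One step of the evolution path cumulation and the rank-one
    covariance update in the standard CMA-ES formulation."""
    # Cumulate the normalized mean shift into the evolution path p_c.
    p_c = (1 - c_c) * p_c + np.sqrt(c_c * (2 - c_c) * mu_eff) * (m_new - m_old) / sigma
    # Rank-one update: increase the likelihood of sampling along p_c.
    C = (1 - c_1) * C + c_1 * np.outer(p_c, p_c)
    return C, p_c
```

After a mean shift along the first coordinate, the covariance gains variance in that direction while staying symmetric, which is exactly the "increase the likelihood of generating a solution in the direction of the evolution path" behavior the abstract describes.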


Notes

  1. The CMA-ES sometimes employs the indicator function \(h_\sigma \) to prevent the evolution path \(\boldsymbol{p}^{(t)}_c\) from lengthening too rapidly.

  2. Note that while the assumption \(\int _0^1 w(q) \textrm{d}q \ne 0\) usually holds in the CMA-ES, some instances of IGO, such as the compact genetic algorithm [17], do not satisfy it. We do not pursue this limitation in depth, as our focus is the CMA-ES.
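The indicator \(h_\sigma \) mentioned in the first note can be sketched as the standard CMA-ES Heaviside gate (a minimal illustration; the threshold constants follow the commonly used tutorial formulation and are assumptions, not taken from this paper):

```python
import numpy as np

def h_sigma(p_sigma, c_sigma, t, dim):
    """Heaviside-style indicator: returns 1.0 while the conjugate
    evolution path ||p_sigma|| stays near its expected length under
    random selection, and 0.0 once it grows too long.  Multiplying the
    p_c cumulation by this gate keeps p_c from lengthening rapidly."""
    # Approximate E[||N(0, I)||] in `dim` dimensions.
    expected_norm = np.sqrt(dim) * (1 - 1 / (4 * dim) + 1 / (21 * dim**2))
    threshold = (1.4 + 2 / (dim + 1)) * expected_norm
    # Correct for the cumulation factor still ramping up at iteration t.
    norm = np.linalg.norm(p_sigma) / np.sqrt(1 - (1 - c_sigma) ** (2 * (t + 1)))
    return 1.0 if norm < threshold else 0.0
```

A short path yields 1.0 (cumulation proceeds normally), while an abnormally long path yields 0.0 and stalls the update.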

References

  1. Abdolmaleki, A., Price, B., Lau, N., Reis, L.P., Neumann, G.: Deriving and improving CMA-ES with information geometric trust regions. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2017, pp. 657–664. Association for Computing Machinery, New York (2017). https://doi.org/10.1145/3071178.3071252

  2. Abdolmaleki, A., Springenberg, J.T., Tassa, Y., Munos, R., Heess, N., Riedmiller, M.: Maximum a posteriori policy optimisation. In: International Conference on Learning Representations (2018)

  3. Akimoto, Y., Nagata, Y., Ono, I., Kobayashi, S.: Bidirectional relation between CMA evolution strategies and natural evolution strategies. In: Parallel Problem Solving from Nature, PPSN XI, pp. 154–163 (2010)

  4. Amari, S.I., Nagaoka, H.: Methods of Information Geometry. Translations of Mathematical Monographs, vol. 191. American Mathematical Society (2000)

  5. Auger, A., Hansen, N.: A restart CMA evolution strategy with increasing population size. In: 2005 IEEE Congress on Evolutionary Computation, vol. 2, pp. 1769–1776. IEEE (2005)

  6. Auger, A., Hansen, N.: A restart CMA evolution strategy with increasing population size. In: 2005 IEEE Congress on Evolutionary Computation, vol. 2, pp. 1769–1776. IEEE (2005). https://doi.org/10.1109/CEC.2005.1554902

  7. Baluja, S.: Population-based incremental learning: a method for integrating genetic search based function optimization and competitive learning (1994). https://api.semanticscholar.org/CorpusID:14799233

  8. Bishop, C.M., Nasrabadi, N.M.: Pattern Recognition and Machine Learning, vol. 4. Springer, Heidelberg (2006)

  9. Hamano, R., Saito, S., Nomura, M., Shirakawa, S.: CMA-ES with margin: lower-bounding marginal probability for mixed-integer black-box optimization. In: Proceedings of the Genetic and Evolutionary Computation Conference, GECCO 2022, pp. 639–647. Association for Computing Machinery, New York (2022). https://doi.org/10.1145/3512290.3528827

  10. Hansen, N., Ostermeier, A.: Adapting arbitrary normal mutation distributions in evolution strategies: the covariance matrix adaptation. In: Proceedings of IEEE International Conference on Evolutionary Computation, pp. 312–317 (1996). https://doi.org/10.1109/ICEC.1996.542381

  11. Hansen, N.: Benchmarking a BI-population CMA-ES on the BBOB-2009 function testbed. In: Proceedings of the 11th Annual Conference Companion on Genetic and Evolutionary Computation Conference: Late Breaking Papers, GECCO 2009, pp. 2389–2396. Association for Computing Machinery, New York (2009). https://doi.org/10.1145/1570256.1570333

  12. Hansen, N.: A CMA-ES for mixed-integer nonlinear optimization (2011)

  13. Hansen, N.: The CMA evolution strategy: a tutorial. arXiv preprint arXiv:1604.00772 (2016)

  14. Hansen, N., Auger, A.: Principled design of continuous stochastic search: from theory to practice. In: Theory and Principled Methods for the Design of Metaheuristics, pp. 145–180. Springer (2014). https://doi.org/10.1007/978-3-642-33206-7_8

  15. Hansen, N., Kern, S.: Evaluating the CMA evolution strategy on multimodal test functions. In: Yao, X., et al. (eds.) PPSN 2004. LNCS, vol. 3242, pp. 282–291. Springer, Heidelberg (2004). https://doi.org/10.1007/978-3-540-30217-9_29

  16. Hansen, N., Müller, S.D., Koumoutsakos, P.: Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol. Comput. 11(1), 1–18 (2003). https://doi.org/10.1162/106365603321828970

  17. Harik, G., Lobo, F., Goldberg, D.: The compact genetic algorithm. IEEE Trans. Evol. Comput. 3(4), 287–297 (1999). https://doi.org/10.1109/4235.797971

  18. Li, Z., Zhang, Q.: What does the evolution path learn in CMA-ES? In: Parallel Problem Solving from Nature – PPSN XIV, pp. 751–760 (2016)

  19. Nomura, M., Shibata, M.: cmaes: a simple yet practical Python library for CMA-ES. arXiv preprint arXiv:2402.01373 (2024)

  20. Ollivier, Y., Arnold, L., Auger, A., Hansen, N.: Information-geometric optimization algorithms: a unifying picture via invariance principles. J. Mach. Learn. Res. 18(1), 564–628 (2017)

  21. Rios, L.M., Sahinidis, N.V.: Derivative-free optimization: a review of algorithms and comparison of software implementations. J. Global Optim. 56, 1247–1293 (2013)

  22. Shirakawa, S., Akimoto, Y., Ouchi, K., Ohara, K.: Sample reuse in the covariance matrix adaptation evolution strategy based on importance sampling. In: Proceedings of the 2015 Annual Conference on Genetic and Evolutionary Computation, pp. 305–312 (2015)

  23. Song, H.F., et al.: V-MPO: on-policy maximum a posteriori policy optimization for discrete and continuous control. In: International Conference on Learning Representations (2020). https://openreview.net/forum?id=SylOlp4FvH

Author information

Corresponding author

Correspondence to Ryoki Hamano.


Copyright information

© 2024 The Author(s), under exclusive license to Springer Nature Switzerland AG

About this paper

Cite this paper

Hamano, R., Shirakawa, S., Nomura, M. (2024). Natural Gradient Interpretation of Rank-One Update in CMA-ES. In: Affenzeller, M., et al. Parallel Problem Solving from Nature – PPSN XVIII. PPSN 2024. Lecture Notes in Computer Science, vol 15149. Springer, Cham. https://doi.org/10.1007/978-3-031-70068-2_16

Download citation

  • DOI: https://doi.org/10.1007/978-3-031-70068-2_16

  • Publisher Name: Springer, Cham

  • Print ISBN: 978-3-031-70067-5

  • Online ISBN: 978-3-031-70068-2

  • eBook Packages: Computer Science, Computer Science (R0)
