DOI: 10.1145/3512290.3528760
Learning rate adaptation by line search in evolution strategies with recombination

Published: 08 July 2022

Abstract

In this paper, we investigate the effect of a learning rate for the mean in Evolution Strategies with recombination. We study the effect of a half-line search performed after the mean-shift direction is established, so that the learning-rate value is conditioned on the direction. We prove convergence and study convergence rates in different dimensions and for different population sizes on the sphere function with the step-size proportional to the distance to the optimum.
We empirically find that a perfect half-line search increases the maximal convergence rate on the sphere function by up to about 70%, assuming the line search imposes no additional costs. The speedup becomes less pronounced with increasing dimension. The line search reduces, but does not eliminate, the dependency of the convergence rate on the step-size. The optimal step-size assumes considerably smaller values with line search, which is consistent with previous results for different learning-rate settings. The step-size difference is more pronounced in larger dimensions and with larger population sizes, thereby diminishing an important advantage of a large population.
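The setting studied in the abstract can be sketched as a small toy implementation: a (μ/μ, λ)-ES with intermediate recombination on the sphere function, step-size kept proportional to the distance to the optimum, and a half-line search over the learning rate applied to the mean shift. This is an illustrative sketch only, not the paper's algorithm; the names (`es_step`, `sigma_factor`), the grid of learning rates, and all parameter values are assumptions made for the example.

```python
import numpy as np

def sphere(x):
    """Objective: f(x) = ||x||^2, minimized at the origin."""
    return float(np.dot(x, x))

def es_step(mean, sigma_factor, lam, mu, rng, line_search=True):
    """One iteration of a (mu/mu, lam)-ES with scale-invariant step-size
    (sigma proportional to the distance to the optimum) and an optional
    half-line search over the learning rate applied to the mean shift.
    Illustrative toy code, not the algorithm analyzed in the paper."""
    n = len(mean)
    sigma = sigma_factor * np.linalg.norm(mean)  # step-size ~ distance to optimum
    samples = mean + sigma * rng.standard_normal((lam, n))
    order = np.argsort([sphere(x) for x in samples])
    # intermediate (unweighted) recombination of the mu best samples
    shift = samples[order[:mu]].mean(axis=0) - mean
    if line_search:
        # crude half-line search: evaluate a grid of learning rates kappa >= 0;
        # kappa = 0 is included, so the value at the mean never deteriorates
        kappas = np.linspace(0.0, 4.0, 41)
        kappa = kappas[int(np.argmin([sphere(mean + k * shift) for k in kappas]))]
    else:
        kappa = 1.0
    return mean + kappa * shift

rng = np.random.default_rng(1)
mean = np.ones(10)  # start at f(mean) = 10.0
for _ in range(100):
    mean = es_step(mean, sigma_factor=0.3, lam=10, mu=5, rng=rng)
print(sphere(mean))  # far below the initial value 10.0
```

Because the learning rate is chosen after the shift direction is known (and zero is an allowed choice), the best function value along the search direction is taken at every iteration, which is the sense in which the learning rate is conditioned on the direction.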


Cited By

  • (2023) CMA-ES with Learning Rate Adaptation: Can CMA-ES with Default Population Size Solve Multimodal and Noisy Problems? Proceedings of the Genetic and Evolutionary Computation Conference, 10.1145/3583131.3590358, pages 839-847. Online publication date: 15-Jul-2023.


Published In

GECCO '22: Proceedings of the Genetic and Evolutionary Computation Conference
July 2022, 1472 pages
ISBN: 9781450392372
DOI: 10.1145/3512290

Publisher

Association for Computing Machinery, New York, NY, United States


Author Tags

  1. convergence rate
  2. evolution strategy
  3. line search

Qualifiers

  • Research-article

Conference

GECCO '22

Acceptance Rates

Overall Acceptance Rate 1,669 of 4,410 submissions, 38%


Article Metrics

  • Downloads (Last 12 months)13
  • Downloads (Last 6 weeks)0
Reflects downloads up to 12 Jan 2025

