DOI: 10.1145/3638530.3654322
Searching for Benchmark Problem Instances from Data-Driven Optimisation

Published: 01 August 2024

Abstract

Given the large number of existing metaheuristic optimisation algorithms, there has been an increasing focus on improving benchmarking practices, both to gain a better understanding of empirical performance and to match algorithms to problems. One important aspect of this is greater diversity in test problem sets, so that algorithms are evaluated on a wider range of problem instances, including problems that are more representative of the real world.
In this paper, we explore benchmark problems that are defined by data. Since perturbing the underlying dataset changes the resulting problem instance, we can perform a meta-search over this space of instances. Specifically, we consider sum-of-squares clustering problems and search for instances that challenge CMA-ES. This approach can be effective even when the dataset is very simple. In addition, we show that simple clustering instances have interesting properties, including neutrality, which is missing from many commonly used benchmarks. We argue that data-driven optimisation can provide useful insights into problem structure and algorithm behaviour.
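The abstract describes the approach only at a high level. The following is a minimal sketch of the idea in Python, not the authors' implementation: it treats the k cluster centres as one continuous search vector, scores a dataset by how far CMA-ES lands from the best value found by cheap random sampling on the same landscape, and hill-climbs over datasets by Gaussian perturbation. The use of the pycma package (cma on PyPI), the difficulty measure, and all names and parameter values below are illustrative assumptions.

# Hypothetical sketch of data-driven instance search, assuming the
# "cma" PyPI package; the difficulty measure and all parameters are
# illustrative, not taken from the paper.
import numpy as np
import cma

def sum_of_squares(centres_flat, data, k):
    # Sum-of-squares clustering objective: each data point contributes
    # its squared distance to the nearest of the k candidate centres.
    centres = centres_flat.reshape(k, data.shape[1])
    dists = ((data[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
    return dists.min(axis=1).sum()

def cmaes_best(data, k, sigma0=0.5, seed=1):
    # One CMA-ES run on the clustering landscape induced by `data`.
    x0 = np.zeros(k * data.shape[1])
    es = cma.CMAEvolutionStrategy(
        x0, sigma0, {'seed': seed, 'verbose': -9, 'maxfevals': 500})
    es.optimize(lambda x: sum_of_squares(np.asarray(x), data, k))
    return es.result.fbest

def difficulty(data, k, rng, n_samples=20):
    # Illustrative (assumed) difficulty score: the gap between what
    # CMA-ES reaches and the best of a few cheap random samples.
    samples = rng.uniform(data.min(), data.max(),
                          size=(n_samples, k * data.shape[1]))
    ref = min(sum_of_squares(s, data, k) for s in samples)
    return cmaes_best(data, k) - ref

# Meta-search: hill-climb over datasets, keeping perturbations that
# make the induced clustering instance harder for CMA-ES.
rng = np.random.default_rng(0)
data = rng.uniform(-1, 1, size=(10, 2))   # a very simple 2-D dataset
k = 3
score = difficulty(data, k, rng)
for step in range(20):
    candidate = data + rng.normal(scale=0.05, size=data.shape)
    cand_score = difficulty(candidate, k, rng)
    if cand_score > score:                # harder instance: keep it
        data, score = candidate, cand_score
print("difficulty of evolved instance:", score)

Maximising this score favours datasets whose induced landscapes CMA-ES handles relatively poorly; the paper's actual meta-search procedure and difficulty criterion may well differ.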


Published In

GECCO '24 Companion: Proceedings of the Genetic and Evolutionary Computation Conference Companion
July 2024
2187 pages
ISBN: 9798400704956
DOI: 10.1145/3638530
Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for third-party components of this work must be honored. For all other uses, contact the owner/author(s).

Publisher

Association for Computing Machinery

New York, NY, United States

Author Tags

  1. algorithm benchmarking
  2. continuous blackbox optimisation
  3. clustering
  4. fitness landscape analysis

Qualifiers

  • Poster

Conference

GECCO '24 Companion

Acceptance Rates

Overall Acceptance Rate 1,669 of 4,410 submissions, 38%
