DOI: 10.1145/2908961.2931704

Benchmarking the Pure Random Search on the Bi-objective BBOB-2016 Testbed

Published: 20 July 2016

Abstract

The Comparing Continuous Optimizers platform COCO has become a standard for effortlessly benchmarking numerical (single-objective) optimization algorithms. In 2016, COCO was extended towards multi-objective optimization by providing a first bi-objective test suite. To provide a baseline, we benchmark a pure random search on this bi-objective bbob-biobj test suite of the COCO platform. For each combination of function, dimension n, and instance of the test suite, 10^6 · n candidate solutions are sampled uniformly within the sampling box [-5,5]^n.
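Below is a minimal sketch of the pure random search baseline described in the abstract: candidate solutions are drawn uniformly from the box [-5,5]^n, both objectives are evaluated, and the non-dominated subset is extracted afterwards. It is plain Python/NumPy against a toy bi-objective function; the toy problem, the reduced budget, and the helper names (pure_random_search, nondominated) are illustrative assumptions and not the authors' actual COCO experiment code, which runs on the bbob-biobj suite with a budget of 10^6 · n evaluations.

    import numpy as np

    def pure_random_search(f, dimension, budget, lower=-5.0, upper=5.0, seed=None):
        """Sample `budget` points uniformly in [lower, upper]^dimension and
        evaluate the bi-objective function `f` on each of them."""
        rng = np.random.default_rng(seed)
        X = rng.uniform(lower, upper, size=(budget, dimension))  # uniform sampling box
        F = np.array([f(x) for x in X])                          # objective vectors, shape (budget, 2)
        return X, F

    def nondominated(F):
        """Boolean mask of the non-dominated rows of F (both objectives minimized)."""
        mask = np.ones(len(F), dtype=bool)
        for i, fi in enumerate(F):
            if mask[i]:
                # mark points that fi dominates (weakly everywhere, strictly somewhere)
                dominated = np.all(F >= fi, axis=1) & np.any(F > fi, axis=1)
                mask[dominated] = False
        return mask

    if __name__ == "__main__":
        n = 5                 # dimension; the paper's budget would be 10**6 * n evaluations
        budget = 10_000       # reduced budget so the sketch runs in seconds
        # Toy bi-objective problem (two shifted sphere functions), standing in for a
        # bbob-biobj problem; NOT one of the actual suite functions.
        f = lambda x: (np.sum((x - 1.0) ** 2), np.sum((x + 1.0) ** 2))
        X, F = pure_random_search(f, n, budget, seed=42)
        front = F[nondominated(F)]
        print(len(front), "non-dominated points out of", budget, "uniform samples")

In a real benchmarking run, the same sampling loop would be wrapped around each problem of the bbob-biobj suite via COCO's Python interface, and the evaluated objective vectors would be logged by the platform rather than post-processed by hand.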


Cited By

  • (2017) Quantitative Performance Assessment of Multiobjective Optimizers. In: 9th International Conference on Evolutionary Multi-Criterion Optimization, Volume 10173, pages 103-119. DOI: 10.1007/978-3-319-54157-0_8. Online publication date: 19-Mar-2017.
  • (2016) The Impact of Variation Operators on the Performance of SMS-EMOA on the Bi-objective BBOB-2016 Test Suite. In: Proceedings of the 2016 Genetic and Evolutionary Computation Conference Companion, pages 1225-1232. DOI: 10.1145/2908961.2931705. Online publication date: 20-Jul-2016.


Published In

GECCO '16 Companion: Proceedings of the 2016 Genetic and Evolutionary Computation Conference Companion
July 2016
1510 pages
ISBN:9781450343237
DOI:10.1145/2908961


Publisher

Association for Computing Machinery

New York, NY, United States



Author Tags

  1. benchmarking
  2. bi-objective optimization
  3. black-box optimization

Qualifiers

  • Research-article

Funding Sources

  • ANR

Conference

GECCO '16: Genetic and Evolutionary Computation Conference
July 20-24, 2016
Denver, Colorado, USA

Acceptance Rates

GECCO '16 Companion paper acceptance rate: 137 of 381 submissions (36%)
Overall acceptance rate: 1,669 of 4,410 submissions (38%)


