
Parallel Sequential Random Embedding Bayesian Optimization

  • Original Research
SN Computer Science

Abstract

Bayesian optimization offers efficient parameter search, but its computation cost grows rapidly with parameter dimensionality because the search space expands and more trials are needed. One existing solution is an embedding method that restricts the search to a low-dimensional subspace; however, it works well only when the number of embedding dimensions closely matches the number of effective dimensions, i.e., the dimensions that actually affect the function value. In practical situations, the number of effective dimensions is unknown, and choosing a low-dimensional subspace to reduce computation cost often degrades the search. This study proposes a Bayesian optimization method based on random embedding that remains efficient even when the embedded dimension is lower than the number of effective dimensions. By conducting parallel searches in an initially low-dimensional space and performing multiple cycles in which the search space is incrementally improved, the optimal solution can be found efficiently. Experiments on benchmark problems confirm the effectiveness of the proposed method.
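The procedure described above can be sketched in code. The following is a minimal illustration, not the authors' implementation: it re-centres a fresh random embedding on the best point found so far in each cycle (sequential random embedding), runs several such searches per cycle (the parallel component), and substitutes a plain random sampler for the Bayesian optimizer in the low-dimensional inner loop to stay self-contained. All names and parameters here are hypothetical.

```python
import numpy as np

def parallel_sequential_random_embedding(f, D, d=2, n_chains=4, n_cycles=5,
                                         n_trials=50, scale=1.0, rng=None):
    """Minimize f over R^D by searching random d-dimensional embeddings.

    Each cycle draws a fresh random matrix A (D x d) for every chain and
    searches points of the form x = x_best + A @ y over low-dimensional y.
    The best x found seeds the next cycle, so the search space is
    incrementally improved even when d is below the effective dimension.
    The inner search is random sampling here, standing in for a Bayesian
    optimizer over y.
    """
    rng = np.random.default_rng(rng)
    x_best = np.zeros(D)
    f_best = f(x_best)
    for _ in range(n_cycles):
        for _ in range(n_chains):  # the "parallel" searches (run serially here)
            A = rng.standard_normal((D, d)) / np.sqrt(d)  # random embedding
            for _ in range(n_trials):
                y = rng.uniform(-scale, scale, size=d)
                x = x_best + A @ y  # map the low-dimensional point into R^D
                fx = f(x)
                if fx < f_best:
                    f_best, x_best = fx, x.copy()
    return x_best, f_best

# Example: a 100-dimensional problem whose value depends on only 3 dimensions.
f = lambda x: float(np.sum((x[:3] - 1.0) ** 2))
x_opt, f_opt = parallel_sequential_random_embedding(f, D=100, d=2, rng=0)
```

Even with d = 2 below the 3 effective dimensions, the re-centring across cycles lets the search keep improving, which is the situation the proposed method targets.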


[Figures 1–14 appear in the full article.]


Notes

  1. An earlier version of this work was presented at the Asian Conference on Pattern Recognition (ACPR) [7].

  2. We also checked the processing time of each method. PSRE and SRE have similar times, while RE is slower and BO is about ten times slower than PSRE.

References

  1. Snoek J, Larochelle H, Adams RP. Practical Bayesian optimization of machine learning algorithms. In: Advances in Neural Information Processing Systems (NIPS), 2012. Curran Associates Inc. pp. 2951–2959.

  2. Kiyotake H, Kohjima M, Matsubayashi T, Toda H. Multi Agent Flow Estimation Based on Bayesian Optimization with Time Delay and Low Dimensional Parameter Conversion. In: Principles and Practice of Multi-Agent Systems (PRIMA), 2018. pp. 53–69.

  3. Brochu E, Cora VM, de Freitas N. A tutorial on Bayesian optimization of expensive cost functions, with application to active user modeling and hierarchical reinforcement learning 2010. http://arXiv.org/abs/1012.2599

  4. Jones D, Schonlau M, Welch W. Efficient global optimization of expensive black-box functions. J Glob Optimiz. 1998;13(4):455–92.

  5. Rasmussen CE, Williams CK. Gaussian process for machine learning. Cambridge: MIT Press; 2006.

  6. Wang Z, Hutter F, Zoghi M, Matheson D, de Freitas N. Bayesian optimization in a billion dimensions via random embeddings. JAIR. 2016;55:361–87.

  7. Yokoyama N, Kohjima M, Matsubayashi T, Toda H. Efficient Bayesian optimization based on parallel sequential random embeddings. In: Asian Conference on Pattern Recognition (ACPR), 2019. pp. 453–466.

  8. Kandasamy K, Krishnamurthy A, Schneider J, Poczos B. Parallelised Bayesian optimisation via Thompson sampling. In: Proceedings of the Twenty-First International Conference on Artificial Intelligence and Statistics (AISTATS), vol. 84, 2018. pp. 133–142. PMLR.

  9. Kandasamy K, Schneider J, Poczos B. High dimensional Bayesian optimisation and bandits via additive models. In: Proceedings of the 32nd International Conference on Machine Learning (ICML), vol. 37, 2015. pp. 295–304.

  10. Rolland P, Scarlett J, Bogunovic I, Cevher V. High dimensional Bayesian optimization via additive models with overlapping groups. In: International Conference on Artificial Intelligence and Statistics (AISTATS), 2018. pp. 298–307.

  11. Qian H, Hu Y, Yu Y. Derivative-free optimization of high-dimensional non-convex functions by sequential random embeddings. In: Proceedings of the Twenty-Fifth International Joint Conference on Artificial Intelligence (IJCAI), 2016. AAAI Press, New York. pp. 1946–1952.

  12. Bull AD. Convergence rates of efficient global optimization algorithms. J Mach Learn Res. 2011;12:2879–904.

  13. Vazquez E, Bect J. Convergence properties of the expected improvement algorithm with fixed mean and covariance functions. J Stat Plan Inference. 2010;140(11):3088–95.

  14. Mockus J. Bayesian approach to global optimization: theory and applications, vol. 37. Berlin: Springer Science & Business Media; 2012.

  15. Davis L. Handbook of genetic algorithms. New York: Van Nostrand Reinhold; 1991.

  16. Hansen N, Muller SD, Koumoutsakos P. Reducing the time complexity of the derandomized evolution strategy with covariance matrix adaptation (CMA-ES). Evol Comput. 2003;11(1):1–18.

  17. Hansen N, Auger A, Ros R, Finck S, Posik P. Comparing results of 31 algorithms from the black-box optimization benchmarking BBOB-2009. In: Proceedings of the 12th Annual Conference Companion on Genetic and Evolutionary Computation (GECCO), 2010. ACM, New York. pp. 1689–1696.

Author information

Corresponding author

Correspondence to Noriko Yokoyama.

Ethics declarations

Conflict of interest

The authors declare that there is no conflict of interest.

Additional information

Publisher's Note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

This article is part of the topical collection “Machine Learning in Pattern Analysis” guest edited by Reinhard Klette, Brendan McCane, Gabriella Sanniti di Baja, Palaiahnakote Shivakumara and Liang Wang.


About this article

Cite this article

Yokoyama, N., Kohjima, M., Matsubayashi, T. et al. Parallel Sequential Random Embedding Bayesian Optimization. SN COMPUT. SCI. 2, 3 (2021). https://doi.org/10.1007/s42979-020-00385-8

