
On the impact of prior distributions on efficiency of sparse Gaussian process regression

  • Original Article
  • Engineering with Computers

Abstract

Gaussian process regression (GPR) is a kernel-based learning model that becomes computationally intractable on irregular domains and large datasets because the kernel matrix is dense. In this paper, we propose a novel method that produces a sparse kernel matrix using compactly supported radial kernels (CSRKs) to learn GPR efficiently from large datasets. CSRKs avoid an ill-conditioned, dense kernel matrix during GPR training and prediction, reducing both computational cost and memory requirements. In practice, interest in CSRKs waned once it became evident that compactly supported kernels obey a trade-off principle: accuracy conflicts with sparsity. Consequently, when kernels with compact support are tuned for high accuracy during GPR training, the advantage of a sparse covariance matrix almost disappears, as the numerical results confirm. This trade-off has led authors to search for an "optimal" value of the scale parameter. Accordingly, by placing suitable priors on the kernel hyperparameters and estimating them with a modified version of maximum likelihood estimation (MLE), the GPR model derived from CSRKs attains maximal accuracy while retaining a sparse covariance matrix. In GPR training, the modified MLE is proportional to the product of the likelihood and a suitable prior distribution on the hyperparameters, which yields an efficient learning method. We also comprehensively investigate, through several empirical studies, how misspecified prior distributions affect the predictive performance of sparse GPR models. The proposed approach is applied in a comparative study to noisy test functions on irregular 2D domains. Finally, we examine the effect of the prior on the predictive performance of GPR models on a real dataset. The results suggest that the proposed method yields sparser and better-conditioned kernel matrices in all cases.
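The abstract's recipe can be made concrete with a short sketch. The following is a minimal illustration, not the paper's implementation: it uses the Wendland C2 kernel as one example of a CSRK, builds the resulting sparse covariance matrix, and evaluates a MAP-style objective in which the negative log marginal likelihood is penalized by a log prior on the hyperparameters (the "modified MLE"). The kernel choice, the Gaussian prior on the log support radius, and all parameter values below are illustrative assumptions.

```python
# A minimal sketch (not the paper's code): sparse GPR covariance from a
# compactly supported radial kernel (CSRK), plus a prior-modified MLE objective.
import numpy as np
from scipy.spatial.distance import cdist
from scipy.sparse import csr_matrix

def wendland_c2(r, rho):
    """Wendland C2 kernel, positive definite for d <= 3:
    phi(r) = (1 - r/rho)^4 * (4 r/rho + 1) for r < rho, else 0."""
    t = r / rho
    return np.where(t < 1.0, (1.0 - t) ** 4 * (4.0 * t + 1.0), 0.0)

def sparse_cov(X, rho, sigma_f=1.0):
    """Covariance matrix; entries vanish wherever ||x - x'|| >= rho,
    so a small support radius rho gives a genuinely sparse matrix."""
    K = sigma_f ** 2 * wendland_c2(cdist(X, X), rho)
    return csr_matrix(K)

def neg_log_posterior(theta, X, y, log_prior):
    """'Modified MLE' objective: negative log marginal likelihood minus the
    log prior, i.e. minus log(likelihood * prior). theta holds the log
    hyperparameters (log rho, log sigma_f, log sigma_n)."""
    rho, sigma_f, sigma_n = np.exp(theta)
    n = y.size
    K = sigma_f ** 2 * wendland_c2(cdist(X, X), rho) + sigma_n ** 2 * np.eye(n)
    L = np.linalg.cholesky(K)  # K is positive definite: PD kernel + noise term
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    nll = 0.5 * y @ alpha + np.log(np.diag(L)).sum() + 0.5 * n * np.log(2 * np.pi)
    return nll - log_prior(theta)

# Illustrative use: a prior on log rho that favors small support radii keeps
# the covariance sparse, whereas an unpenalized MLE is free to grow rho.
rng = np.random.default_rng(0)
X = rng.uniform(size=(300, 2))
y = np.sin(6 * X[:, 0]) * np.cos(4 * X[:, 1]) + 0.05 * rng.standard_normal(300)
log_prior = lambda th: -0.5 * ((th[0] - np.log(0.2)) / 0.3) ** 2  # Gaussian on log rho
K = sparse_cov(X, rho=0.2)
print(f"nonzero fraction of K: {K.nnz / 300**2:.3f}")
print("objective:", neg_log_posterior(np.log([0.2, 1.0, 0.1]), X, y, log_prior))
```

The point of the penalty term is that the same objective that rewards data fit also rewards a small support radius, so minimizing it trades accuracy against sparsity explicitly rather than letting the scale parameter drift toward a dense covariance.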




Author information

Correspondence to Mohsen Esmaeilbeigi.


Cite this article

Esmaeilbeigi, M., Chatrabgoun, O., Daneshkhah, A. et al. On the impact of prior distributions on efficiency of sparse Gaussian process regression. Engineering with Computers 39, 2905–2925 (2023). https://doi.org/10.1007/s00366-022-01686-7
